Authors
Gelbukh Alexander
Title Comparison of Text Classification Methods Using Deep Learning Neural Networks
Type Conference
Sub-type Proceedings
Description 20th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2019
Abstract In this article, we examine the text classification task using various neural networks. Even a small number of previously classified texts can change the accuracy of the studied text classifiers. This is important in many text classification applications because large amounts of uncategorized data are readily available, whereas obtaining annotated text is challenging. The article also demonstrates that the Convolutional Neural Network (CNN) does not demand semantic or syntactic knowledge and performs better at the word level. Second, a Recurrent Neural Network (RNN) model can effectively classify sequence-type text data; the RNN outperforms the other neural networks on the sequence text classification task. We used two corpora of different types from separate sources (IMDB and a self-created bloggers corpus). The results of our experiments provide evidence that vector representations of the text can improve the task score. © 2023, Springer Nature Switzerland AG.
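As an illustration of the two model families the abstract contrasts, the following is a minimal sketch of word-level CNN and RNN text classifiers, assuming a Keras/TensorFlow setup and a binary task such as IMDB sentiment; it is not the authors' implementation, and the vocabulary size, sequence length, and embedding dimension are placeholder values.

import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 200        # assumed padded sequence length
EMBED_DIM = 128      # assumed word-vector (embedding) dimension

def build_cnn():
    # Convolutional classifier: learns n-gram-like filters over word vectors,
    # with no explicit semantic or syntactic features.
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.Conv1D(128, 5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

def build_rnn():
    # Recurrent classifier: an LSTM reads the word-vector sequence in order,
    # which suits sequence-type text data.
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.LSTM(128),
        layers.Dense(1, activation="sigmoid"),
    ])

if __name__ == "__main__":
    # Random integer-encoded "texts" stand in for a real corpus.
    x = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
    y = np.random.randint(0, 2, size=(32,))
    for model in (build_cnn(), build_rnn()):
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(x, y, epochs=1, batch_size=8, verbose=0)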
Notes DOI 10.1007/978-3-031-24340-0_33 Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v. 13452
Place La Rochelle
Country France
Pages 438-450
Vol. / Chap. 13452 LNCS
Start 2019-04-06
End 2019-04-13
ISBN/ISSN