Authors
Lukashchuk Mykola
Gelbukh Alexander
Sidorov Grigori
Title Prior latent distribution comparison for the RNN Variational Autoencoder in low-resource language modeling
Type Journal
Subtype JCR
Description Journal of Intelligent & Fuzzy Systems
Abstract Probabilistic Bayesian methods are widely used in machine learning. The Variational Autoencoder (VAE) is a common architecture for solving the language modeling task in a self-supervised way. A VAE is built around latent variables: random variables whose distribution is fit to the data. To date, in the majority of cases, the latent variables are assumed to be normally distributed. The normal distribution is well known and easy to include in any pipeline; moreover, it is a good choice when the Central Limit Theorem (CLT) holds, which makes it effective when working with i.i.d. (independent and identically distributed) random variables. However, the conditions of the CLT are not easy to verify in Natural Language Processing, so the choice of distribution family is unclear in this domain. This paper studies the impact of the choice of continuous prior distribution on the low-resource language modeling task with a VAE. The experiments show a statistically significant difference between different priors in the encoder-decoder architecture. We show that the distribution-family hyperparameter is important in low-resource language modeling and should be considered when training the model. © 2022 - IOS Press. All rights reserved.
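As a concrete illustration of the idea in the abstract, the sketch below shows how the latent prior of a VAE can be exposed as a swappable hyperparameter. This is not the authors' code: the layer structure, dimensions, and the Normal/Laplace pairing are illustrative assumptions, using only standard torch.distributions machinery.

    # Minimal sketch: a VAE latent layer whose prior family is a hyperparameter.
    import torch
    import torch.nn as nn
    from torch.distributions import Normal, Laplace, kl_divergence

    class LatentLayer(nn.Module):
        """Maps an encoder state to a latent sample and the KL term of the ELBO."""
        def __init__(self, hidden_dim: int, latent_dim: int, prior_family=Normal):
            super().__init__()
            self.loc = nn.Linear(hidden_dim, latent_dim)
            self.log_scale = nn.Linear(hidden_dim, latent_dim)
            self.family = prior_family
            # The prior family is the hyperparameter under study (assumed
            # standard location 0, scale 1 for both families here).
            self.prior = prior_family(torch.zeros(latent_dim), torch.ones(latent_dim))

        def forward(self, h):
            # Approximate posterior from the same family as the prior.
            posterior = self.family(self.loc(h), self.log_scale(h).exp())
            z = posterior.rsample()  # reparameterized sample, keeps gradients
            # Analytic KL(q || p); registered in torch for Normal and Laplace.
            kl = kl_divergence(posterior, self.prior).sum(-1)
            return z, kl

    # Usage: swap the prior family and compare the resulting KL / ELBO terms.
    h = torch.randn(8, 256)  # stand-in for RNN encoder states
    for family in (Normal, Laplace):
        z, kl = LatentLayer(256, 32, family)(h)
        print(family.__name__, z.shape, kl.mean().item())

In a full experiment along the paper's lines, this layer would sit between an RNN encoder and decoder, and the distribution family would be varied while measuring held-out language modeling performance.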
Notes DOI 10.3233/JIFS-219243
Place Amsterdam
Country Netherlands
Pages 4541-4549
Vol. / Issue v. 42, no. 5
Start 2022-03-31
End
ISBN/ISSN