Authors
Tonja Atnafu Lambebo
Kolesnikova Olga
Sidorov Grigori
Gelbukh Alexander
Title Parallel Corpus for Indigenous Language Translation: Spanish-Mazatec and Spanish-Mixtec
Type Conference
Subtype Proceedings paper
Description 3rd Workshop on Natural Language Processing for Indigenous Languages of the Americas, AmericasNLP 2023, co-located with ACL 2023
Abstract In this paper, we present a parallel Spanish-Mazatec and Spanish-Mixtec corpus for machine translation (MT) tasks, where Mazatec and Mixtec are two indigenous Mexican languages. We evaluated the usability of the collected corpus using three different approaches: transformer, transfer learning, and fine-tuning pre-trained multilingual MT models. Fine-tuning the Facebook M2M100-48 model outperformed the other approaches, with BLEU scores of 12.09 and 22.25 for Mazatec-Spanish and Spanish-Mazatec translations, respectively, and 16.75 and 22.15 for Mixtec-Spanish and Spanish-Mixtec translations, respectively. The findings show that the dataset size (9,799 sentences in Mazatec and 13,235 sentences in Mixtec) affects translation performance and that indigenous languages perform better when used as target languages. The findings emphasize the importance of creating parallel corpora for indigenous languages and of fine-tuning models for low-resource translation tasks. Future research will investigate zero-shot and few-shot learning approaches to further improve translation performance in low-resource settings. The dataset and scripts are available at https://github.com/atnafuatx/Machine-Translation-Resources. © 2023 Association for Computational Linguistics.
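The abstract reports translation quality as BLEU scores. As a minimal sketch of what that metric measures, the pure-Python function below computes sentence-level BLEU (clipped n-gram precisions up to 4-grams, geometric mean, brevity penalty). This is an illustrative simplification, not the paper's actual evaluation pipeline, which would typically use a standard toolkit such as sacreBLEU with corpus-level aggregation and smoothing.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: uniform weights over 1..max_n-gram
    clipped precisions, multiplied by a brevity penalty.
    `candidate` and `reference` are lists of tokens."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch: any zero precision -> 0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: penalize candidates shorter than the reference.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_avg)
```

For example, a candidate identical to its reference scores 1.0, and a candidate sharing no words with the reference scores 0.0; reported scores like 22.25 correspond to this value scaled by 100.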
Notes
Location Toronto
Country Canada
Pages 94-102
Vol. / Chap.
Start date 2023-07-14
End date
ISBN/ISSN 9781959429913