A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Exploiting Multilingualism through Multistage Fine-Tuning for Low-Resource Neural Machine Translation
2019
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
This paper highlights the impressive utility of multi-parallel corpora for transfer learning in a one-to-many low-resource neural machine translation (NMT) setting. We report on a systematic comparison of multistage fine-tuning configurations, consisting of (1) pre-training on an external large (209k-440k) parallel corpus for English and a helping target language, (2) mixed pre-training or fine-tuning on a mixture of the external and low-resource (18k) target parallel corpora, and (3) pure fine-tuning on the target parallel corpora.
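The abstract describes a three-stage training schedule. The following minimal Python sketch illustrates only that schedule, under stated assumptions: train() is a stand-in for an actual NMT training run (no real model is built), and the helper/target language names and epoch counts are hypothetical, not taken from the paper.

```python
# Minimal sketch of a multistage fine-tuning schedule as outlined in the
# abstract. Corpus sizes and stage order follow the abstract; everything
# else (train() stub, language names, epoch counts) is an assumption.

import random

def train(model_state, corpus, epochs, tag):
    """Stand-in for one NMT training run resumed from model_state."""
    for _ in range(epochs):
        random.shuffle(corpus)
        # ... forward/backward passes over (source, target) pairs ...
    print(f"{tag}: trained on {len(corpus)} sentence pairs")
    return model_state  # would return updated parameters in a real system

# Hypothetical data: large English-helper corpus and small target corpus.
external = [("en sentence", "helper sentence")] * 209_000  # 209k-440k range
low_res  = [("en sentence", "target sentence")] * 18_000   # 18k target pairs

model = {}  # stand-in for NMT model parameters

# Stage 1: pre-train on the external English-helper corpus alone.
model = train(model, list(external), epochs=1, tag="stage 1 (pre-train)")

# Stage 2: mixed pre-training / fine-tuning on the union of the external
# and low-resource corpora, adapting to the target without forgetting.
model = train(model, external + low_res, epochs=1, tag="stage 2 (mixed)")

# Stage 3: pure fine-tuning on the low-resource target corpus only.
model = train(model, list(low_res), epochs=1, tag="stage 3 (pure)")
```

The staging reflects the comparison the paper reports: each later stage starts from the parameters of the previous one, so the external corpus is used for initialization and regularization rather than being discarded.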
doi:10.18653/v1/d19-1146
dblp:conf/emnlp/DabreFC19
fatcat:xwdq2gdw7fhyfm4ze7medtzftm