Revisiting Low-Resource Neural Machine Translation: A Case Study
2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German-English with different amounts of IWSLT14 training data and Korean-English, we achieve competitive BLEU scores with only 100 thousand words of training data.
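One of the adaptations the paper highlights for low-resource settings is a much smaller subword (BPE) vocabulary than the 30k+ merge operations common in high-resource setups. As a hedged illustration only, the Python sketch below uses the subword-nmt library (by the same first author) to learn and apply a small BPE segmentation; the merge count and file names are illustrative assumptions, not values taken from this record.

    # Sketch: learning a small BPE vocabulary for a low-resource setting,
    # using the subword-nmt library (pip install subword-nmt).
    # NUM_MERGES and the file paths below are illustrative assumptions.
    import codecs
    from subword_nmt.learn_bpe import learn_bpe
    from subword_nmt.apply_bpe import BPE

    TRAIN_FILE = "train.de"   # hypothetical source-side training text
    CODES_FILE = "bpe.codes"  # where the learned merge operations are written
    NUM_MERGES = 2000         # small vocabulary, suited to little training data

    # Learn BPE merge operations from the training corpus.
    with codecs.open(TRAIN_FILE, encoding="utf-8") as infile, \
         codecs.open(CODES_FILE, "w", encoding="utf-8") as outfile:
        learn_bpe(infile, outfile, NUM_MERGES)

    # Apply the learned segmentation to a sentence.
    with codecs.open(CODES_FILE, encoding="utf-8") as codes:
        bpe = BPE(codes)
    print(bpe.process_line("ein Beispielsatz für die Segmentierung"))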
doi:10.18653/v1/p19-1021
dblp:conf/acl/SennrichZ19
fatcat:euk3jr6jrff6pgqxm2lbbyy7nm