AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
Reproducible benchmarks are crucial in driving progress in machine translation research. However, existing machine translation benchmarks have been mostly limited to high-resource or well-represented languages. Despite increasing interest in low-resource machine translation, there are no standardized reproducible benchmarks for many African languages, many of which are used by millions of speakers but have less digitized textual data. To tackle these challenges, we propose AFROMT, a
doi:10.18653/v1/2021.emnlp-main.99