Human or Neural Translation?

Shivendra Bhardwaj, David Alfonso Hermelo, Philippe Langlais, Gabriel Bernier-Colborne, Cyril Goutte, Michel Simard
Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020)
Deep neural models have tremendously improved machine translation. In this context, we investigate whether distinguishing machine from human translations is still feasible. We trained and applied 18 classifiers under two settings: a monolingual task, in which the classifier only looks at the (French) translation; and a bilingual task, in which the source text (in English) is also taken into consideration. We report on extensive experiments involving 4 neural MT systems (Google Translate, DeepL, as well as two systems we trained) and varying the domain of texts. We show that the bilingual task is the easiest one and that transfer-based deep-learning classifiers perform best, with mean accuracies around 85% in-domain and 75% out-of-domain.
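As a rough illustration of the two settings described above, here is a minimal sketch in plain Python. This is not the authors' method (their best classifiers are transfer-based deep models); the character n-gram features and nearest-centroid model below are stand-ins chosen for brevity. The point is the feature construction: the monolingual setting sees only the translation, while the bilingual setting additionally folds in source-side features under a separate namespace.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram counts, a common lightweight stylometric feature."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def features(translation, source=None):
    """Monolingual setting: featurize the translation only.
    Bilingual setting: also add source-side n-grams, prefixed so the
    two vocabularies do not collide."""
    feats = char_ngrams(translation)
    if source is not None:  # bilingual task
        feats.update({"SRC:" + g: c for g, c in char_ngrams(source).items()})
    return feats

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[k] * b.get(k, 0) for k in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return num / (na * nb) if na and nb else 0.0

class CentroidClassifier:
    """Toy nearest-centroid classifier, a stand-in for the real models."""
    def fit(self, examples):
        self.centroids = {}
        for label, feats in examples:
            self.centroids.setdefault(label, Counter()).update(feats)
        return self

    def predict(self, feats):
        return max(self.centroids, key=lambda l: cosine(feats, self.centroids[l]))
```

A bilingual classifier would simply be trained and queried with `features(translation, source=source_text)` instead of `features(translation)`; everything downstream is unchanged, which is one way to see why the bilingual task can only add signal.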
doi:10.18653/v1/2020.coling-main.576