Learning to Rewrite for Non-Autoregressive Neural Machine Translation
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Non-autoregressive neural machine translation, which removes the dependence on previous target tokens from the decoder inputs, achieves impressive inference speedup but at the cost of inferior accuracy. Previous work employs iterative decoding, applying multiple refinement iterations to improve the translation. However, a serious drawback of these approaches is their weakness in identifying erroneous translation segments. In this paper, we propose an
doi:10.18653/v1/2021.emnlp-main.265