Classical Structured Prediction Losses for Sequence to Sequence Learning

Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018
There has been much recent work on training neural attention models at the sequence level using either reinforcement learning-style methods or by optimizing the beam. In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well, slightly outperforming beam search optimization in a like-for-like setup. We also report new state-of-the-art results on both IWSLT'14 German-English translation and Gigaword abstractive summarization. On the large WMT'14 English-French task, sequence-level training achieves 41.5 BLEU, which is on par with the state of the art.
doi:10.18653/v1/n18-1033
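The abstract refers to classical sequence-level objectives without spelling one out. Below is a minimal PyTorch sketch of one representative loss from this family, expected risk over an n-best candidate list; the function name, tensor shapes, and the BLEU-based cost are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def expected_risk_loss(hyp_scores: torch.Tensor, hyp_costs: torch.Tensor) -> torch.Tensor:
    """Expected-risk loss over an n-best candidate list (hypothetical sketch).

    hyp_scores: (batch, n_best) model log-scores per candidate hypothesis,
        e.g. summed token log-probabilities from beam search.
    hyp_costs:  (batch, n_best) task cost per hypothesis,
        e.g. 1 - sentence-level BLEU against the reference.
    """
    # Renormalize scores over the candidate set only, giving a distribution
    # over the n-best list rather than over all possible sequences.
    probs = F.softmax(hyp_scores, dim=-1)
    # Minimize the expected task cost under that distribution.
    return (probs * hyp_costs).sum(dim=-1).mean()

# Illustrative usage with random stand-ins for real beam-search output.
scores = torch.randn(2, 5, requires_grad=True)
costs = torch.rand(2, 5)
expected_risk_loss(scores, costs).backward()
```

Restricting the normalization to the n-best list keeps the loss tractable: the gradient pushes probability mass toward low-cost hypotheses among the candidates the model can already produce.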