Efficient handling of N-gram language models for statistical machine translation
2007
Proceedings of the Second Workshop on Statistical Machine Translation - StatMT '07
Statistical machine translation, as well as other areas of human language processing, has recently pushed toward the use of large-scale n-gram language models. This paper presents efficient algorithmic and architectural solutions which have been tested within the Moses decoder, an open source toolkit for statistical machine translation. Experiments are reported with a high-performing baseline, trained on the Chinese-English NIST 2006 Evaluation task and running on a standard Linux 64-bit PC.
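As context for the abstract, the sketch below illustrates the core operation such systems perform at high volume: scoring a word given its history with a backoff n-gram language model. This is a minimal illustrative example, not the paper's implementation; the table values are made-up toy numbers and the function name is hypothetical.

```python
# Toy backoff n-gram table: (log10 probability, log10 backoff weight)
# keyed by n-gram tuple. Values are invented for illustration only.
LM = {
    ("the",):       (-1.0, -0.5),
    ("cat",):       (-2.0, -0.4),
    ("the", "cat"): (-0.7, None),
}

def logprob(history, word):
    """Score log10 P(word | history) with Katz-style backoff."""
    entry = LM.get(tuple(history) + (word,))
    if entry is not None:
        return entry[0]            # n-gram found: return its probability
    if not history:
        return -99.0               # unseen unigram: floor value
    # Back off: add the backoff weight of the history, then shorten it.
    bow = (LM.get(tuple(history)) or (0.0, 0.0))[1] or 0.0
    return bow + logprob(history[1:], word)

print(logprob(["the"], "cat"))     # direct bigram hit: -0.7
print(logprob(["cat"], "the"))     # backs off: -0.4 + (-1.0) = -1.4
```

A decoder issues millions of such queries per sentence, which is why the paper focuses on compact storage and fast lookup of the n-gram tables.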
doi:10.3115/1626355.1626367
fatcat:4t6rl5bxjrbevmx3ro4t4pgske