Efficient handling of N-gram language models for statistical machine translation

Marcello Federico, Mauro Cettolo
2007. Proceedings of the Second Workshop on Statistical Machine Translation (StatMT '07)
Statistical machine translation, as well as other areas of human language processing, has recently pushed toward the use of large-scale n-gram language models. This paper presents efficient algorithmic and architectural solutions which have been tested within the Moses decoder, an open-source toolkit for statistical machine translation. Experiments are reported with a high-performing baseline, trained on the Chinese-English NIST 2006 Evaluation task and running on a standard 64-bit Linux PC architecture. Comparative tests show that our representation halves the memory required by the SRI LM Toolkit, at the cost of 44% slower translation speed. However, as it can take advantage of memory mapping on disk, the proposed implementation seems to scale up much better to very large language models: decoding with a 289-million 5-gram language model runs in 2.1 GB of RAM.
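
The memory-mapping idea mentioned in the abstract can be illustrated with a minimal sketch: rather than loading a binarized language-model file fully into RAM, the file is mapped into the process address space so the kernel pages n-gram tables in on demand, keeping resident memory bounded by the decoder's working set. The file name and the interpretation of the mapped bytes below are hypothetical for illustration, not the paper's actual binary LM format.

// Minimal sketch of memory-mapping a binarized LM file (POSIX mmap).
// "lm.bin" and the byte layout are assumptions, not the actual format.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char* path = "lm.bin";              // hypothetical binarized LM file
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

    // Map the whole file read-only; pages are loaded lazily by the kernel,
    // so only the n-gram tables actually touched during decoding occupy RAM.
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // A real decoder would interpret `base` as packed n-gram tables;
    // here we only touch the first byte to show the mapping is usable.
    printf("mapped %lld bytes; first byte = 0x%02x\n",
           (long long)st.st_size, ((unsigned char*)base)[0]);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}

With this scheme, two decoder processes mapping the same read-only LM file can also share its physical pages, which is one reason memory mapping scales well to very large models.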
doi:10.3115/1626355.1626367