Context dependent language model adaptation

X. Liu, M. J. F. Gales, P. C. Woodland
Interspeech 2008
Language models (LMs) are often constructed by building multiple component LMs that are combined using interpolation weights. By tuning these interpolation weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. In this work, improved LM adaptation is achieved by introducing context dependent interpolation weights. An important part of this new approach is obtaining robust estimation. Two schemes for this are described. The first is based on MAP estimation, where either global interpolation weights are used as priors, or context dependent interpolation priors obtained from the training data. The second scheme uses class based contexts to determine the interpolation weights. Both schemes are evaluated using unsupervised LM adaptation on a Mandarin broadcast transcription task. Consistent gains in perplexity using context dependent, rather than global, weights are observed as well as reductions in character error rate.
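The core ideas in the abstract, linear interpolation of component LMs and MAP smoothing of context dependent weights toward a global prior, can be illustrated with a small sketch. This is not the paper's estimation procedure (which the abstract only outlines); the count-based MAP update, the prior strength `tau`, and the toy numbers below are all illustrative assumptions.

```python
# Sketch of context dependent LM interpolation with MAP-smoothed weights.
# Hypothetical toy data: the paper's actual weight estimation is more
# involved; this only shows the shape of the computation.

def map_weights(context_counts, global_weights, tau=2.0):
    """MAP estimate of per-context interpolation weights, shrinking
    toward the global weights, which act as the prior (strength tau)."""
    total = sum(context_counts)
    return [
        (c + tau * g) / (total + tau)
        for c, g in zip(context_counts, global_weights)
    ]

def interpolate(component_probs, weights):
    """Linear interpolation: p(w|h) = sum_i lambda_i(h) * p_i(w|h)."""
    return sum(l * p for l, p in zip(weights, component_probs))

# Two component LMs; assumed global weights tuned on held-out data.
global_w = [0.6, 0.4]

# Sparse context: few counts, so weights stay close to the global prior.
w_sparse = map_weights([1.0, 0.0], global_w, tau=2.0)

# Well-observed context: counts dominate the prior.
w_rich = map_weights([80.0, 20.0], global_w, tau=2.0)

# Interpolated word probability under the sparse-context weights.
p = interpolate([0.01, 0.03], w_sparse)
```

The MAP update keeps the weights for each context summing to one, and the prior strength `tau` controls how quickly sparse contexts fall back to the global weights.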
doi:10.21437/interspeech.2008-254