
Variable-length acoustic units inference for text-to-speech synthesis

Olivier Boeffard
2001 7th European Conference on Speech Communication and Technology (Eurospeech 2001)   unpublished
This technique is widely used to infer hidden Markov random processes.  ...  Indeed, let us postulate a large enough number of observed sequences and an ergodic source process. The entropy calculated through the model is denoted Ĥ = −(1/|O|) log₂ P(O).  ... 
doi:10.21437/eurospeech.2001-261 fatcat:jsb4buacobghngak2pzopokaxe
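The per-symbol entropy estimate in the snippet above, Ĥ = −(1/|O|) log₂ P(O), can be sketched for a toy first-order Markov source. All probabilities below are hypothetical; the paper's actual models are inferred from data, not hand-specified:

```python
import math

# Toy first-order Markov source over {'a', 'b'} (hypothetical parameters).
initial = {'a': 0.6, 'b': 0.4}
trans = {
    'a': {'a': 0.7, 'b': 0.3},
    'b': {'a': 0.4, 'b': 0.6},
}

def log2_prob(seq):
    """log2 P(O) of an observed sequence O under the Markov model."""
    lp = math.log2(initial[seq[0]])
    for prev, cur in zip(seq, seq[1:]):
        lp += math.log2(trans[prev][cur])
    return lp

def entropy_estimate(seq):
    """Ĥ = -(1/|O|) * log2 P(O): per-symbol cross-entropy in bits."""
    return -log2_prob(seq) / len(seq)
```

By ergodicity, this single-sequence estimate converges to the model's cross-entropy as the observed sequence grows.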

Language modeling by variable length sequences: theoretical formulation and evaluation of multigrams

S. Deligne, F. Bimbot
1995 International Conference on Acoustics, Speech, and Signal Processing  
We show that estimates of the model parameters can be computed through an iterative Expectation-Maximization algorithm, and we describe a forward-backward procedure for its implementation.  ...  We report the results of a systematic evaluation of multigrams for language modeling on the ATIS database. The objective performance measure is the test set perplexity.  ...  To a certain extent, an n-multigram model can be thought of as an n-state Ergodic Hidden Markov Model (EHMM) with state i emitting a sequence of length i, and all transition probabilities being equal.  ... 
doi:10.1109/icassp.1995.479391 dblp:conf/icassp/DeligneB95 fatcat:opbdpt4etngstga4qnfr42ccqq
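The forward pass the snippet refers to sums the probability of a string over all segmentations into variable-length units, which is the E-step quantity of the multigram EM procedure. A minimal sketch for a 2-multigram model, with a hypothetical hand-set unit dictionary standing in for EM-estimated parameters:

```python
# Hypothetical unit probabilities for a 2-multigram model over characters.
unit_prob = {'a': 0.3, 'b': 0.2, 'ab': 0.35, 'ba': 0.15}
MAX_LEN = 2  # n in an n-multigram model: units are 1 to n symbols long

def forward_likelihood(seq):
    """P(seq) summed over all segmentations into units of length <= MAX_LEN.

    alpha[t] is the total probability of the prefix seq[:t] over all of
    its segmentations; each step extends a prefix by one more unit.
    """
    alpha = [0.0] * (len(seq) + 1)
    alpha[0] = 1.0  # empty prefix
    for t in range(1, len(seq) + 1):
        for length in range(1, min(MAX_LEN, t) + 1):
            unit = seq[t - length:t]
            alpha[t] += alpha[t - length] * unit_prob.get(unit, 0.0)
    return alpha[len(seq)]
```

For "aba" this sums three segmentations (a|b|a, ab|a, a|ba), matching the paper's view of the model as an ergodic HMM whose state i emits a unit of length i. The full EM algorithm would pair this with a backward pass to re-estimate `unit_prob`.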

How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction

D. Anthony Bau, Jacob Andreas
2021 Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing   unpublished
Ergodic hidden Markov models and polygrams for language modeling. In Proceedings of ICASSP'94.  ...  Alec Radford, Jeffrey Wu, Rewon Child, David Luan  ...  can be reliably estimated (e.g. just the final word; Katz 1987), and hidden Markov models explicitly integrate  ...  schemes in controlling out-of-distribution behavior.  ... 
doi:10.18653/v1/2021.emnlp-main.448 fatcat:5ecowb2cvrfvfij2ijvw72wev4