Viseme-dependent weight optimization for CHMM-based audio-visual speech recognition

Alexey Karpov, Andrey Ronzhin, Konstantin Markov, Miloš Železný
Interspeech 2010
The aim of the present study is to investigate some key challenges of audio-visual speech recognition technology, such as asynchrony modeling of multimodal speech, estimation of auditory and visual speech significance, and stream weight optimization. Our research shows that the use of viseme-dependent significance weights improves the performance of a state-asynchronous CHMM-based speech recognizer. In addition, for a state-synchronous MSHMM-based recognizer, fewer errors can be obtained using stationary time delays of visual data with respect to the corresponding audio signal. Evaluation experiments showed that individual audio-visual stream weights for each viseme-phoneme pair lead to a relative WER reduction of 20%.
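To illustrate the idea of viseme-dependent stream weighting, the sketch below shows how per-class weights could be applied when fusing audio and visual state log-likelihoods in a multi-stream HMM. This is a minimal illustration, not the authors' implementation: the viseme classes, weight values, and the `fused_log_likelihood` helper are hypothetical placeholders, and only the standard weighted log-linear combination with weights summing to one is taken as given.

```python
# Minimal sketch of viseme-dependent stream-weight fusion for one HMM state.
# Assumes per-frame audio and visual log-likelihoods are already computed and
# that each state maps to a viseme-phoneme class with its own audio weight.

# Hypothetical per-class audio stream weights; the visual weight is 1 - lambda_a.
VISEME_AUDIO_WEIGHT = {
    "bilabial": 0.55,    # lip articulation is clearly visible -> larger visual share
    "velar": 0.80,       # tongue-back articulation is barely visible
    "open_vowel": 0.65,
}

def fused_log_likelihood(log_b_audio: float,
                         log_b_visual: float,
                         viseme_class: str) -> float:
    """Combine stream log-likelihoods with a viseme-dependent weight:
    lambda_a * log b^A + (1 - lambda_a) * log b^V."""
    lam = VISEME_AUDio_WEIGHT[viseme_class] if False else VISEME_AUDIO_WEIGHT[viseme_class]
    return lam * log_b_audio + (1.0 - lam) * log_b_visual

if __name__ == "__main__":
    # Example frame: noisy audio (low likelihood), informative visual stream.
    print(fused_log_likelihood(-12.3, -4.1, "bilabial"))
```

In practice such weights would be tuned per viseme-phoneme pair on development data (e.g., by grid search over the simplex), which is the kind of optimization the abstract reports as yielding the 20% relative WER reduction.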
doi:10.21437/interspeech.2010-710