A copy of this work was preserved in the Wayback Machine from a 2011 capture of the public web.
File type: application/pdf
Scalable HMM based inference engine in large vocabulary continuous speech recognition
2009 IEEE International Conference on Multimedia and Expo
Parallel scalability allows an application to efficiently utilize an increasing number of processing elements. In this paper we explore a design space for application scalability for an inference engine in large vocabulary continuous speech recognition (LVCSR). Our implementation of the inference engine involves a parallel graph traversal through an irregular graph-based knowledge network with millions of states and arcs. The challenge is not only to define a software architecture that exposes […]
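The abstract describes a frame-synchronous traversal of a large state graph. As a rough illustration only (the paper's actual engine is not shown here, and all names below are hypothetical), a minimal sequential sketch of this kind of traversal is a Viterbi-style token pass that keeps an active set of states per frame and prunes it with a beam:

```python
from collections import defaultdict

def viterbi_traverse(arcs, start, log_obs, n_frames, beam=10.0):
    """Frame-synchronous traversal of a state graph (illustrative sketch).

    arcs: dict mapping state -> list of (next_state, log_transition_prob)
    log_obs: callable (frame, state) -> log observation likelihood
    Keeps an active set of states per frame, pruned within `beam` of the best.
    """
    active = {start: 0.0}  # state -> best log score so far
    for t in range(n_frames):
        nxt = defaultdict(lambda: float("-inf"))
        # Expand every arc leaving an active state (the step a parallel
        # engine would distribute across processing elements).
        for state, score in active.items():
            for succ, log_p in arcs.get(state, []):
                cand = score + log_p + log_obs(t, succ)
                if cand > nxt[succ]:
                    nxt[succ] = cand
        if not nxt:
            return {}
        # Beam pruning: drop states far below the current best score.
        best = max(nxt.values())
        active = {s: v for s, v in nxt.items() if v >= best - beam}
    return active

# Tiny hypothetical graph: state 0 feeds a self-looping state 1.
arcs = {0: [(1, -0.1)], 1: [(1, -0.1)]}
final = viterbi_traverse(arcs, 0, lambda t, s: 0.0, n_frames=3, beam=5.0)
```

In a real LVCSR network the active set is irregular and data-dependent, which is exactly what makes the parallel version a scheduling and load-balancing challenge.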
doi:10.1109/icme.2009.5202871
dblp:conf/icmcs/ChongYYGHSK09
fatcat:f7xpdimcwbam3nl2atjdfktbd4