Context Dependent RNNLM for Automatic Transcription of Conversations
Interspeech 2020
Conversational speech, while unstructured at the utterance level, typically follows a macro topic that provides a larger context spanning multiple utterances. Current language models in speech recognition systems based on recurrent neural networks (RNNLMs) rely mainly on the local context and exclude this larger context. In order to model the long-term dependencies of words across multiple sentences, we propose a novel architecture in which the words from prior utterances are converted to an …
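The abstract (truncated above before the architecture details) describes conditioning an RNNLM on the words of prior utterances. As a rough illustration only, the sketch below shows one way such conditioning can be wired up in plain PyTorch: the class name ContextDependentRNNLM, the mean-pooled context vector, and all layer sizes are assumptions of this sketch, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ContextDependentRNNLM(nn.Module):
    """Illustrative sketch: an RNN language model whose input at each
    step is the current word embedding concatenated with a fixed-size
    summary of the words from preceding utterances. The mean-pooling
    summary and layer sizes are assumptions, not the paper's design."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, ctx_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Project the pooled prior-utterance embeddings to a context vector.
        self.ctx_proj = nn.Linear(embed_dim, ctx_dim)
        # LSTM consumes [word embedding ; context vector] at every step.
        self.rnn = nn.LSTM(embed_dim + ctx_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, cur_ids, prior_ids):
        # cur_ids:   (batch, T)        word ids of the current utterance
        # prior_ids: (batch, T_prior)  word ids of the preceding utterances
        ctx = self.ctx_proj(self.embed(prior_ids).mean(dim=1))  # (batch, ctx_dim)
        x = self.embed(cur_ids)                                 # (batch, T, embed_dim)
        ctx_rep = ctx.unsqueeze(1).expand(-1, x.size(1), -1)    # repeat over time
        h, _ = self.rnn(torch.cat([x, ctx_rep], dim=-1))
        return self.out(h)                                      # next-word logits

# Toy usage: score a current utterance given the prior utterances' words.
model = ContextDependentRNNLM(vocab_size=1000)
cur = torch.randint(0, 1000, (2, 12))
prior = torch.randint(0, 1000, (2, 30))
print(model(cur, prior).shape)  # torch.Size([2, 12, 1000])
```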
doi:10.21437/interspeech.2020-1813
dblp:conf/interspeech/ChetupalliG20
fatcat:exd3xat3xbhjzi6of6y4ipgqna