This paper investigates two end-to-end approaches for identifying spoken language from webcast sources. Long short-term memory (LSTM) and self-attention architectures are adopted and compared against a deep convolutional network baseline model. These methods focus on the performance of spoken language identification (LID) on variable-length utterances. The dataset used for experimental evaluation contains data in five languages collected from webcasts (Webcast-5) and ten Chinese …

doi:10.12783/dtcse/iteee2019/28737
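The abstract highlights LID performance on variable-length utterances. One common ingredient of such pipelines, regardless of whether the encoder is an LSTM, self-attention, or convolutional model, is padding each batch of frame-level feature sequences to a common length with a validity mask, then pooling only over the unpadded frames. The sketch below is illustrative only; the function names and the masked mean-pooling choice are assumptions, not the paper's implementation.

```python
def pad_and_mask(batch, feat_dim):
    # batch: list of utterances; each utterance is a list of frame
    # feature vectors (lists of floats) of possibly different lengths.
    # Illustrative helper, not from the paper.
    max_len = max(len(u) for u in batch)
    padded, mask = [], []
    for u in batch:
        pad = [[0.0] * feat_dim] * (max_len - len(u))
        padded.append(u + pad)                       # zero-pad to max_len
        mask.append([1.0] * len(u) + [0.0] * len(pad))  # 1 = real frame
    return padded, mask

def masked_mean_pool(padded, mask):
    # Average frame vectors over valid (unmasked) frames only, so the
    # zero padding does not dilute the utterance-level embedding.
    pooled = []
    for frames, m in zip(padded, mask):
        n_valid = sum(m)
        vec = [sum(f[d] * mi for f, mi in zip(frames, m)) / n_valid
               for d in range(len(frames[0]))]
        pooled.append(vec)
    return pooled
```

For example, pooling a two-frame utterance and a one-frame utterance in the same batch yields each utterance's true per-dimension mean, unaffected by the padding.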