A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2017. The archived file type is application/pdf.
Improving deep neural networks for LVCSR using dropout and shrinking structure
2014
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Recently, the hybrid deep neural networks and hidden Markov models (DNN/HMMs) have achieved dramatic gains over the conventional GMM/HMMs method on various large vocabulary continuous speech recognition (LVCSR) tasks. In this paper, we propose two new methods to further improve the hybrid DNN/HMMs model: i) use dropout as pre-conditioner (DAP) to initialize DNN prior to back-propagation (BP) for better recognition accuracy; ii) employ a shrinking DNN structure (sDNN) with hidden layers …
doi:10.1109/icassp.2014.6854927
dblp:conf/icassp/ZhangBZ0D14
fatcat:ef3zuavnnbezrds7r6r3ppgyoe
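The abstract describes two ideas: dropout used as a pre-conditioner (DAP) to initialize the DNN before standard back-propagation, and a shrinking DNN (sDNN) whose hidden layers decrease in size from bottom to top. The following is a minimal sketch of those two ideas, not the authors' implementation; it assumes PyTorch, sigmoid hidden units, and illustrative layer sizes, feature dimensions, and target counts that are not taken from the paper.

# Sketch of "dropout as pre-conditioner" (DAP) plus a shrinking DNN (sDNN).
# Not the paper's code: PyTorch, sigmoid units, and all sizes are assumptions.
import torch
import torch.nn as nn


def build_sdnn(input_dim, hidden_sizes, num_targets, dropout_p):
    """Feed-forward DNN whose hidden layers shrink from bottom (input) to top."""
    layers, prev = [], input_dim
    for h in hidden_sizes:
        layers += [nn.Linear(prev, h), nn.Sigmoid(), nn.Dropout(dropout_p)]
        prev = h
    layers.append(nn.Linear(prev, num_targets))   # HMM-state posterior logits
    return nn.Sequential(*layers)


def run_bp(model, feats, labels, epochs, lr=0.1):
    """Plain back-propagation with SGD and cross-entropy."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(feats), labels).backward()
        opt.step()


# Toy stand-ins for spliced acoustic feature vectors and tied-state targets.
feats = torch.randn(256, 440)               # 440-dim inputs (assumed)
labels = torch.randint(0, 1000, (256,))     # 1000 HMM-state targets (assumed)

# sDNN: hidden layers decreasing in size from bottom to top (assumed sizes).
model = build_sdnn(440, [2048, 1536, 1024, 512], 1000, dropout_p=0.2)

# Phase 1 (DAP): a short run of BP with dropout active, used only to
# pre-condition / initialize the weights.
run_bp(model, feats, labels, epochs=5)

# Phase 2: switch dropout off and continue ordinary BP from that
# initialization.
for m in model.modules():
    if isinstance(m, nn.Dropout):
        m.p = 0.0
run_bp(model, feats, labels, epochs=20)

Keeping a single network and zeroing the dropout probability for the second phase avoids remapping weights between two differently structured models; the paper's actual training schedule, layer sizes, and unit types may differ.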