Improving deep neural networks for LVCSR using dropout and shrinking structure

Shiliang Zhang, Yebo Bao, Pan Zhou, Hui Jiang, Lirong Dai
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Recently, hybrid deep neural network / hidden Markov models (DNN/HMMs) have achieved dramatic gains over the conventional GMM/HMM approach on various large vocabulary continuous speech recognition (LVCSR) tasks. In this paper, we propose two new methods to further improve the hybrid DNN/HMM model: i) using dropout as a pre-conditioner (DAP) to initialize the DNN prior to back-propagation (BP), for better recognition accuracy; ii) employing a shrinking DNN structure (sDNN), with hidden layers decreasing in size from bottom to top, to reduce model size and expedite computation. The proposed DAP method is evaluated on a 70-hour Mandarin transcription (PSC) task and the 309-hour Switchboard (SWB) task. Compared with a traditional greedy layer-wise pre-trained DNN, it achieves about 10% and 6.8% relative recognition error reduction on the PSC and SWB tasks, respectively. In addition, we also evaluate sDNN, as well as its combination with DAP, on the SWB task. Experimental results show that these methods can reduce the model size to 45% of the original and accelerate training and test time by 55%, without losing recognition accuracy.

Index Terms: dropout, dropout as pre-conditioner (DAP), shrinking hidden layers, deep neural networks, LVCSR, DNN-HMM
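As a rough illustration of the two ideas described in the abstract, the PyTorch sketch below builds a shrinking DNN (sDNN) whose hidden layers decrease in size from bottom to top, and applies dropout as a pre-conditioner (DAP): a first back-propagation pass with dropout enabled, whose weights then initialize a second, dropout-free pass. This is a minimal sketch, not the authors' implementation; the layer widths, dropout rate, optimizer settings, and synthetic data are illustrative assumptions rather than values from the paper.

import torch
import torch.nn as nn

def make_sdnn(input_dim, hidden_dims, num_targets, p_dropout=0.0):
    """Feed-forward DNN; a shrinking list such as [2048, 1536, 1024, 768]
    gives hidden layers that decrease in size from bottom to top (sDNN)."""
    layers, prev = [], input_dim
    for width in hidden_dims:
        layers += [nn.Linear(prev, width), nn.Sigmoid()]
        if p_dropout > 0:
            layers.append(nn.Dropout(p_dropout))
        prev = width
    layers.append(nn.Linear(prev, num_targets))  # logits over senone targets
    return nn.Sequential(*layers)

def run_bp(model, loader, epochs, lr=0.1):
    """Plain mini-batch back-propagation with cross-entropy loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for features, senone_ids in loader:
            opt.zero_grad()
            loss_fn(model(features), senone_ids).backward()
            opt.step()

# Hypothetical dimensions: e.g. 440 = 11 spliced frames of 40-d features,
# 9000 context-dependent senone targets. Synthetic data keeps the sketch
# self-contained; a real setup would read aligned acoustic features instead.
X = torch.randn(1024, 440)
y = torch.randint(0, 9000, (1024,))
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, y), batch_size=128, shuffle=True)

model = make_sdnn(input_dim=440, hidden_dims=[2048, 1536, 1024, 768],
                  num_targets=9000, p_dropout=0.2)

# Stage 1 (DAP): BP with dropout enabled, used only as a pre-conditioner.
run_bp(model, train_loader, epochs=5)

# Stage 2: switch dropout off and continue standard BP from the
# dropout-initialized weights.
for m in model.modules():
    if isinstance(m, nn.Dropout):
        m.p = 0.0
run_bp(model, train_loader, epochs=10)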
doi:10.1109/icassp.2014.6854927 dblp:conf/icassp/ZhangBZ0D14