Dynamic Sparsity Neural Networks for Automatic Speech Recognition

Zhaofeng Wu, Ding Zhao, Qiao Liang, Jiahui Yu, Anmol Gulati, Ruoming Pang
2021, arXiv pre-print
In automatic speech recognition (ASR), model pruning is a widely adopted technique that reduces model size and latency, enabling neural network models to be deployed on edge devices with resource constraints. However, multiple models with different sparsity levels usually need to be separately trained and deployed to heterogeneous target hardware with different resource specifications and for applications with various latency requirements. In this paper, we present Dynamic Sparsity Neural Networks (DSNN) that, once trained, can instantly switch to any predefined sparsity configuration at run-time. We demonstrate the effectiveness and flexibility of DSNN in experiments on internal production datasets with Google Voice Search data, and show that the performance of a DSNN model is on par with that of individually trained single-sparsity networks. A trained DSNN model can therefore greatly ease the training process and simplify deployment in diverse scenarios with resource constraints.
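To make the "instantly switch at run-time" idea concrete, here is a minimal sketch of a layer that shares one weight matrix across several predefined sparsity levels, each realized as a precomputed magnitude-based binary mask. This is an illustration only, not the paper's implementation: the names `SwitchableSparseLayer` and `magnitude_mask`, the choice of magnitude pruning, and the specific sparsity levels are all assumptions for the example.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Binary mask that zeroes out the `sparsity` fraction of
    smallest-magnitude entries (simple magnitude pruning)."""
    k = int(round(sparsity * weights.size))
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

class SwitchableSparseLayer:
    """Hypothetical dense layer sketching DSNN-style switchable sparsity:
    one shared weight matrix, one precomputed mask per predefined
    sparsity level, switchable at run-time with no retraining."""

    def __init__(self, weights, sparsity_levels=(0.0, 0.5, 0.75, 0.9)):
        self.weights = weights
        self.masks = {s: magnitude_mask(weights, s) for s in sparsity_levels}
        self.active = 0.0  # start with the dense configuration

    def switch(self, sparsity):
        """Instantly activate one of the predefined sparsity levels."""
        if sparsity not in self.masks:
            raise ValueError(f"sparsity {sparsity} is not predefined")
        self.active = sparsity

    def forward(self, x):
        # Apply the active mask to the shared weights.
        return x @ (self.weights * self.masks[self.active])
```

At deployment, each target device would simply call `switch()` with the sparsity level matching its resource budget, while all configurations share the same underlying parameters.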
arXiv:2005.10627v3