A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020.
The file type is application/pdf.
Transfer Learning for Speech Recognition on a Budget
2017
Proceedings of the 2nd Workshop on Representation Learning for NLP
End-to-end training of automated speech recognition (ASR) systems requires massive data and compute resources. We explore transfer learning based on model adaptation as an approach for training ASR models under constrained GPU memory, throughput and training data. We conduct several systematic experiments adapting a Wav2Letter convolutional neural network originally trained for English ASR to the German language. We show that this technique allows faster training on consumer-grade resources.
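The technique the abstract describes, adapting a convolutional network pretrained on English ASR to German, can be sketched in a few lines. The PyTorch code below is a minimal illustration, not the authors' implementation: the simplified three-layer architecture, the checkpoint path, the label counts, and the choice to freeze the lower layers are all assumptions made for the example.

```python
# Minimal sketch of transfer learning by model adaptation for ASR.
# Assumptions (not from the paper): layer sizes, checkpoint path,
# grapheme counts, and which layers are frozen.
import torch
import torch.nn as nn

class Wav2LetterLike(nn.Module):
    """Simplified Wav2Letter-style stack of 1-D convolutions."""
    def __init__(self, n_features: int, n_labels: int):
        super().__init__()
        self.features = nn.Sequential(   # lower layers: acoustic feature extraction
            nn.Conv1d(n_features, 250, kernel_size=48, stride=2), nn.ReLU(),
            nn.Conv1d(250, 250, kernel_size=7), nn.ReLU(),
            nn.Conv1d(250, 2000, kernel_size=32), nn.ReLU(),
        )
        self.classifier = nn.Conv1d(2000, n_labels, kernel_size=1)  # per-frame label scores

    def forward(self, x):                          # x: (batch, n_features, time)
        return self.classifier(self.features(x))   # (batch, n_labels, time')

# 1. Start from a model trained on English (hypothetical checkpoint path).
model = Wav2LetterLike(n_features=40, n_labels=29)   # 29 English graphemes, assumed
model.load_state_dict(torch.load("wav2letter_english.pt"))

# 2. Swap the output layer for the German grapheme inventory
#    (umlauts and eszett change the label set) and reinitialize it.
model.classifier = nn.Conv1d(2000, 32, kernel_size=1)  # 32 German labels, assumed

# 3. Freeze the lower convolutional layers so only the adapted layer
#    trains, reducing GPU memory use and training time.
for p in model.features.parameters():
    p.requires_grad = False

# 4. Fine-tune on German speech with a CTC loss, the usual criterion
#    for end-to-end grapheme-based ASR.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
ctc_loss = nn.CTCLoss(blank=0)
```

Freezing versus fine-tuning all layers is the central trade-off in this kind of adaptation: frozen lower layers train faster and need less data, while unfreezing them can recover accuracy when the target language differs more from the source.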
doi:10.18653/v1/w17-2620
dblp:conf/rep4nlp/KunzeKKKJS17
fatcat:rmxbtr64evcq7jmzpnwnv5j7n4