A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2022.
The file type is application/pdf.
Cross-lingual Self-Supervised Speech Representations for Improved Dysarthric Speech Recognition
[article] · 2022 · arXiv pre-print
State-of-the-art automatic speech recognition (ASR) systems perform well on healthy speech, but performance on impaired speech remains poor. This study explores the usefulness of Wav2Vec self-supervised speech representations as features for training an ASR system for dysarthric speech. Dysarthric speech recognition is particularly difficult because several aspects of speech, such as articulation, prosody, and phonation, can be impaired. Specifically, we train an
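The abstract describes using frozen self-supervised representations as input features for a downstream recognizer. The sketch below illustrates that general pattern only, not the paper's actual system: a stand-in random projection plays the role of the pretrained Wav2Vec encoder (which would normally be loaded from a checkpoint), its features are extracted once and kept frozen, and a small softmax classifier head is trained on top. All names, dimensions, and the toy data are hypothetical.

```python
# Schematic sketch (not the paper's code): frozen self-supervised speech
# features feeding a trainable downstream classifier head.
# A real system would use a pretrained Wav2Vec model as the encoder; here a
# fixed random projection stands in so the example is self-contained.
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM = 32   # dimensionality of the (stand-in) encoder features
NUM_PHONES = 5     # toy phone inventory for the downstream classifier

# "Frozen encoder": maps 20-sample audio frames to feature vectors.
# Its weights are never updated, mirroring feature extraction from a
# pretrained self-supervised model.
W_enc = rng.normal(size=(20, FEATURE_DIM))

def encode(frames):
    """Return one frozen feature vector per 20-sample frame."""
    return np.tanh(frames @ W_enc)

# Toy labelled data: 200 frames of synthetic "audio"; labels are generated
# from a hidden linear rule so the head has something learnable to recover.
frames = rng.normal(size=(200, 20))
feats = encode(frames)  # extracted once; the encoder stays frozen
labels = np.argmax(feats @ rng.normal(size=(FEATURE_DIM, NUM_PHONES)), axis=1)

# Downstream classifier head trained on the frozen features
# (multinomial logistic regression via plain gradient descent).
W = np.zeros((FEATURE_DIM, NUM_PHONES))
for _ in range(300):
    logits = feats @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(len(labels)), labels] -= 1.0  # d(cross-entropy)/d(logits)
    W -= 0.1 * (feats.T @ grad) / len(labels)

train_acc = (np.argmax(feats @ W, axis=1) == labels).mean()
print(f"frame accuracy on the toy data: {train_acc:.2f}")
```

Only the classifier head receives gradients; this mirrors the common recipe of treating self-supervised representations as fixed features, which is especially attractive for dysarthric speech where labelled data is scarce.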
arXiv:2204.01670v1
fatcat:prnagcdntvcwnmvsvvdskgve4i