A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. File type: application/pdf.
Augmenting Images for ASR and TTS Through Single-Loop and Dual-Loop Multimodal Chain Framework
2020
Interspeech 2020
Previous research has proposed a machine speech chain to enable automatic speech recognition (ASR) and text-to-speech synthesis (TTS) to assist each other in semi-supervised learning and to avoid the need for a large amount of paired speech and text data. However, that framework still requires a large amount of unpaired (speech or text) data. A prototype multimodal machine chain was then explored to further reduce the need for a large amount of unpaired data, which could improve ASR or TTS even […]
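A minimal toy sketch of the closed-loop idea behind the machine speech chain referenced in the abstract, in which ASR and TTS supervise each other on unpaired data: an ASR hypothesis on speech-only data provides a pseudo-transcript for TTS reconstruction, and TTS synthesis from text-only data provides pseudo-speech for ASR training. All module names, sizes, and losses below are illustrative assumptions, not the paper's implementation, and the paper's image-augmented (single-loop and dual-loop multimodal) extension is not shown.

```python
# Toy machine speech chain sketch (PyTorch): ASR and TTS learn from each other
# on unpaired data. Everything here is a placeholder stand-in, not the paper's code.
import torch
import torch.nn as nn

VOCAB, FEAT_DIM, HID = 32, 40, 64  # toy vocabulary and feature sizes (assumptions)

class ToyASR(nn.Module):
    """Maps speech feature frames to per-frame token logits (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEAT_DIM, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)
    def forward(self, speech):              # speech: (B, T, FEAT_DIM)
        h, _ = self.rnn(speech)
        return self.out(h)                  # (B, T, VOCAB) logits

class ToyTTS(nn.Module):
    """Maps token ids back to speech feature frames (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID)
        self.rnn = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, FEAT_DIM)
    def forward(self, tokens):              # tokens: (B, T) int64 ids
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)                  # (B, T, FEAT_DIM)

asr, tts = ToyASR(), ToyTTS()
opt_asr = torch.optim.Adam(asr.parameters(), lr=1e-3)
opt_tts = torch.optim.Adam(tts.parameters(), lr=1e-3)

# Loop over speech-only data: ASR hypothesis -> TTS reconstruction loss trains TTS.
speech_only = torch.randn(4, 20, FEAT_DIM)            # unpaired speech batch (random toy data)
with torch.no_grad():
    hyp = asr(speech_only).argmax(-1)                 # greedy pseudo-transcript
recon = tts(hyp)
loss_tts = nn.functional.l1_loss(recon, speech_only)  # TTS learns from ASR output
opt_tts.zero_grad(); loss_tts.backward(); opt_tts.step()

# Loop over text-only data: TTS synthesis -> ASR recognition loss trains ASR.
text_only = torch.randint(0, VOCAB, (4, 20))          # unpaired text batch (random toy data)
with torch.no_grad():
    synth = tts(text_only)                            # pseudo-speech
logits = asr(synth)
loss_asr = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), text_only.reshape(-1)) # ASR learns from TTS output
opt_asr.zero_grad(); loss_asr.backward(); opt_asr.step()

print(f"tts recon loss: {loss_tts.item():.3f}  asr ce loss: {loss_asr.item():.3f}")
```

The multimodal chain described in the paper adds an image modality on top of this loop so that the chain can still be closed when neither paired nor large amounts of unpaired speech/text data are available; that part is beyond this sketch.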
doi:10.21437/interspeech.2020-2001
dblp:conf/interspeech/EffendiTS020
fatcat:opiefbp6hbfwtllcuyj5p2ny4q