Information Sieve: Content Leakage Reduction in End-to-End Prosody Transfer for Expressive Speech Synthesis

Xudong Dai, Cheng Gong, Longbiao Wang, Kaili Zhang
Interspeech 2021: Annual Conference of the International Speech Communication Association
Expressive neural text-to-speech (TTS) systems incorporate a style encoder that learns a latent embedding to represent style information. However, this embedding may also encode redundant textual information, a phenomenon known as content leakage. Prior work has attempted to resolve this problem by adding ASR-based or other auxiliary supervision losses. In this study, we propose an unsupervised method, the "information sieve", to reduce the effect of content leakage in prosody transfer. The rationale is that a well-designed downsample-upsample filter can force the style encoder to focus on style information rather than the textual content of the reference speech: the extracted style embeddings are downsampled at a fixed interval and then upsampled by duplication. Furthermore, we apply instance normalization in the convolution layers to help the system learn a better latent style space. Objective metrics, notably a significantly lower word error rate (WER), demonstrate the effectiveness of this model in mitigating content leakage. Listening tests indicate that the model retains its prosody transferability compared with baseline models such as the original GST-Tacotron and the ASR-guided Tacotron.
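The downsample-then-duplicate operation described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: it assumes the sieve acts on a sequence of frame-level style embeddings, and the function name `information_sieve` and the `interval` parameter are hypothetical.

```python
import numpy as np

def information_sieve(style_embedding: np.ndarray, interval: int = 4) -> np.ndarray:
    """Downsample a (frames x dim) style-embedding sequence at a fixed
    interval, then upsample back to the original length by duplicating
    each kept frame. Frame-aligned (textual) detail is discarded, while
    slowly varying style information survives.
    """
    num_frames = style_embedding.shape[0]
    # Keep every `interval`-th frame (the downsampling step).
    kept = style_embedding[::interval]
    # Upsample by repeating each kept frame `interval` times,
    # then trim back to the original sequence length.
    return np.repeat(kept, interval, axis=0)[:num_frames]

# Toy example: 6 frames of a 2-dimensional embedding.
emb = np.arange(12, dtype=float).reshape(6, 2)
sieved = information_sieve(emb, interval=3)
```

Because the output is piecewise constant over each interval, any information that varies faster than the interval (such as phone-level content) cannot pass through the sieve.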
doi:10.21437/interspeech.2021-1011 dblp:conf/interspeech/DaiGWZ21