Cross-Lingual Voice Conversion with Disentangled Universal Linguistic Representations

Zhenchuan Yang, Weibin Zhang, Yufei Liu, Xiaofen Xing
Interspeech 2021, Annual Conference of the International Speech Communication Association
Intra-lingual voice conversion has recently made great progress in naturalness and similarity. In cross-lingual voice conversion, however, the quality of the converted speech still needs substantial improvement, especially with non-parallel training data. Previous works usually use Phonetic Posteriorgrams (PPGs) as the linguistic representation. It is well known that PPGs may suffer from word dropping and mispronunciation, especially when the input speech is noisy. In addition, systems using PPGs can only convert the input into a known target language that is seen during training. This paper proposes an any-to-many voice conversion system based on disentangled universal linguistic representations (ULRs), which are extracted from a mix-lingual phoneme recognition system. Two methods are proposed to remove speaker information from the ULRs. Experimental results show that the proposed method effectively improves the converted speech both objectively and subjectively. The system can also convert speech utterances naturally even if the language is not seen during training.
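The pipeline the abstract describes (extract frame-level linguistic representations from a phoneme recognizer, strip speaker information, then decode with a target speaker embedding) can be sketched at a conceptual level as below. This is an illustrative toy, not the paper's implementation: the projection matrices stand in for trained networks, and per-utterance instance normalization is one plausible speaker-removal step, assumed here rather than taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def phoneme_posteriors(acoustic_feats, proj):
    """Toy stand-in for a mix-lingual phoneme recognizer:
    maps acoustic frames (T, D) to per-frame posteriors (T, P)."""
    return softmax(acoustic_feats @ proj)

def remove_speaker_info(reps):
    """One plausible disentanglement step (an assumption, not the
    paper's exact method): per-utterance instance normalization,
    which strips global mean/variance statistics that tend to
    carry speaker identity."""
    mu = reps.mean(axis=0, keepdims=True)
    sigma = reps.std(axis=0, keepdims=True) + 1e-8
    return (reps - mu) / sigma

def convert(reps, speaker_emb, dec):
    """Toy decoder: concatenates speaker-independent linguistic
    representations with a target speaker embedding at every frame
    and projects to converted acoustic features (T, F)."""
    T = reps.shape[0]
    cond = np.concatenate([reps, np.tile(speaker_emb, (T, 1))], axis=1)
    return cond @ dec

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D, P, S, F = 50, 40, 60, 16, 80  # frames, feat dim, phonemes, spk dim, out dim
    feats = rng.normal(size=(T, D))
    proj = rng.normal(size=(D, P))      # stand-in recognizer weights
    dec = rng.normal(size=(P + S, F))   # stand-in decoder weights
    spk = rng.normal(size=(S,))         # target speaker embedding

    reps = remove_speaker_info(phoneme_posteriors(feats, proj))
    out = convert(reps, spk, dec)
    print(out.shape)  # (50, 80)
```

Because the linguistic representation is speaker-independent and phoneme-based (rather than tied to one language's PPG inventory), the same frame-level pipeline applies regardless of which language the input utterance is in.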
doi:10.21437/interspeech.2021-552 dblp:conf/interspeech/YangZLX21