TY - GEN
T1 - Accent and Speaker Disentanglement in Many-to-many Voice Conversion
AU - Wang, Zhichao
AU - Ge, Wenshuo
AU - Wang, Xiong
AU - Yang, Shan
AU - Gan, Wendong
AU - Chen, Haitao
AU - Li, Hai
AU - Xie, Lei
AU - Li, Xiulin
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/1/24
Y1 - 2021/1/24
N2 - This paper proposes a joint voice and accent conversion approach that can convert an arbitrary source speaker's voice to a target speaker's voice with a non-native accent. This problem is challenging because each target speaker only has training data in a native accent, so accent and speaker information must be disentangled during conversion model training and re-combined at the conversion stage. Within our recognition-synthesis conversion framework, we solve this problem with two proposed techniques. First, we use accent-dependent speech recognizers to obtain bottleneck (BN) features for speakers with different accents, aiming to wipe out factors beyond the linguistic information in the BN features used for conversion model training. Second, we propose adversarial training to better disentangle speaker and accent information in our encoder-decoder based conversion model. Specifically, we attach an auxiliary speaker classifier to the encoder, trained with an adversarial loss to remove speaker information from the encoder output. Experiments show that our approach is superior to the baseline: the proposed techniques are effective in improving accentedness, while audio quality and speaker similarity are well maintained.
AB - This paper proposes a joint voice and accent conversion approach that can convert an arbitrary source speaker's voice to a target speaker's voice with a non-native accent. This problem is challenging because each target speaker only has training data in a native accent, so accent and speaker information must be disentangled during conversion model training and re-combined at the conversion stage. Within our recognition-synthesis conversion framework, we solve this problem with two proposed techniques. First, we use accent-dependent speech recognizers to obtain bottleneck (BN) features for speakers with different accents, aiming to wipe out factors beyond the linguistic information in the BN features used for conversion model training. Second, we propose adversarial training to better disentangle speaker and accent information in our encoder-decoder based conversion model. Specifically, we attach an auxiliary speaker classifier to the encoder, trained with an adversarial loss to remove speaker information from the encoder output. Experiments show that our approach is superior to the baseline: the proposed techniques are effective in improving accentedness, while audio quality and speaker similarity are well maintained.
KW - accent conversion
KW - adversarial learning
KW - voice conversion
UR - http://www.scopus.com/inward/record.url?scp=85102557039&partnerID=8YFLogxK
U2 - 10.1109/ISCSLP49672.2021.9362120
DO - 10.1109/ISCSLP49672.2021.9362120
M3 - Conference contribution
AN - SCOPUS:85102557039
T3 - 2021 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
BT - 2021 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
Y2 - 24 January 2021 through 27 January 2021
ER -