Accent and Speaker Disentanglement in Many-to-many Voice Conversion

Zhichao Wang, Wenshuo Ge, Xiong Wang, Shan Yang, Wendong Gan, Haitao Chen, Hai Li, Lei Xie, Xiulin Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

28 Citations (Scopus)

Abstract

This paper proposes a joint voice and accent conversion approach that can convert an arbitrary source speaker's voice to a target speaker's voice with a non-native accent. The problem is challenging because each target speaker only has training data in his or her native accent, so accent and speaker information must be disentangled during conversion model training and re-combined at the conversion stage. Within our recognition-synthesis conversion framework, we address this problem with two proposed tricks. First, we use accent-dependent speech recognizers to obtain bottleneck (BN) features for speakers of different accents; this aims to wipe out factors beyond the linguistic information in the BN features used for conversion model training. Second, we propose adversarial training to better disentangle the speaker and accent information in our encoder-decoder based conversion model. Specifically, we attach an auxiliary speaker classifier to the encoder, trained with an adversarial loss to remove speaker information from the encoder output. Experiments show that our approach is superior to the baseline: the proposed tricks are quite effective in improving accentedness, while audio quality and speaker similarity are well maintained.
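The adversarial disentanglement described above is commonly implemented with a gradient reversal layer (GRL): the auxiliary speaker classifier is trained normally on the encoder output, while the gradient flowing back into the encoder is sign-flipped, so the encoder learns to strip out speaker information. A minimal NumPy sketch of that mechanism (the shapes, the `lam` weight, and the linear classifier are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def grad_reverse(grad, lam=1.0):
    # Gradient reversal: identity on the forward pass; on the backward
    # pass the incoming gradient is scaled by -lam, so minimizing the
    # speaker-classifier loss pushes the encoder in the opposite
    # direction (adversarial objective).
    return -lam * grad

np.random.seed(0)
h = np.random.randn(4, 8)        # hypothetical encoder outputs (batch, dim)
W = 0.1 * np.random.randn(8, 3)  # auxiliary speaker classifier (3 speakers)
y = np.array([0, 1, 2, 0])       # speaker labels for the batch

# Softmax cross-entropy over speaker classes.
logits = h @ W
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
onehot = np.eye(3)[y]
d_logits = (p - onehot) / len(y)  # gradient of mean CE loss w.r.t. logits

d_h = d_logits @ W.T              # gradient reaching the encoder output
d_h_encoder = grad_reverse(d_h, lam=0.5)  # what the encoder actually sees
```

The classifier itself would be updated with the un-reversed gradient; only the path into the encoder passes through `grad_reverse`, which is what drives speaker information out of the encoder output.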

Original language: English
Title of host publication: 2021 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728169941
DOI
Publication status: Published - 24 Jan 2021
Event: 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021 - Hong Kong, Hong Kong
Duration: 24 Jan 2021 – 27 Jan 2021

Publication series

Name: 2021 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021

Conference

Conference: 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
Country/Territory: Hong Kong
Period: 24/01/21 – 27/01/21
