Glow-WaveGAN 2: High-quality Zero-shot Text-to-speech Synthesis and Any-to-any Voice Conversion

Yi Lei, Shan Yang, Jian Cong, Lei Xie, Dan Su

Research output: Contribution to journal › Conference article › peer-review

10 Citations (Scopus)

Abstract

The zero-shot scenario for speech generation aims at synthesizing a novel, unseen voice with only one utterance of the target speaker. Although the challenges of adapting to new voices in the zero-shot scenario exist in both stages, acoustic modeling and the vocoder, previous works usually consider the problem from only one stage. In this paper, we extend our previous Glow-WaveGAN to Glow-WaveGAN 2, aiming to solve the problem from both stages for high-quality zero-shot text-to-speech and any-to-any voice conversion. We first build a universal WaveGAN model for extracting the latent distribution p(z) of speech and reconstructing the waveform from it. A flow-based acoustic model then only needs to learn the same p(z) from text, which naturally avoids the mismatch between the acoustic model and the vocoder and results in high-quality generated speech without model fine-tuning. Based on a continuous speaker space and the reversible property of flows, the conditional distribution can be obtained for any speaker, so we can further conduct high-quality zero-shot speech generation for new speakers. We particularly investigate two methods of constructing the speaker space, namely a pre-trained speaker encoder and a jointly trained speaker encoder. The superiority of Glow-WaveGAN 2 is demonstrated through TTS and VC experiments conducted on the LibriTTS and VCTK corpora.
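The abstract describes a shared-latent-space design: a universal WaveGAN learns the latent distribution p(z) of speech, a flow-based acoustic model predicts the same p(z) from text conditioned on a speaker embedding, and zero-shot synthesis then needs only one reference utterance. The sketch below is a minimal, hypothetical PyTorch illustration of that inference pipeline; the module names, dimensions, and the Gaussian sampling used as a stand-in for the invertible flow are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the Glow-WaveGAN 2 inference pipeline described in
# the abstract. All module names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Maps a reference utterance to a point in a continuous speaker space."""

    def __init__(self, n_mels: int = 80, spk_dim: int = 256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, spk_dim, batch_first=True)

    def forward(self, ref_mels: torch.Tensor) -> torch.Tensor:
        # ref_mels: (batch, frames, n_mels) from the single target utterance
        _, h = self.rnn(ref_mels)
        return h[-1]  # (batch, spk_dim) speaker embedding


class FlowAcousticModel(nn.Module):
    """Stand-in for the flow-based acoustic model: maps text (plus a speaker
    embedding) into the latent space p(z) shared with the universal WaveGAN."""

    def __init__(self, text_dim: int = 512, spk_dim: int = 256, z_dim: int = 192):
        super().__init__()
        self.proj = nn.Linear(text_dim + spk_dim, z_dim * 2)

    def forward(self, text_hidden: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # text_hidden: (batch, frames, text_dim); spk_emb: (batch, spk_dim)
        spk = spk_emb.unsqueeze(1).expand(-1, text_hidden.size(1), -1)
        mean, log_std = self.proj(torch.cat([text_hidden, spk], dim=-1)).chunk(2, dim=-1)
        # Sample z from the predicted distribution; in the paper this role is
        # played by an invertible flow, not a simple Gaussian head.
        return mean + torch.exp(log_std) * torch.randn_like(mean)


class WaveGANDecoder(nn.Module):
    """Stand-in for the universal WaveGAN reconstructing waveform from z."""

    def __init__(self, z_dim: int = 192, hop: int = 256):
        super().__init__()
        self.up = nn.ConvTranspose1d(z_dim, 1, kernel_size=hop, stride=hop)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, frames, z_dim) -> waveform: (batch, samples)
        return self.up(z.transpose(1, 2)).squeeze(1)


# Zero-shot TTS: a single reference utterance suffices because the acoustic
# model and the vocoder share the same latent space p(z), so no fine-tuning
# of either stage is needed for an unseen speaker.
spk_enc, am, vocoder = SpeakerEncoder(), FlowAcousticModel(), WaveGANDecoder()
ref_mels = torch.randn(1, 120, 80)      # unseen speaker's single utterance
text_hidden = torch.randn(1, 200, 512)  # encoded phoneme/text sequence
wav = vocoder(am(text_hidden, spk_enc(ref_mels)))
print(wav.shape)  # torch.Size([1, 51200])
```

For any-to-any voice conversion, the same speaker embedding would condition the reverse direction of the flow on source-speech latents instead of text, which is why the abstract stresses the reversible property of flows.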

Original language: English
Pages (from-to): 2563-2567
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
DOI
Publication status: Published - 2022
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, South Korea
Duration: 18 Sep 2022 - 22 Sep 2022
