Multi-speaker Multi-style Text-to-speech Synthesis with Single-speaker Single-style Training Data Scenarios

Qicong Xie, Tao Li, Xinsheng Wang, Zhichao Wang, Lei Xie, Guoqiao Yu, Guanglu Wan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

2 Citations (Scopus)

Abstract

In the existing cross-speaker style transfer task, a source speaker with multi-style recordings is necessary to provide the style for a target speaker. However, it is hard for one speaker to express all expected styles. In this paper, a more general task is proposed: producing expressive speech by combining any style and timbre from a multi-speaker corpus in which each speaker has a unique style. To realize this task, a novel method is proposed. This method is a Tacotron2-based framework but with a fine-grained text-based prosody predicting module and a speaker identity controller. Experiments demonstrate that the proposed method can successfully express the style of one speaker with the timbre of another speaker, bypassing the dependency on a single speaker's multi-style corpus. Moreover, the explicit prosody features used in the prosody predicting module can increase the diversity of synthetic speech by adjusting the values of the prosody features.
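To make the described conditioning concrete, below is a minimal, hypothetical sketch of how a Tacotron2-style text encoding might be combined with a fine-grained text-predicted prosody sequence and a speaker-identity embedding before decoding. The module names, dimensions, and prosody feature set are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (PyTorch) of the conditioning scheme described in the
# abstract: text encodings are augmented with phoneme-level prosody features
# predicted from text and with a speaker embedding, so style (prosody) and
# timbre (speaker identity) can be combined independently.
import torch
import torch.nn as nn


class TextProsodyPredictor(nn.Module):
    """Predicts per-phoneme prosody features (e.g. pitch, energy, duration)
    from encoder text representations, independent of speaker identity."""

    def __init__(self, d_text=256, n_prosody=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(d_text, d_text, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_text, n_prosody, kernel_size=3, padding=1),
        )

    def forward(self, text_hidden):          # (B, T, d_text)
        x = text_hidden.transpose(1, 2)      # (B, d_text, T)
        return self.net(x).transpose(1, 2)   # (B, T, n_prosody)


class MultiSpeakerStyleConditioner(nn.Module):
    """Combines text encodings, explicit prosody features, and a speaker
    embedding into the sequence a Tacotron2-style decoder attends to."""

    def __init__(self, d_text=256, n_prosody=3, n_speakers=10, d_spk=64):
        super().__init__()
        self.prosody_predictor = TextProsodyPredictor(d_text, n_prosody)
        self.speaker_table = nn.Embedding(n_speakers, d_spk)
        self.project = nn.Linear(d_text + n_prosody + d_spk, d_text)

    def forward(self, text_hidden, speaker_id, prosody=None):
        # At inference, `prosody` can be supplied or rescaled to vary the
        # style while the speaker embedding (timbre) stays fixed.
        if prosody is None:
            prosody = self.prosody_predictor(text_hidden)
        spk = self.speaker_table(speaker_id)                   # (B, d_spk)
        spk = spk.unsqueeze(1).expand(-1, text_hidden.size(1), -1)
        return self.project(torch.cat([text_hidden, prosody, spk], dim=-1))


if __name__ == "__main__":
    cond = MultiSpeakerStyleConditioner()
    text_hidden = torch.randn(2, 50, 256)           # batch of 2, 50 phonemes
    speaker_id = torch.tensor([0, 3])
    decoder_memory = cond(text_hidden, speaker_id)  # (2, 50, 256)
    print(decoder_memory.shape)
```

In this sketch the prosody predictor sees only text, which mirrors the abstract's point that explicit, text-based prosody features decouple style from any one speaker's recordings.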

Original language: English
Title of host publication: 2022 13th International Symposium on Chinese Spoken Language Processing, ISCSLP 2022
Editors: Kong Aik Lee, Hung-yi Lee, Yanfeng Lu, Minghui Dong
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 66-70
Number of pages: 5
ISBN (Electronic): 9798350397963
DOI
Publication status: Published - 2022
Event: 13th International Symposium on Chinese Spoken Language Processing, ISCSLP 2022 - Singapore, Singapore
Duration: 11 Dec 2022 - 14 Dec 2022

Publication series

Name: 2022 13th International Symposium on Chinese Spoken Language Processing, ISCSLP 2022

Conference

Conference: 13th International Symposium on Chinese Spoken Language Processing, ISCSLP 2022
Country/Territory: Singapore
City: Singapore
Period: 11/12/22 - 14/12/22
