TY - GEN
T1 - The NPU-MSXF Speech-to-Speech Translation System for IWSLT 2023 Speech-to-Speech Translation Task
AU - Song, Kun
AU - Lei, Yi
AU - Chen, Peikun
AU - Cao, Yiqing
AU - Wei, Kun
AU - Zhang, Yongmao
AU - Xie, Lei
AU - Jiang, Ning
AU - Zhao, Guoqing
N1 - Publisher Copyright:
© IWSLT 2023. All rights reserved.
PY - 2023
Y1 - 2023
AB - This paper describes the NPU-MSXF system for the IWSLT 2023 speech-to-speech translation (S2ST) task, which aims to translate multi-source English speech into Chinese speech. The system is built in a cascaded manner consisting of automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS). We make tremendous efforts to handle the challenging multi-source input. Specifically, to improve robustness to multi-source speech input, we adopt various data augmentation strategies and ROVER-based score fusion over multiple ASR model outputs. To better handle noisy ASR transcripts, we introduce a three-stage fine-tuning strategy that improves translation accuracy. Finally, we build a TTS model with high naturalness and sound quality that leverages a two-stage framework, using network bottleneck features as a robust intermediate representation to disentangle speaker timbre and linguistic content. Based on this two-stage framework, a pre-trained speaker embedding is leveraged as a condition to transfer the speaker timbre of the source English speech to the translated Chinese speech. Experimental results show that our system achieves high translation accuracy, speech naturalness, sound quality, and speaker similarity, as well as good robustness to multi-source data.
UR - http://www.scopus.com/inward/record.url?scp=85174932812&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85174932812
T3 - 20th International Conference on Spoken Language Translation, IWSLT 2023 - Proceedings of the Conference
SP - 311
EP - 320
BT - 20th International Conference on Spoken Language Translation, IWSLT 2023 - Proceedings of the Conference
A2 - Salesky, Elizabeth
A2 - Federico, Marcello
A2 - Carpuat, Marine
PB - Association for Computational Linguistics
T2 - 20th International Conference on Spoken Language Translation, IWSLT 2023
Y2 - 13 July 2023 through 14 July 2023
ER -