Time Domain Audio Visual Speech Separation

Jian Wu, Yong Xu, Shi Xiong Zhang, Lian Wu Chen, Meng Yu, Lei Xie, Dong Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

98 Citations (Scopus)

Abstract

Audio-visual multi-modal modeling has been demonstrated to be effective in many speech-related tasks, such as speech recognition and speech enhancement. This paper introduces a new time-domain audio-visual architecture for target speaker extraction from monaural mixtures. The architecture generalizes the previous TasNet (time-domain speech separation network) to enable multi-modal learning and, at the same time, extends classical audio-visual speech separation from the frequency domain to the time domain. The main components of the proposed architecture include an audio encoder, a video encoder that extracts lip embeddings from video streams, a multi-modal separation network, and an audio decoder. Experiments on simulated mixtures based on the recently released LRS2 dataset show that our method brings 3 dB+ and 4 dB+ Si-SNR improvements in the two- and three-speaker cases respectively, compared to audio-only TasNet and frequency-domain audio-visual networks.
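To make the component breakdown in the abstract concrete, below is a minimal PyTorch sketch of that layout: a TasNet-style 1-D convolutional audio encoder, a video encoder producing lip embeddings, a multi-modal separation network that fuses the two streams, and a transposed-convolution audio decoder. Everything here is an illustrative assumption rather than the paper's actual design: the layer sizes, the stand-in video encoder over precomputed lip-region features, the fusion by channel concatenation, and the mask-based extraction are placeholder choices. The `si_snr` helper implements the standard scale-invariant SNR quoted as the evaluation metric.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioVisualTasNetSketch(nn.Module):
    """Illustrative time-domain audio-visual target speaker extractor.

    Follows the four components named in the abstract; all sizes are
    placeholder assumptions, not the paper's configuration.
    """

    def __init__(self, enc_dim=256, kernel=40, stride=20, lip_dim=512, hidden=256):
        super().__init__()
        # Audio encoder: waveform -> latent frame sequence (TasNet-style)
        self.audio_enc = nn.Conv1d(1, enc_dim, kernel_size=kernel, stride=stride)
        # Video encoder stand-in: maps precomputed lip features to embeddings
        self.video_enc = nn.Linear(lip_dim, hidden)
        # Multi-modal separation network stand-in: estimates a mask over
        # the audio latent space from the fused audio-visual features
        self.separator = nn.Sequential(
            nn.Conv1d(enc_dim + hidden, hidden, kernel_size=1),
            nn.PReLU(),
            nn.Conv1d(hidden, enc_dim, kernel_size=1),
            nn.Sigmoid(),
        )
        # Audio decoder: masked latent frames -> target speaker waveform
        self.audio_dec = nn.ConvTranspose1d(enc_dim, 1, kernel_size=kernel, stride=stride)

    def forward(self, mixture, lip_feats):
        # mixture: (batch, samples); lip_feats: (batch, video_frames, lip_dim)
        w = F.relu(self.audio_enc(mixture.unsqueeze(1)))        # (B, N, T_a)
        v = self.video_enc(lip_feats).transpose(1, 2)           # (B, H, T_v)
        # Upsample the slower video stream to the audio frame rate
        v = F.interpolate(v, size=w.shape[-1], mode="nearest")  # (B, H, T_a)
        mask = self.separator(torch.cat([w, v], dim=1))         # (B, N, T_a)
        return self.audio_dec(w * mask).squeeze(1)              # (B, samples)


def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB, the metric quoted in the abstract."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    proj = (est * ref).sum(-1, keepdim=True) * ref / (ref.pow(2).sum(-1, keepdim=True) + eps)
    noise = est - proj
    return 10 * torch.log10(proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))


if __name__ == "__main__":
    # Usage sketch: 4 s of 16 kHz audio with 25 fps lip features
    model = AudioVisualTasNetSketch()
    mix = torch.randn(2, 64000)
    lips = torch.randn(2, 100, 512)
    est = model(mix, lips)
    print(est.shape, si_snr(est, mix).shape)
```

Note the design choice the abstract implies: separation happens in the learned latent space of the audio encoder (as in TasNet) rather than on spectrogram magnitudes, which is what distinguishes this time-domain approach from the frequency-domain audio-visual baselines it is compared against.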

Original language: English
Title of host publication: 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 667-673
Number of pages: 7
ISBN (electronic): 9781728103068
DOI
Publication status: Published - December 2019
Event: 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Singapore, Singapore
Duration: 15 December 2019 – 18 December 2019

Publication series

Name: 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Proceedings

Conference

Conference: 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019
Country/Territory: Singapore
City: Singapore
Period: 15/12/19 – 18/12/19
