Head motion synthesis from speech using deep neural networks

Chuang Ding, Lei Xie, Pengcheng Zhu

Research output: Contribution to journal › Article › Peer-reviewed

43 Citations (Scopus)

Abstract

This paper presents a deep neural network (DNN) approach for head motion synthesis, which can automatically predict the head movement of a speaker from his/her speech. Specifically, we realize speech-to-head-motion mapping by learning a DNN from audio-visual broadcast news data. We first show that a generatively pre-trained neural network significantly outperforms a conventional randomly initialized network. We then demonstrate that filter bank (FBank) features outperform mel frequency cepstral coefficients (MFCC) and linear prediction coefficients (LPC) in head motion prediction. Finally, we discover that extra training data from other speakers used in the pre-training stage can improve the head motion prediction performance of a target speaker. Our promising results in speech-to-head-motion prediction can be applied to talking avatar animation.
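The abstract does not give implementation details, but the following minimal PyTorch sketch illustrates the kind of feed-forward regression network it describes: stacked FBank frames in, head-pose parameters out, trained with a mean-squared-error loss. The feature dimension, context window, network depth, and Euler-angle output here are illustrative assumptions, not the paper's exact configuration, and the paper's generative pre-training is indicated only by a comment.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: the abstract does not specify the architecture,
# so these values (40-dim FBank, 11-frame context, 3 Euler angles) are
# illustrative assumptions.
FBANK_DIM = 40   # filter bank channels per frame
CONTEXT = 11     # acoustic frames stacked as network input
HEAD_DOF = 3     # head rotation: pitch, yaw, roll

class SpeechToHeadMotionDNN(nn.Module):
    """Feed-forward DNN regressing head rotation from stacked FBank frames."""
    def __init__(self, hidden=512, layers=3):
        super().__init__()
        dims = [FBANK_DIM * CONTEXT] + [hidden] * layers
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            # Sigmoid hidden units match the RBM-style generative
            # pre-training era; in the paper's setup these weights would be
            # initialized by pre-training rather than at random.
            blocks += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        self.hidden = nn.Sequential(*blocks)
        self.out = nn.Linear(hidden, HEAD_DOF)  # linear output for regression

    def forward(self, x):
        return self.out(self.hidden(x))

model = SpeechToHeadMotionDNN()
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

# Toy batch: 32 samples of stacked FBank features -> head rotation targets.
x = torch.randn(32, FBANK_DIM * CONTEXT)
y = torch.randn(32, HEAD_DOF)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

In the paper's setup, the hidden-layer weights would be initialized by generative pre-training, per the abstract possibly on data from additional speakers, before this supervised fine-tuning step on the target speaker's audio-visual data.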

Original language: English
Pages (from-to): 9871-9888
Number of pages: 18
Journal: Multimedia Tools and Applications
Volume: 74
Issue number: 22
DOI
Publication status: Published - 24 Jul 2014
