Speech-driven head motion synthesis using neural networks

Chuang Ding, Pengcheng Zhu, Lei Xie, Dongmei Jiang, Zhonghua Fu

Research output: Contribution to journal › Conference article › peer-review

8 Citations (Scopus)

Abstract

This paper presents a neural network approach for speech-driven head motion synthesis, which automatically predicts a speaker's head movements from his/her speech. Specifically, we realize the speech-to-head-motion mapping by learning a multi-layer perceptron from audio-visual broadcast news data. First, we show that a generatively pre-trained neural network significantly outperforms both a randomly initialized network and the hidden Markov model (HMM) approach. Second, we demonstrate that the feature combination of log Mel-scale filter-bank (FBank), energy and fundamental frequency (F0) performs best for head motion prediction. Third, we discover that using long-context acoustic information further improves the performance. Finally, adding extra unlabeled training data in the pre-training stage yields further gains. The proposed speech-driven head motion synthesis approach increases the canonical correlation analysis (CCA) score from 0.299 (the HMM approach) to 0.565, and it can be effectively used in expressive talking avatar animation.
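The sketch below illustrates, in PyTorch, the kind of multi-layer perceptron mapping described in the abstract: a window of per-frame acoustic features (FBank, energy, F0) concatenated into a long-context input and regressed to head motion parameters. It is a minimal illustration, not the authors' implementation; the feature dimension, context width, layer sizes and head-pose parameterization are assumptions, and the generative pre-training step discussed in the paper is omitted.

```python
# Minimal sketch of a speech-to-head-motion MLP (hypothetical dimensions,
# not the authors' exact architecture; pre-training is not shown).
import torch
import torch.nn as nn

FRAME_DIM = 42        # assumed per-frame features: 40-d FBank + energy + F0
CONTEXT = 11          # assumed context window: current frame +/- 5 frames
HEAD_MOTION_DIM = 3   # assumed head pose parameters, e.g. pitch, yaw, roll


class HeadMotionMLP(nn.Module):
    """Multi-layer perceptron mapping a window of acoustic frames to head motion."""

    def __init__(self, hidden=256, layers=3):
        super().__init__()
        dims = [FRAME_DIM * CONTEXT] + [hidden] * layers
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        blocks.append(nn.Linear(dims[-1], HEAD_MOTION_DIM))
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        # x: (batch, FRAME_DIM * CONTEXT) -> (batch, HEAD_MOTION_DIM)
        return self.net(x)


def stack_context(frames, context=CONTEXT):
    """Concatenate each frame with its neighbours to form long-context inputs."""
    half = context // 2
    padded = torch.cat([frames[:1].repeat(half, 1), frames, frames[-1:].repeat(half, 1)])
    return torch.stack([padded[i:i + context].reshape(-1) for i in range(len(frames))])


# Usage example: predict head motion for 100 frames of (random placeholder) features.
model = HeadMotionMLP()
acoustic = torch.randn(100, FRAME_DIM)
prediction = model(stack_context(acoustic))   # shape: (100, HEAD_MOTION_DIM)
```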
