Speech-driven head motion synthesis using neural networks

Chuang Ding, Pengcheng Zhu, Lei Xie, Dongmei Jiang, Zhonghua Fu

Research output: Contribution to journal › Conference article › peer-review


Abstract

This paper presents a neural network approach for speech-driven head motion synthesis, which automatically predicts a speaker's head movements from his/her speech. Specifically, we realize the speech-to-head-motion mapping by learning a multi-layer perceptron from audio-visual broadcast news data. First, we show that a generatively pre-trained neural network significantly outperforms both a randomly initialized network and the hidden Markov model (HMM) approach. Second, we demonstrate that the feature combination of log Mel-scale filter-bank (FBank), energy and fundamental frequency (F0) performs best for head motion prediction. Third, we find that using longer-context acoustic information further improves performance. Finally, adding extra unlabeled training data in the pre-training stage yields additional performance gains. The proposed speech-driven head motion synthesis approach increases the canonical correlation analysis (CCA) score from 0.299 (the HMM approach) to 0.565, and it can be effectively used in expressive talking avatar animation.
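
For illustration only, the sketch below shows the kind of audio-to-head-motion regression the abstract describes: a multi-layer perceptron mapping a stacked context window of FBank, energy and F0 features to per-frame head rotation parameters, trained with a mean-squared-error objective. The feature dimensions, context length, hidden-layer sizes and three-angle head pose are illustrative assumptions, not the authors' exact configuration, and the generative pre-training step is only noted in a comment.

    import torch
    import torch.nn as nn

    # Assumed feature layout (illustrative, not the paper's exact setup):
    #   per-frame acoustics: 40-d log Mel filter-bank (FBank) + energy + F0 = 42 dims
    #   context window:      11 frames stacked around the current frame
    #   head motion target:  3 rotation angles (pitch, yaw, roll) per frame
    FRAME_DIM, CONTEXT, OUT_DIM = 42, 11, 3
    IN_DIM = FRAME_DIM * CONTEXT

    class HeadMotionMLP(nn.Module):
        """Multi-layer perceptron mapping contextual acoustic features to head pose."""
        def __init__(self, in_dim=IN_DIM, hidden=512, out_dim=OUT_DIM):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.Sigmoid(),
                nn.Linear(hidden, hidden), nn.Sigmoid(),
                nn.Linear(hidden, hidden), nn.Sigmoid(),
                nn.Linear(hidden, out_dim),   # linear output layer for regression
            )

        def forward(self, x):
            return self.net(x)

    # Training sketch: minimise mean squared error between predicted and
    # motion-captured head rotations. The generative (RBM-style) pre-training
    # of the hidden layers described in the abstract is omitted here; the
    # weights below are randomly initialised.
    model = HeadMotionMLP()
    optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.MSELoss()

    def train_step(features, head_pose):
        """features: (batch, IN_DIM) stacked FBank+energy+F0; head_pose: (batch, 3)."""
        optimiser.zero_grad()
        loss = loss_fn(model(features), head_pose)
        loss.backward()
        optimiser.step()
        return loss.item()

At synthesis time, such a network would be run frame by frame over the input speech and the predicted rotation trajectory (typically smoothed) used to drive the avatar's head; the CCA reported in the abstract compares the predicted and ground-truth motion trajectories.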

Keywords

  • Deep neural network
  • Head motion synthesis
  • Neural network
  • Talking avatar
