Head motion synthesis from speech using deep neural networks

Chuang Ding, Lei Xie, Pengcheng Zhu

Research output: Contribution to journal › Article › peer-review

43 Scopus citations

Abstract

This paper presents a deep neural network (DNN) approach for head motion synthesis, which can automatically predict head movement of a speaker from his/her speech. Specifically, we realize speech-to-head-motion mapping by learning a DNN from audio-visual broadcast news data. We first show that a generatively pre-trained neural network significantly outperforms a conventional randomly initialized network. We then demonstrate that filter bank (FBank) features outperform mel frequency cepstral coefficients (MFCC) and linear prediction coefficients (LPC) in head motion prediction. Finally, we discover that extra training data from other speakers used in the pre-training stage can improve the head motion prediction performance of a target speaker. Our promising results in speech-to-head-motion prediction can be used in talking avatar animation.
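The mapping described in the abstract can be pictured as a frame-level regression from stacked acoustic features to head-pose parameters. The sketch below is illustrative only and is not taken from the paper: it assumes PyTorch, a 40-dimensional FBank front end, an 11-frame context window and a 3-dimensional head-rotation target, and it omits the generative pre-training and multi-speaker data described in the abstract.

```python
# Minimal sketch of speech-to-head-motion regression with a DNN.
# Assumed (not from the paper): 40-dim FBank features, 11 stacked context
# frames, 3 head-rotation parameters per frame, two hidden layers.
import torch
import torch.nn as nn

FBANK_DIM = 40    # assumed filter-bank dimensionality
CONTEXT = 11      # assumed number of stacked context frames
OUTPUT_DIM = 3    # assumed head-pose parameters (e.g. pitch, yaw, roll)

class SpeechToHeadMotionDNN(nn.Module):
    def __init__(self, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FBANK_DIM * CONTEXT, hidden),
            nn.Sigmoid(),                  # sigmoid units, as in classic pre-trained DNNs
            nn.Linear(hidden, hidden),
            nn.Sigmoid(),
            nn.Linear(hidden, OUTPUT_DIM)  # linear output layer for regression
        )

    def forward(self, x):
        return self.net(x)

# Training step sketch: minimise mean squared error between predicted and
# ground-truth head motion extracted from the audio-visual data.
model = SpeechToHeadMotionDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (stacked FBank frames, head-pose targets).
features = torch.randn(64, FBANK_DIM * CONTEXT)
targets = torch.randn(64, OUTPUT_DIM)

optimizer.zero_grad()
loss = loss_fn(model(features), targets)
loss.backward()
optimizer.step()
```

In this framing, swapping FBank for MFCC or LPC features only changes the input dimensionality, which is one way to read the paper's feature comparison.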

Original language: English
Pages (from-to): 9871-9888
Number of pages: 18
Journal: Multimedia Tools and Applications
Volume: 74
Issue number: 22
DOIs
State: Published - 24 Jul 2014

Keywords

  • Computer animation
  • Deep neural network
  • Head motion synthesis
  • Talking avatar
