BLSTM neural networks for speech driven head motion synthesis

Chuang Ding, Pengcheng Zhu, Lei Xie

Research output: Contribution to journal › Conference article › peer-review

18 Scopus citations

Abstract

Head motion naturally occurs in synchrony with speech and carries important intention, attitude and emotion factors. This paper aims to synthesize head motions from natural speech for talking avatar applications. Specifically, we study the feasibility of learning speech-to-head-motion regression models with two popular types of neural networks, i.e., feed-forward and bidirectional long short-term memory (BLSTM) networks. We discover that the BLSTM networks clearly outperform the feed-forward ones in this task because of their ability to learn long-range speech dynamics. More interestingly, we observe that stacking different network types, i.e., inserting a feed-forward layer between two BLSTM layers, achieves the best performance. A subjective evaluation shows that this hybrid network can produce more plausible head motions from speech.
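For illustration, the hybrid architecture the abstract describes (a feed-forward layer inserted between two BLSTM layers, followed by a regression output) can be sketched in PyTorch as below. The feature dimension, hidden sizes, tanh activation and 3-dimensional head-rotation output are assumptions made for this sketch; the record does not state the authors' exact configuration.

```python
# Minimal sketch of a BLSTM -> feed-forward -> BLSTM regression stack.
# All layer sizes and the output parameterization are illustrative assumptions.
import torch
import torch.nn as nn

class HybridHeadMotionNet(nn.Module):
    def __init__(self, in_dim=39, hidden=128, out_dim=3):
        super().__init__()
        # First BLSTM layer over the acoustic feature sequence.
        self.blstm1 = nn.LSTM(in_dim, hidden, batch_first=True,
                              bidirectional=True)
        # Feed-forward layer inserted between the two BLSTM layers.
        self.ff = nn.Sequential(nn.Linear(2 * hidden, 2 * hidden), nn.Tanh())
        # Second BLSTM layer.
        self.blstm2 = nn.LSTM(2 * hidden, hidden, batch_first=True,
                              bidirectional=True)
        # Per-frame regression to head motion (e.g., three Euler angles).
        self.out = nn.Linear(2 * hidden, out_dim)

    def forward(self, x):  # x: (batch, frames, in_dim) speech features
        h, _ = self.blstm1(x)
        h = self.ff(h)
        h, _ = self.blstm2(h)
        return self.out(h)  # (batch, frames, out_dim) head motion

# Example: 2 utterances of 100 frames with 39-dim features (assumed, e.g., MFCCs).
model = HybridHeadMotionNet()
y = model(torch.randn(2, 100, 39))
print(y.shape)  # torch.Size([2, 100, 3])
```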

Original language: English
Pages (from-to): 3345-3349
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2015-January
State: Published - 2015
Event: 16th Annual Conference of the International Speech Communication Association, INTERSPEECH 2015, Dresden, Germany
Duration: 6 Sep 2015 – 10 Sep 2015

Keywords

  • BLSTM
  • Head motion synthesis
  • Neural networks
  • Talking avatar
