A coupled HMM approach to video-realistic speech animation

Lei Xie, Zhi Qiang Liu

Research output: Contribution to journal › Article › peer-review

67 Scopus citations

Abstract

We propose a coupled hidden Markov model (CHMM) approach to video-realistic speech animation, which realizes realistic facial animations driven by speaker-independent continuous speech. Unlike hidden Markov model (HMM)-based animation approaches that use a single state chain, we use CHMMs to explicitly model the subtle characteristics of audio-visual speech, e.g., the asynchrony, temporal dependency (synchrony), and different speech classes between the two modalities. We derive an expectation-maximization (EM)-based A/V conversion algorithm for the CHMMs, which converts acoustic speech into decent facial animation parameters. We also present a video-realistic speech animation system. The system transforms the facial animation parameters into a mouth animation sequence, refines the animation with a performance refinement process, and finally stitches the animated mouth seamlessly onto a background facial sequence. We have compared the animation performance of the CHMMs with HMMs, multi-stream HMMs, and factorial HMMs, both objectively and subjectively. Results show that the CHMMs achieve superior animation performance. The ph-vi-CHMM system, which adopts different state variables (phoneme states and viseme states) in the audio and visual modalities, performs best. The proposed approach indicates that explicitly modelling audio-visual speech is promising for speech animation.
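To make the CHMM idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of a coupled HMM forward pass over two chains: audio (phoneme-like) states and visual (viseme-like) states. Each chain's transition is conditioned on the previous states of both chains, which is what allows a CHMM to capture audio-visual asynchrony and cross-modal dependency. All state counts, symbol alphabets, and probability tables here are made up; the paper's system works on continuous acoustic features and performs EM-based A/V conversion, which this toy example does not reproduce.

```python
# Toy coupled HMM (CHMM) forward pass -- illustrative only.
# Assumptions (not from the paper): discrete observations, random parameters.
import numpy as np

rng = np.random.default_rng(0)

A_STATES, V_STATES = 3, 2           # hypothetical phoneme / viseme state counts
T = 5                               # length of the toy observation sequence

def random_dist(shape, axis=-1):
    """Random conditional probability table normalized along `axis`."""
    x = rng.random(shape)
    return x / x.sum(axis=axis, keepdims=True)

# Coupled transitions: P(a_t | a_{t-1}, v_{t-1}) and P(v_t | a_{t-1}, v_{t-1})
trans_a = random_dist((A_STATES, V_STATES, A_STATES))
trans_v = random_dist((A_STATES, V_STATES, V_STATES))

# Initial joint distribution over (audio state, visual state)
init = random_dist((A_STATES, V_STATES), axis=(0, 1))

# Per-modality emission tables over toy discrete symbols
# (a real system would use Gaussian mixtures over acoustic/visual features)
emit_a = random_dist((A_STATES, 4))   # 4 hypothetical acoustic symbols
emit_v = random_dist((V_STATES, 3))   # 3 hypothetical visual symbols

obs_a = rng.integers(0, 4, size=T)    # toy acoustic observation sequence
obs_v = rng.integers(0, 3, size=T)    # toy visual observation sequence

def chmm_forward(obs_a, obs_v):
    """Forward algorithm on the joint (audio, visual) state space."""
    alpha = init * np.outer(emit_a[:, obs_a[0]], emit_v[:, obs_v[0]])
    for t in range(1, len(obs_a)):
        # Sum over previous joint states, applying the coupled transitions
        pred = np.einsum('ij,ija,ijv->av', alpha, trans_a, trans_v)
        alpha = pred * np.outer(emit_a[:, obs_a[t]], emit_v[:, obs_v[t]])
    return alpha.sum()    # likelihood of the paired observation sequences

print("toy sequence likelihood:", chmm_forward(obs_a, obs_v))
```

In the same spirit, the EM-based A/V conversion described in the abstract would replace the visual observations with hidden quantities to be inferred from audio alone; the sketch above only shows the coupled state structure that makes such cross-modal inference possible.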

Original language: English
Pages (from-to): 2325-2340
Number of pages: 16
Journal: Pattern Recognition
Volume: 40
Issue number: 8
DOIs
State: Published - Aug 2007
Externally published: Yes

Keywords

  • Audio-to-visual conversion
  • Coupled hidden Markov models (CHMMs)
  • Facial animation
  • Speech animation
  • Talking faces
