A realistic dynamic facial expression transfer method

Research output: Contribution to journal › Article › peer-review

Abstract

We present a novel approach for synthesizing the dynamic facial expressions of a source subject and transferring them to a target subject. The synthesized animation of the target subject preserves both the facial appearance of the target subject and the expression deformation of the source subject. We use an active appearance model to separate and align the shapes and textures of multi-expression facial images. The dynamic facial expressions of the source subject are obtained by a nonlinear TensorFace model trained on a small sample. By interpolating the aligned sequential shapes of different expressions, we obtain smooth shape variations under each expression, according to which we warp the neutral face to the other expressions. However, the warped expressions lack expression details, so we transfer the facial details obtained by the nonlinear TensorFace to the warped dynamic expression faces using the proposed strategy. Experiments on the extended Cohn-Kanade (CK+) facial expression database show that our results have higher perceptual quality than those of state-of-the-art methods.
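The interpolation-and-warping step described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes 2D landmark shapes already aligned by the AAM, stands in linear shape blending for the paper's expression-manifold interpolation, uses scikit-image's PiecewiseAffineTransform as the warping backend, and omits the nonlinear TensorFace detail transfer entirely. All function names are our own.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def interpolate_shapes(shape_neutral, shape_peak, n_frames):
    """Blend two aligned (n_landmarks, 2) shapes into a smooth sequence.

    Linear blending is a stand-in for the paper's manifold interpolation.
    """
    alphas = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - a) * shape_neutral + a * shape_peak for a in alphas]

def warp_to_shape(neutral_image, shape_neutral, shape_target):
    """Warp the neutral face so its landmarks land on shape_target.

    Landmarks are (x, y) pairs, the order skimage transforms expect.
    """
    tform = PiecewiseAffineTransform()
    # warp() treats the transform as the *inverse* map (output -> input),
    # so we estimate from the target shape back to the neutral shape.
    tform.estimate(shape_target, shape_neutral)
    return warp(neutral_image, tform)

def synthesize_sequence(neutral_image, shape_neutral, shape_peak, n_frames=10):
    """Animate one neutral face through an expression, frame by frame."""
    shapes = interpolate_shapes(shape_neutral, shape_peak, n_frames)
    return [warp_to_shape(neutral_image, shape_neutral, s) for s in shapes]
```

The piecewise-affine warp only moves pixels inside the convex hull of the landmarks; in the paper's pipeline, the warped frames would then receive the expression details synthesized by the nonlinear TensorFace.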

Original language: English
Pages (from-to): 21-29
Number of pages: 9
Journal: Neurocomputing
Volume: 89
DOIs
State: Published - 15 Jul 2012

Keywords

  • Dynamic expression
  • Expression manifold
  • Expression synthesis
  • Nonlinear tensor decomposition
