A realistic dynamic facial expression transfer method

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

We present a novel approach for synthesizing the dynamic facial expressions of a source subject and transferring them to a target subject. The synthesized animation of the target subject preserves both the facial appearance of the target subject and the expression deformation of the source subject. We use an active appearance model to separate and align the shapes and textures of multi-expression facial images. The dynamic facial expressions of the source subject are obtained by a nonlinear TensorFace trained on a small sample size. By interpolating the aligned sequential shapes of different expressions, we obtain smooth shape variations under each expression, according to which we warp the neutral face to the other expressions. However, the warped expressions lack expression details. We therefore transfer the facial detail obtained by the nonlinear TensorFace to the warped dynamic expression faces using the proposed strategy. Experiments on the extended Cohn-Kanade (CK+) facial expression database show that our results have higher perceptual quality than those of state-of-the-art methods.
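The shape-interpolation and warping stage described above can be sketched in a few lines. The following is a minimal illustration under assumed interfaces, not the authors' implementation: it takes AAM landmarks as pre-aligned (N, 2) NumPy arrays, linearly blends a neutral shape toward a peak-expression shape, and uses scikit-image's piecewise-affine warp to deform the neutral face. The `detail_ratio` in the last function is a hypothetical stand-in for the TensorFace-derived expression detail, since the paper's exact transfer strategy is not given in the abstract.

```python
# Minimal sketch of shape interpolation + expression warping, assuming
# AAM landmarks are already separated and aligned as (N, 2) float arrays.
# Illustrative only: the paper's nonlinear TensorFace and its
# detail-transfer strategy are not reproduced here.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def interpolate_shapes(neutral_shape, expr_shape, num_frames=30):
    """Linearly blend aligned landmark sets into a smooth shape sequence."""
    return [(1.0 - t) * neutral_shape + t * expr_shape
            for t in np.linspace(0.0, 1.0, num_frames)]

def warp_to_shape(neutral_image, neutral_shape, target_shape):
    """Piecewise-affine warp of the neutral face toward a target shape."""
    tform = PiecewiseAffineTransform()
    # warp() maps output coordinates back to input coordinates, so the
    # transform is estimated from target landmarks to neutral landmarks.
    tform.estimate(target_shape, neutral_shape)
    return warp(neutral_image, tform, output_shape=neutral_image.shape)

def add_expression_detail(warped_face, detail_ratio):
    """Hypothetical detail transfer: modulate the warped face (values in
    [0, 1]) with a ratio image standing in for the expression detail."""
    return np.clip(warped_face * detail_ratio, 0.0, 1.0)

# Usage sketch: animate the target's neutral face through the source's
# interpolated expression shapes (all input names are assumptions).
# shapes = interpolate_shapes(src_neutral_lms, src_expr_lms)
# frames = [warp_to_shape(tgt_neutral_img, tgt_neutral_lms, s) for s in shapes]
```

Because the piecewise-affine warp is only defined inside the convex hull of the landmarks, a practical pipeline would also handle the image region outside the face; that step is omitted here for brevity.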

Original language: English
Pages (from-to): 21-29
Number of pages: 9
Journal: Neurocomputing
Volume: 89
Publication status: Published - 15 Jul 2012
