Fine-Scale Face Fitting and Texture Fusion With Inverse Renderer

Yang Liu, Yangyu Fan, Zhe Guo, Anam Zaman, Shiya Liu

Research output: Contribution to journal › Article › peer-review


Abstract

3D face reconstruction from a single image still suffers from low accuracy and an inability to recover texture in invisible regions. In this paper, we propose a method for generating a 3D portrait with a complete texture. A coarse face-and-head model and texture parameters are obtained by 3D Morphable Model fitting. We design an image-geometric inverse renderer that recovers normals, albedo, and lighting to jointly reconstruct facial details. A texture fusion network then extracts the valid texture from faces rendered at different viewpoints. The fused texture recovers the invisible regions of the input face, yielding a realistic surface for our 3D geometric model. Our approach faithfully reconstructs the original facial details, including the profile and head regions. Extensive experiments demonstrate that our method outperforms state-of-the-art techniques in various challenging scenarios.
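The abstract's inverse renderer recovers normals, albedo, and lighting from the image. As an illustration only (the paper's actual renderer and its lighting model are not shown here), the sketch below implements the basic Lambertian image-formation model that such inverse renderers typically invert: pixel color = albedo × shading, with shading from a single directional light. The function name and shapes are assumptions for the example.

```python
import numpy as np

def render_lambertian(normals, albedo, light_dir):
    """Shade per-pixel albedo using per-pixel normals and one directional light.

    normals:   (H, W, 3) unit surface normals
    albedo:    (H, W, 3) per-pixel reflectance
    light_dir: (3,) unit light direction
    """
    # Cosine shading term, clamped at zero for back-facing surface points.
    shading = np.clip(normals @ light_dir, 0.0, None)   # (H, W)
    return albedo * shading[..., None]                  # (H, W, 3)

# Toy example: a flat 2x2 patch facing the camera, lit head-on,
# so the shading term is 1 and the rendered color equals the albedo.
H, W = 2, 2
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0                    # all normals point along +z
albedo = np.full((H, W, 3), 0.5)
light = np.array([0.0, 0.0, 1.0])
img = render_lambertian(normals, albedo, light)
```

An inverse renderer runs this model in reverse: given `img`, it optimizes (or predicts with a network) the `normals`, `albedo`, and `light` that reproduce it, which is what lets the method refine the coarse 3DMM geometry with fine facial detail.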

Original language: English
Pages (from-to): 26-30
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 30
DOIs
State: Published - 2023

Keywords

  • 3D face reconstruction
  • 3D morphable model
  • face texture completion
  • face-and-head model
  • renderer
