MaskRecon: High-quality human reconstruction via masked autoencoders using a single RGB-D image

Xing Li, Yangyu Fan, Zhe Guo, Zhibo Rao, Yu Duan, Shiya Liu

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we explore reconstructing high-quality clothed 3D humans from a single RGB-D image, assuming that virtual humans can be represented by front-view and back-view depths. Due to the scarcity of captured real RGB-D human images, we employ rendered images to train our method. However, rendered images lack backgrounds and exhibit significant depth variation along silhouettes, leading to inaccuracies and noise in shape prediction. To mitigate this issue, we introduce a pseudo-multi-task framework, which incorporates a Conditional Generative Adversarial Network (CGAN) to infer back-view RGB-D images and a self-supervised Masked Autoencoder (MAE) to capture latent structural information of the human body. Additionally, we propose a Multi-scale Feature Fusion (MFF) module to effectively merge structural information and conditional features at various scales. Our method surpasses many existing techniques, as demonstrated through evaluations on the Thuman, RenderPeople, and BUFF datasets. Notably, our approach excels in reconstructing high-quality human models, even under challenging conditions such as complex poses and loose clothing, on both rendered and real-world images. Code is available at https://github.com/Archaic-Atom/MaskRecon.
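The fusion idea described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example of one fusion step, assuming the MFF module merges an MAE-derived structural feature map with a CGAN-derived conditional feature map at mismatched scales; all class, argument, and variable names (MFFBlock, struct_feat, cond_feat) are our own illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of one multi-scale feature fusion (MFF) step.
# Names and shapes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class MFFBlock(nn.Module):
    """Fuses structural features (e.g. from an MAE encoder) with
    conditional features (e.g. from a CGAN generator) at one scale."""
    def __init__(self, struct_ch: int, cond_ch: int, out_ch: int):
        super().__init__()
        # 1x1 convs project both streams to a common channel width.
        self.proj_struct = nn.Conv2d(struct_ch, out_ch, kernel_size=1)
        self.proj_cond = nn.Conv2d(cond_ch, out_ch, kernel_size=1)
        # A 3x3 conv blends the concatenated streams.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, struct_feat, cond_feat):
        s = self.proj_struct(struct_feat)
        c = self.proj_cond(cond_feat)
        # Upsample the structural map if the two scales disagree.
        if s.shape[-2:] != c.shape[-2:]:
            s = nn.functional.interpolate(
                s, size=c.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([s, c], dim=1))

# Toy usage: fuse a coarse structural map with a finer conditional map.
if __name__ == "__main__":
    block = MFFBlock(struct_ch=256, cond_ch=128, out_ch=128)
    struct_feat = torch.randn(1, 256, 16, 16)  # coarse structural features
    cond_feat = torch.randn(1, 128, 32, 32)    # finer conditional features
    fused = block(struct_feat, cond_feat)
    print(fused.shape)  # torch.Size([1, 128, 32, 32])
```

The projection-then-concatenate pattern here is one common way to reconcile feature maps of different channel widths and spatial resolutions; the paper's actual MFF design may differ.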

Original language: English
Article number: 128487
Journal: Neurocomputing
Volume: 609
DOIs
State: Published - 7 Dec 2024

Keywords

  • Masked autoencoder
  • Multi-scale feature fusion
  • Pseudo-multi-task framework
  • Reconstruct clothed 3D human
  • Single RGB-D image

