SFNet: Clothed Human 3D Reconstruction via Single Side-to-Front View RGB-D Image

Xing Li, Yangyu Fan, Di Xu, Wenqing He, Guoyun Lv, Shiya Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Front-view human information is critical for reconstructing a detailed 3D human body from a single RGB/RGB-D image. However, a front-view portrait is not always available in practice. We therefore propose a bidirectional network (SFNet) with two branches: one transforms a side-view RGB image to the front view, and the other transforms a side-view depth image to the front view. Since normal maps typically encode more 3D surface detail than depth maps, we leverage an adversarial learning framework conditioned on normal maps to improve front-view depth prediction. Our method is end-to-end trainable, yielding high-fidelity front-view RGB-D estimation and 3D reconstruction.
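To make the described architecture concrete, the sketch below shows one plausible reading of the abstract in PyTorch: two encoder-decoder branches that map side-view RGB and depth to front views, plus a discriminator whose input is the predicted depth concatenated with a normal map, standing in for the normal-map-conditioned adversarial loss. All module names, layer sizes, and shapes here are illustrative assumptions, not the authors' actual SFNet design.

```python
# Minimal sketch of the two-branch side-to-front idea from the abstract.
# Everything below (layer widths, depths, conditioning scheme) is an
# assumption for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Downsample by 2 with conv + batch norm + ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

def deconv_block(c_in, c_out):
    # Upsample by 2 with transposed conv + batch norm + ReLU.
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class SideToFrontBranch(nn.Module):
    """Encoder-decoder mapping a side-view image to a front-view image."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(c_in, 64), conv_block(64, 128), conv_block(128, 256),
            deconv_block(256, 128), deconv_block(128, 64),
            nn.ConvTranspose2d(64, c_out, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class NormalConditionedDiscriminator(nn.Module):
    """PatchGAN-style critic over (predicted depth, normal map) pairs."""
    def __init__(self):
        super().__init__()
        # Input: 1 depth channel + 3 normal-map channels.
        self.net = nn.Sequential(
            conv_block(4, 64), conv_block(64, 128),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch realism scores
        )

    def forward(self, depth, normals):
        return self.net(torch.cat([depth, normals], dim=1))

# Hypothetical usage: side-view RGB (3 ch) and depth (1 ch) in, front views out.
rgb_branch = SideToFrontBranch(c_in=3, c_out=3)
depth_branch = SideToFrontBranch(c_in=1, c_out=1)
disc = NormalConditionedDiscriminator()

side_rgb = torch.randn(2, 3, 128, 128)
side_depth = torch.randn(2, 1, 128, 128)
front_normals = torch.randn(2, 3, 128, 128)  # stand-in for normals derived from depth

front_rgb = rgb_branch(side_rgb)
front_depth = depth_branch(side_depth)
adv_scores = disc(front_depth, front_normals)  # adversarial signal conditioned on normals
print(front_rgb.shape, front_depth.shape, adv_scores.shape)
```

In this reading, conditioning the discriminator on normal maps (rather than scoring depth alone) pushes the depth branch toward outputs whose surface orientation is plausible, which is where fine cloth detail lives; the paper's actual conditioning and loss design may differ.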

Original language: English
Title of host publication: 2022 8th International Conference on Virtual Reality, ICVR 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 15-20
Number of pages: 6
ISBN (Electronic): 9781665479110
DOIs
State: Published - 2022
Event: 8th International Conference on Virtual Reality, ICVR 2022 - Nanjing, China
Duration: 26 May 2022 – 28 May 2022

Publication series

Name: International Conference on Virtual Rehabilitation, ICVR
Volume: 2022-May
ISSN (Electronic): 2331-9569

Conference

Conference: 8th International Conference on Virtual Reality, ICVR 2022
Country/Territory: China
City: Nanjing
Period: 26/05/22 – 28/05/22

Keywords

  • 3D reconstruction
  • bidirectional network
  • front-view
  • normal map
  • RGB-D
