SIDNet: Learning Shading-Aware Illumination Descriptor for Image Harmonization

Zhongyun Hu, Ntumba Elie Nsampi, Xue Wang, Qing Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Image harmonization aims at adjusting the appearance of the foreground to make it more compatible with the background. Without exploring background illumination and its effects on the foreground elements, existing works are incapable of generating realistic foreground shading. In this paper, we decompose the image harmonization task into two sub-problems: 1) illumination estimation of the background image and 2) re-rendering of foreground objects under the background illumination. Before solving these two sub-problems, we first learn a shading-aware illumination descriptor via a well-designed neural rendering framework, whose key component is a shading bases module that generates multiple shading bases from the foreground image. Then we design a background illumination estimation module to extract the illumination descriptor from the background. Finally, the Shading-aware Illumination Descriptor is used in conjunction with the neural rendering framework (SIDNet) to produce the harmonized foreground image containing a novel harmonized shading. Moreover, we construct a photo-realistic synthetic image harmonization dataset that contains numerous shading variations under image-based lighting. Extensive experiments on both synthetic and real data demonstrate the superiority of the proposed method, especially in dealing with foreground shading.
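The abstract describes a pipeline in which foreground shading bases are recombined under an illumination descriptor estimated from the background. The sketch below is one plausible, minimal reading of that pipeline, not the authors' implementation: the module names, network layers, tensor shapes, and the linear-combination step are all assumptions made purely for illustration.

```python
# Hypothetical sketch of a SIDNet-style pipeline (assumed design, not the paper's code):
# a shading-bases module predicts K per-pixel shading bases from the foreground,
# a background encoder produces a K-dim illumination descriptor, and the harmonized
# shading is taken here as a descriptor-weighted sum of the bases.
import torch
import torch.nn as nn


class ShadingBasesModule(nn.Module):
    """Predicts K per-pixel shading bases from the foreground image (assumed)."""

    def __init__(self, in_ch=3, num_bases=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_bases, 3, padding=1),
        )

    def forward(self, fg_img):          # (B, 3, H, W)
        return self.net(fg_img)         # (B, K, H, W) shading bases


class BackgroundIlluminationEstimator(nn.Module):
    """Extracts a K-dim illumination descriptor from the background image (assumed)."""

    def __init__(self, in_ch=3, num_bases=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_bases),
        )

    def forward(self, bg_img):          # (B, 3, H, W)
        return self.encoder(bg_img)     # (B, K) illumination descriptor


def render_harmonized_shading(bases, descriptor):
    """Combine shading bases weighted by the illumination descriptor (assumed)."""
    weights = descriptor.unsqueeze(-1).unsqueeze(-1)    # (B, K, 1, 1)
    return (bases * weights).sum(dim=1, keepdim=True)   # (B, 1, H, W) shading map


if __name__ == "__main__":
    fg = torch.rand(1, 3, 256, 256)
    bg = torch.rand(1, 3, 256, 256)
    bases = ShadingBasesModule()(fg)
    desc = BackgroundIlluminationEstimator()(bg)
    shading = render_harmonized_shading(bases, desc)
    harmonized_fg = fg * shading        # re-render the foreground under background lighting
    print(harmonized_fg.shape)          # torch.Size([1, 3, 256, 256])
```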

Original language: English
Pages (from-to): 1290-1302
Number of pages: 13
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 8
Issue number: 2
DOIs
State: Published - 1 Apr 2024

Keywords

  • Illumination
  • Image harmonization
  • Neural rendering
  • Shading field
