Learning Exposure Correction Via Consistency Modeling

Ntumba Elie Nsampi, Zhongyun Hu, Qing Wang

Research output: Contribution to conference › Paper › peer-review

22 Scopus citations

Abstract

Existing works on exposure correction have focused exclusively on either underexposure or overexposure. Recent work targeting both under- and over-exposure achieved state-of-the-art results; however, it tends to produce images with inconsistent corrections and occasional color artifacts. In this paper, we propose a novel neural network architecture for exposure correction that targets both under- and over-exposure. We introduce a deep feature matching loss that enables the network to learn an exposure-invariant representation in feature space, which guarantees exposure consistency across corrected images. Moreover, we leverage a global attention mechanism that allows long-range interactions between distant pixels, yielding consistently corrected images free of localized color distortions. Through extensive quantitative and qualitative experiments, we demonstrate that the proposed network outperforms the existing state of the art. Code: https://github.com/elientumba2019/Exposure-Correction-BMVC-2021.
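The abstract does not spell out the form of the deep feature matching loss. The sketch below is one plausible reading, assuming the loss penalizes an L1 distance between intermediate encoder features computed on differently exposed renditions of the same scene, so the encoder is pushed toward an exposure-invariant representation. The function name deep_feature_matching_loss, the choice of L1, and the per-layer averaging are illustrative assumptions, not the paper's actual formulation.

    import torch
    import torch.nn.functional as F

    def deep_feature_matching_loss(feats_a, feats_b):
        # feats_a / feats_b: lists of feature maps from the same encoder,
        # computed on two exposures (e.g. under- and over-exposed) of one scene.
        # Matching them layer by layer encourages exposure-invariant features.
        total = 0.0
        for fa, fb in zip(feats_a, feats_b):
            total = total + F.l1_loss(fa, fb)
        return total / len(feats_a)

    # Toy usage: random tensors stand in for encoder activations.
    feats_under = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
    feats_over = [f + 0.1 * torch.randn_like(f) for f in feats_under]
    print(deep_feature_matching_loss(feats_under, feats_over).item())

In practice such a loss would be combined with a standard reconstruction term against the properly exposed ground truth; the feature-matching term alone only enforces consistency, not correctness.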

Original language: English
State: Published - 2021
Event: 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online
Duration: 22 Nov 2021 - 25 Nov 2021

Conference

Conference: 32nd British Machine Vision Conference, BMVC 2021
City: Virtual, Online
Period: 22/11/21 - 25/11/21
