Learning Exposure Correction Via Consistency Modeling

Ntumba Elie Nsampi, Zhongyun Hu, Qing Wang

Research output: Contribution to conference › Paper › peer-review

22 Citations (Scopus)

Abstract

Existing works on exposure correction have focused exclusively on either under-exposure or over-exposure. A recent work targeting both under- and over-exposure achieved state-of-the-art results; however, it tends to produce images with inconsistent correction and occasional color artifacts. In this paper, we propose a novel neural network architecture for exposure correction that targets both under- and over-exposure. We introduce a deep feature matching loss that enables the network to learn an exposure-invariant representation in feature space, which guarantees exposure consistency across images. Moreover, we leverage a global attention mechanism to allow long-range interactions between distant pixels, resulting in consistently corrected images free of localized color distortions. Through extensive quantitative and qualitative experiments, we demonstrate that the proposed network outperforms the existing state of the art. Code: https://github.com/elientumba2019/Exposure-Correction-BMVC-2021.
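The core idea of the deep feature matching loss can be illustrated with a minimal sketch: penalize the distance between feature maps extracted from differently exposed versions of the same scene, pushing the encoder toward an exposure-invariant representation. The function name `feature_matching_loss` and the toy feature pyramids below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def feature_matching_loss(feats_a, feats_b):
    """Mean squared distance between two lists of feature maps.

    feats_a / feats_b: features of the SAME scene under two different
    exposures. Minimizing this distance encourages the encoder to map
    both exposures to the same (exposure-invariant) representation.
    """
    total = 0.0
    for fa, fb in zip(feats_a, feats_b):
        total += np.mean((fa - fb) ** 2)
    return total / len(feats_a)

# Toy example: a 3-level "feature pyramid" from an under-exposed input,
# and a slightly perturbed copy standing in for the over-exposed branch.
rng = np.random.default_rng(0)
f_under = [rng.standard_normal((4, 8, 8)) for _ in range(3)]
f_over = [f + 0.01 * rng.standard_normal(f.shape) for f in f_under]

loss = feature_matching_loss(f_under, f_over)
print(loss)
```

In training, this term would be added to the usual reconstruction loss, so that the network is rewarded both for producing correct output and for representing under- and over-exposed inputs identically in feature space.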

Original language: English
Publication status: Published - 2021
Event: 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online
Duration: 22 Nov 2021 - 25 Nov 2021


