TY - JOUR
T1 - Deep HDR Imaging via A Non-Local Network
AU - Yan, Qingsen
AU - Zhang, Lei
AU - Liu, Yu
AU - Zhu, Yu
AU - Sun, Jinqiu
AU - Shi, Qinfeng
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2020
Y1 - 2020
N2 - One of the most challenging problems in reconstructing a high dynamic range (HDR) image from multiple low dynamic range (LDR) inputs is the ghosting artifacts caused by object motion across different inputs. When the object motion is slight, most existing methods can suppress ghosting artifacts well by aligning the LDR inputs based on optical flow or by detecting anomalies among them. However, they often fail to produce satisfactory results in practice, since real object motion can be very large. In this study, we present a novel deep framework, termed NHDRRnet, which takes an alternative direction and attempts to remove ghosting artifacts by exploiting the non-local correlation in the inputs. In NHDRRnet, we first adopt a U-Net architecture to fuse all inputs and map the fusion results into a low-dimensional deep feature space. Then, we feed the resultant features into a novel global non-local module, which reconstructs each pixel by computing a weighted average of all other pixels, with the weights determined by their correspondences. In this way, the proposed NHDRRnet is able to adaptively select useful information (e.g., information not corrupted by large motions or adverse lighting conditions) from the whole deep feature space to accurately reconstruct each pixel. In addition, we incorporate a triple-pass residual module to capture more powerful local features, which proves effective in further boosting the performance. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed NHDRRnet in suppressing ghosting artifacts in HDR reconstruction, especially when the objects have large motions.
AB - One of the most challenging problems in reconstructing a high dynamic range (HDR) image from multiple low dynamic range (LDR) inputs is the ghosting artifacts caused by object motion across different inputs. When the object motion is slight, most existing methods can suppress ghosting artifacts well by aligning the LDR inputs based on optical flow or by detecting anomalies among them. However, they often fail to produce satisfactory results in practice, since real object motion can be very large. In this study, we present a novel deep framework, termed NHDRRnet, which takes an alternative direction and attempts to remove ghosting artifacts by exploiting the non-local correlation in the inputs. In NHDRRnet, we first adopt a U-Net architecture to fuse all inputs and map the fusion results into a low-dimensional deep feature space. Then, we feed the resultant features into a novel global non-local module, which reconstructs each pixel by computing a weighted average of all other pixels, with the weights determined by their correspondences. In this way, the proposed NHDRRnet is able to adaptively select useful information (e.g., information not corrupted by large motions or adverse lighting conditions) from the whole deep feature space to accurately reconstruct each pixel. In addition, we incorporate a triple-pass residual module to capture more powerful local features, which proves effective in further boosting the performance. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed NHDRRnet in suppressing ghosting artifacts in HDR reconstruction, especially when the objects have large motions.
KW - High dynamic range image
KW - ghosting artifacts
KW - hybrid network
KW - non-local module
UR - http://www.scopus.com/inward/record.url?scp=85079459723&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.2971346
DO - 10.1109/TIP.2020.2971346
M3 - Article
AN - SCOPUS:85079459723
SN - 1057-7149
VL - 29
SP - 4308
EP - 4322
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
M1 - 8989959
ER -