TY - GEN
T1 - Multi-scale dense networks for deep high dynamic range imaging
AU - Yan, Qingsen
AU - Gong, Dong
AU - Zhang, Pingping
AU - Shi, Qinfeng
AU - Sun, Jinqiu
AU - Reid, Ian
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2019 IEEE
PY - 2019/3/4
Y1 - 2019/3/4
N2 - Generating a high dynamic range (HDR) image from a set of sequential exposures is a challenging task for dynamic scenes. The most common approaches align the input images to a reference image before merging them into an HDR image, but artifacts often appear when there is large scene motion. Recent state-of-the-art methods based on deep learning address this problem effectively. In this paper, we propose a novel deep convolutional neural network for HDR generation that aims to produce more vivid images. The key idea of our method is a coarse-to-fine scheme that gradually reconstructs the HDR image using a multi-scale architecture and a residual network. By learning the relative changes between the inputs and the ground truth, our method not only produces artifact-free images but also restores missing information. Furthermore, we compare our method with existing approaches for HDR reconstruction and show high-quality results from a set of low dynamic range (LDR) images. In both qualitative and quantitative experiments, our method consistently produces better results than existing state-of-the-art approaches on challenging scenes.
AB - Generating a high dynamic range (HDR) image from a set of sequential exposures is a challenging task for dynamic scenes. The most common approaches align the input images to a reference image before merging them into an HDR image, but artifacts often appear when there is large scene motion. Recent state-of-the-art methods based on deep learning address this problem effectively. In this paper, we propose a novel deep convolutional neural network for HDR generation that aims to produce more vivid images. The key idea of our method is a coarse-to-fine scheme that gradually reconstructs the HDR image using a multi-scale architecture and a residual network. By learning the relative changes between the inputs and the ground truth, our method not only produces artifact-free images but also restores missing information. Furthermore, we compare our method with existing approaches for HDR reconstruction and show high-quality results from a set of low dynamic range (LDR) images. In both qualitative and quantitative experiments, our method consistently produces better results than existing state-of-the-art approaches on challenging scenes.
UR - http://www.scopus.com/inward/record.url?scp=85063595800&partnerID=8YFLogxK
U2 - 10.1109/WACV.2019.00012
DO - 10.1109/WACV.2019.00012
M3 - Conference contribution
AN - SCOPUS:85063595800
T3 - Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019
SP - 41
EP - 50
BT - Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th IEEE Winter Conference on Applications of Computer Vision, WACV 2019
Y2 - 7 January 2019 through 11 January 2019
ER -