TY - JOUR
T1 - From Dynamic to Static
T2 - Stepwisely Generate HDR Image for Ghost Removal
AU - Yan, Qingsen
AU - Yang, Kangzhen
AU - Hu, Tao
AU - Chen, Genggeng
AU - Dai, Kexin
AU - Wu, Peng
AU - Ren, Wenqi
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Generating high-quality high dynamic range (HDR) images in dynamic scenes is particularly challenging due to the influence of large motion. Despite the effectiveness of existing deep learning methods, they still suffer from ghosting artifacts when saturation and motion coexist. Inspired by fusion on static scenes, we propose an inpainting and fusion strategy to enhance the quality of the generated HDR images. The proposed method consists of pseudo-static LDR generation and detail-guided HDR generation, which creates pseudo-static images and then generates ghost-free HDR images. Specifically, the pseudo-static LDR generation network utilizes semantic information to identify the motion regions, and employs a diffusion model-based inpainting approach to produce pseudo-static LDR images that closely resemble real scenes. In the detail-guided HDR generation network, we employ a detail enhancement module to refine diverse high-frequency features with detailed information extracted from pseudo-static LDR images, which effectively enhances the visual quality. Extensive experiments on four public datasets demonstrate the superiority of the proposed method, both quantitatively and qualitatively.
AB - Generating high-quality high dynamic range (HDR) images in dynamic scenes is particularly challenging due to the influence of large motion. Despite the effectiveness of existing deep learning methods, they still suffer from ghosting artifacts when saturation and motion coexist. Inspired by fusion on static scenes, we propose an inpainting and fusion strategy to enhance the quality of the generated HDR images. The proposed method consists of pseudo-static LDR generation and detail-guided HDR generation, which creates pseudo-static images and then generates ghost-free HDR images. Specifically, the pseudo-static LDR generation network utilizes semantic information to identify the motion regions, and employs a diffusion model-based inpainting approach to produce pseudo-static LDR images that closely resemble real scenes. In the detail-guided HDR generation network, we employ a detail enhancement module to refine diverse high-frequency features with detailed information extracted from pseudo-static LDR images, which effectively enhances the visual quality. Extensive experiments on four public datasets demonstrate the superiority of the proposed method, both quantitatively and qualitatively.
KW - ghosting artifacts
KW - High dynamic range image
KW - multi-exposed imaging
KW - segment anything model
UR - http://www.scopus.com/inward/record.url?scp=85205438843&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2024.3467259
DO - 10.1109/TCSVT.2024.3467259
M3 - Article
AN - SCOPUS:85205438843
SN - 1051-8215
VL - 35
SP - 1409
EP - 1421
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 2
ER -