MFIF-GAN: A new generative adversarial network for multi-focus image fusion

Yicheng Wang, Shuang Xu, Junmin Liu, Zixiang Zhao, Chunxia Zhang, Jiangshe Zhang

Research output: Contribution to journal › Article › peer-review

48 Citations (Scopus)

Abstract

Multi-Focus Image Fusion (MFIF) is a promising image enhancement technique for generating all-in-focus images that meet visual needs, and it is a precondition for other computer vision tasks. One emerging research trend in MFIF involves approaches that avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB). This study proposes a generative adversarial network for MFIF tasks, called MFIF-GAN, which attenuates the DSE by generating focus maps in which the foreground region is correctly larger than the corresponding objects. A Squeeze-and-Excitation residual module is employed in the proposed network. By incorporating prior knowledge of the training condition, the network is trained on a synthetic dataset based on an α-matte model. In addition, reconstruction and gradient regularization terms are combined in the loss functions to enhance boundary details and improve the quality of fused images. Extensive experiments demonstrate that MFIF-GAN outperforms eight state-of-the-art (SOTA) methods in visual perception, quantitative analysis, and efficiency. Moreover, an edge diffusion and contraction module is proposed to verify that the focus maps generated by our method are accurate at the pixel level.
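The abstract states that the loss combines a reconstruction term with a gradient regularization term to enhance boundary details. The sketch below illustrates one plausible form of such a loss in PyTorch; it is not the paper's exact formulation. The function name fusion_loss, the choice of L1 distances, the forward finite-difference gradients, and the balancing weight grad_weight are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def image_gradients(x):
    # Forward finite differences along height and width for a
    # batch of images shaped (N, C, H, W).
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]
    return dy, dx

def fusion_loss(fused, target, grad_weight=0.5):
    # Reconstruction term: pixel-wise L1 distance to the reference.
    recon = F.l1_loss(fused, target)
    # Gradient regularization term: L1 distance between the image
    # gradients, encouraging sharp, well-aligned boundaries.
    fy, fx = image_gradients(fused)
    ty, tx = image_gradients(target)
    grad = F.l1_loss(fy, ty) + F.l1_loss(fx, tx)
    # grad_weight is a hypothetical balancing coefficient.
    return recon + grad_weight * grad
```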

Original language: English
Article number: 116295
Journal: Signal Processing: Image Communication
Volume: 96
DOI
Publication status: Published - Aug 2021
Externally published: Yes
