MFIF-GAN: A new generative adversarial network for multi-focus image fusion

Yicheng Wang, Shuang Xu, Junmin Liu, Zixiang Zhao, Chunxia Zhang, Jiangshe Zhang

Research output: Contribution to journal › Article › peer-review

49 Scopus citations

Abstract

Multi-Focus Image Fusion (MFIF) is a promising image enhancement technique that generates all-in-focus images to meet visual needs, and it is a precondition for many other computer vision tasks. One emerging research trend in MFIF is avoiding the defocus spread effect (DSE) around the focus/defocus boundary (FDB). This study proposes a generative adversarial network for MFIF, called MFIF-GAN, which attenuates the DSE by generating focus maps whose foreground region is deliberately slightly larger than the corresponding in-focus objects. A Squeeze-and-Excitation residual module is employed in the proposed network. Guided by this prior knowledge as a training condition, the network is trained on a synthetic dataset constructed with an α-matte model. In addition, reconstruction and gradient regularization terms are combined in the loss function to enhance boundary details and improve the quality of the fused images. Extensive experiments demonstrate that MFIF-GAN outperforms eight state-of-the-art (SOTA) methods in visual perception, quantitative analysis, and efficiency. Moreover, an edge diffusion and contraction module is proposed to verify that the focus maps generated by our method are accurate at the pixel level.
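The α-matte compositing idea mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: a soft matte α ∈ [0, 1] blends an all-in-focus image with a blurred copy, so the transition at the focus/defocus boundary (FDB) is gradual rather than sharp, mimicking the defocus spread effect (DSE). All function names, the 1-D "image", and the choice of blur are illustrative assumptions.

```python
def box_blur_1d(row, radius=1):
    # Crude box blur as a stand-in for optical defocus (illustrative only).
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def alpha_matte_composite(clear, blurred, alpha):
    # Pixel-wise blend: alpha * clear + (1 - alpha) * blurred.
    return [a * c + (1.0 - a) * b for a, c, b in zip(alpha, clear, blurred)]

# A tiny 1-D "image": bright in-focus region on the left, dark background right.
clear = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
blurred = box_blur_1d(clear, radius=1)

# Soft matte: 1 inside the focused region, 0 outside, a ramp at the FDB.
alpha = [1.0, 1.0, 0.5, 0.0, 0.0, 0.0]

# Synthetic source image with the left region in focus; boundary pixels are
# a mixture of sharp and defocused content, which is the DSE a focus map
# must account for (hence foreground regions slightly larger than objects).
source_a = alpha_matte_composite(clear, blurred, alpha)
```

Where α = 1 the composite reproduces the sharp image exactly, where α = 0 it is fully defocused, and the ramp at the boundary produces mixed pixels that a binary focus map cannot classify cleanly.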

Original language: English
Article number: 116295
Journal: Signal Processing: Image Communication
Volume: 96
State: Published - Aug 2021
Externally published: Yes

Keywords

  • Deep learning
  • Defocus spread effect
  • Generative adversarial network
  • Multi-focus image fusion
