
Infrared and visible image fusion via mutual information maximization

  • Aiqing Fang
  • , Junsheng Wu
  • , Ying Li
  • , Ruimin Qiao

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

Traditional image fusion methods based on deep learning generally measure the similarity between the fusion results and the source images, ignoring the harmful information in the source images. This paper presents a simple yet effective self-supervised image fusion optimization mechanism that directly maximizes the mutual information between the fused image and image samples, including positive and negative samples. The fusion optimization of positive samples comprises three loss functions, a visual fidelity item, a quality perception item, and a semantic perception item, aiming to reduce the distance between the fused representation and real image quality. The fusion optimization of negative samples aims to enlarge the distance between the fusion results and the degraded images. Following InfoNCE, our framework is optimized via a surrogate contrastive loss, where the positive and negative selection underpins the real quality and visual fidelity information of fusion representation learning. Therefore, the stumbling blocks of deep learning in image fusion, i.e., similarity-based fusion optimization problems, are significantly mitigated. Extensive experiments demonstrate that the fusion results neatly outperform the state-of-the-art fusion optimization mechanisms in most metrics.
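The surrogate contrastive objective described above can be sketched with an InfoNCE-style loss: the fused representation is pulled toward a positive (high-quality) sample and pushed away from degraded negative samples. This is a minimal NumPy illustration of the general InfoNCE formulation, not the paper's actual implementation; the cosine similarity, temperature value, and sample construction are assumptions for the sake of the example.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def info_nce_loss(fused, positive, negatives, tau=0.1):
    """InfoNCE-style surrogate contrastive loss (illustrative sketch).

    Maximizing mutual information is approximated by maximizing the
    similarity logit of the positive sample relative to all samples:
        L = -log( exp(s(f, p)/tau) / sum_i exp(s(f, x_i)/tau) )
    where x_i ranges over the positive and all negatives.
    """
    pos_logit = cosine(fused, positive) / tau
    neg_logits = [cosine(fused, n) / tau for n in negatives]
    logits = np.array([pos_logit] + neg_logits)
    # Log-sum-exp with max subtraction for numerical stability.
    m = logits.max()
    log_denom = m + np.log(np.exp(logits - m).sum())
    return float(log_denom - pos_logit)  # always >= 0
```

A fused representation that lies close to the positive sample and far from the degraded negatives yields a small loss, while one resembling a negative sample is penalized, which is the behavior the paper's positive/negative selection is designed to exploit.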

Original language: English
Article number: 103683
Journal: Computer Vision and Image Understanding
Volume: 231
DOI
Publication status: Published - Jun 2023
