Quality and content-aware fusion optimization mechanism of infrared and visible images

Weigang Li, Aiqing Fang, Junsheng Wu, Ying Li

Research output: Contribution to journal › Article › peer-review

Abstract

Infrared and visible image fusion aims to generate a single fused image that contains abundant texture details and thermal radiation information. To this end, many unsupervised deep learning image fusion methods have been proposed, but they largely ignore image content and quality awareness. To address these challenges, this paper presents a quality and content-aware image fusion network, termed QCANet, capable of solving the similarity fusion optimization problems, e.g., the dependence of fusion results on the source images and the weighted-average fusion effect. Specifically, QCANet is composed of three modules: an Image Fusion Network (IFNet), a Quality-Aware Network (QANet), and a Content-Aware Network (CANet). The latter two modules, QANet and CANet, improve the content semantic awareness and quality awareness of IFNet. In addition, a new quality-aware image fusion loss is introduced to avoid the weighted-average effect caused by the traditional similarity-metric optimization mechanism. Consequently, the stumbling blocks of deep learning in image fusion, i.e., the similarity fusion optimization problems, are significantly mitigated. Extensive experiments demonstrate that the proposed quality and content-aware image fusion method outperforms most state-of-the-art methods.
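
This record gives only the high-level design, so what follows is a minimal sketch of how the three-module arrangement and a quality-aware loss could be wired together. The layer sizes, the element-wise-max content target, and the alpha weighting are illustrative assumptions, not the authors' published architecture.

```python
# A minimal sketch of the three-module layout described in the abstract.
# All layer sizes, the element-wise-max content target, and the alpha
# weighting are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IFNet(nn.Module):
    """Image Fusion Network: maps a concatenated IR/visible pair to one image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, ir, vis):
        return self.net(torch.cat([ir, vis], dim=1))

class QANet(nn.Module):
    """Quality-Aware Network: predicts a scalar quality score for an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)

class CANet(nn.Module):
    """Content-Aware Network: extracts content/semantic feature maps."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def quality_aware_loss(fused, ir, vis, qanet, canet, alpha=0.5):
    """Hypothetical quality-aware loss: pull the fused content toward the
    per-location stronger source response (element-wise max) rather than
    an average of the two sources, and reward higher predicted quality."""
    target = torch.max(canet(ir), canet(vis)).detach()
    content_term = F.l1_loss(canet(fused), target)
    quality_term = -qanet(fused).mean()
    return content_term + alpha * quality_term

# Example forward/backward pass with random single-channel inputs.
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
ifnet, qanet, canet = IFNet(), QANet(), CANet()
loss = quality_aware_loss(ifnet(ir, vis), ir, vis, qanet, canet)
loss.backward()
```

The element-wise max target is one simple reading of the abstract's claim of avoiding the weighted-average effect: the fused features are supervised against the dominant source response at each location instead of a compromise between both sources.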

Original language: English
Pages (from-to): 47695-47717
Number of pages: 23
Journal: Multimedia Tools and Applications
Volume: 82
Issue number: 30
State: Published - Dec 2023

Keywords

  • Content-aware mechanism
  • Deep learning
  • Image fusion
  • Quality-aware mechanism
