Abstract
Infrared and visible image fusion aims to generate a single fused image that contains abundant texture details and thermal radiation information. Many unsupervised deep learning fusion methods have been proposed for this purpose, but they largely ignore image content and image quality. To address these challenges, this paper presents a quality- and content-aware image fusion network, termed QCANet, which tackles the optimization problems of similarity-based fusion, e.g., the dependence of fusion results on the source images and the weighted-average fusion effect. Specifically, QCANet is composed of three modules: an Image Fusion Network (IFNet), a Quality-Aware Network (QANet), and a Content-Aware Network (CANet). The latter two modules, QANet and CANet, improve the content-semantic awareness and quality awareness of IFNet. In addition, a new quality-aware image fusion loss is introduced to avoid the weighted-average effect caused by the traditional similarity-metric optimization mechanism. As a result, the stumbling blocks of deep learning in image fusion, i.e., the similarity-based fusion optimization problems, are significantly mitigated. Extensive experiments demonstrate that the proposed quality- and content-aware fusion method outperforms most state-of-the-art methods.
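The "weighted-average effect" the abstract criticizes can be illustrated with a toy sketch. This is not the paper's QCANet (the actual method uses three learned networks and a quality-aware loss); it is a minimal, assumed illustration contrasting similarity-style blending, which attenuates both the thermal target and the visible texture, with a quality-aware per-pixel choice that uses local contrast as a crude quality proxy:

```python
# Toy illustration (NOT the paper's QCANet): why similarity-driven blending
# washes out salient content, and how a quality-aware selection avoids it.

def weighted_average_fusion(ir, vis, w=0.5):
    """Similarity-style fusion: every pixel is a blend of both sources,
    so strong IR targets and fine visible textures are both attenuated."""
    return [w * a + (1 - w) * b for a, b in zip(ir, vis)]

def local_contrast(img, i):
    """Crude per-pixel quality proxy: absolute deviation from the
    mean of the pixel's 3-neighborhood."""
    lo, hi = max(0, i - 1), min(len(img) - 1, i + 1)
    neigh = img[lo:hi + 1]
    return abs(img[i] - sum(neigh) / len(neigh))

def quality_aware_fusion(ir, vis):
    """Quality-aware fusion sketch: each pixel keeps whichever source has
    higher local contrast, preserving both targets and texture details."""
    return [ir[i] if local_contrast(ir, i) >= local_contrast(vis, i) else vis[i]
            for i in range(len(ir))]

# 1-D "images": a bright thermal target in IR, textured detail in visible.
ir  = [0.1, 0.9, 0.1, 0.1]
vis = [0.4, 0.4, 0.4, 0.8]
print(weighted_average_fusion(ir, vis))  # target blended down to 0.65
print(quality_aware_fusion(ir, vis))     # target 0.9 and detail 0.8 both kept
```

The quality-aware variant keeps the IR target at full intensity (0.9) and the visible detail (0.8), while the weighted average dilutes both; QCANet's quality-aware loss pursues the same goal with learned quality estimates rather than this hand-crafted contrast proxy.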
| Original language | English |
|---|---|
| Pages (from-to) | 47695-47717 |
| Number of pages | 23 |
| Journal | Multimedia Tools and Applications |
| Volume | 82 |
| Issue number | 30 |
| DOIs | |
| State | Published - Dec 2023 |
Keywords
- Content-aware mechanism
- Deep learning
- Image fusion
- Quality-aware mechanism
Cite this article: Quality and content-aware fusion optimization mechanism of infrared and visible images. Multimedia Tools and Applications, 82(30), 47695–47717, December 2023.