Quality and content-aware fusion optimization mechanism of infrared and visible images

Weigang Li, Aiqing Fang, Junsheng Wu, Ying Li

Research output: Contribution to journal › Article › peer-review

Abstract

Infrared and visible image fusion aims to generate a single fused image that contains abundant texture details and thermal radiation information. For this purpose, many unsupervised deep learning image fusion methods have been proposed, but they ignore image content and quality awareness. To address these challenges, this paper presents a quality- and content-aware image fusion network, termed QCANet, capable of solving the similarity fusion optimization problems, e.g., the dependence of fusion results on source images and the weighted-average fusion effect. Specifically, QCANet is composed of three modules, i.e., an Image Fusion Network (IFNet), a Quality-Aware Network (QANet), and a Content-Aware Network (CANet). The latter two modules, QANet and CANet, aim to improve the content semantic awareness and quality awareness of IFNet. In addition, a new quality-aware image fusion loss is introduced to avoid the weighted-average effect caused by the traditional similarity-metric optimization mechanism. Therefore, the stumbling blocks of deep learning in image fusion, i.e., similarity fusion optimization problems, are significantly mitigated. Extensive experiments demonstrate that the quality- and content-aware image fusion method outperforms most state-of-the-art methods.
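The core idea, fusing with content-aware weights rather than optimizing plain similarity to a weighted average of the sources, can be illustrated with a minimal NumPy sketch. This is not the paper's actual architecture: the gradient-magnitude saliency map below is only a hand-crafted stand-in for the learned QANet/CANet outputs, and `quality_aware_loss` is a hypothetical simplification of a quality-aware objective.

```python
import numpy as np

def local_saliency(img, eps=1e-8):
    # Crude content-awareness proxy: per-pixel gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2) + eps

def fuse(ir, vis):
    # Content-aware weight map (stand-in for learned QANet/CANet scores):
    # each pixel is a convex combination favoring the more salient source.
    s_ir, s_vis = local_saliency(ir), local_saliency(vis)
    w = s_ir / (s_ir + s_vis)
    return w * ir + (1 - w) * vis

def quality_aware_loss(fused, ir, vis):
    # Instead of matching a weighted average of the two inputs (which
    # washes out detail), reward keeping the strongest source gradients.
    target = np.maximum(local_saliency(ir), local_saliency(vis))
    return float(np.mean((local_saliency(fused) - target) ** 2))
```

Because the weight map lies in [0, 1], the fused result stays pixelwise between the two sources while biasing toward whichever image carries more local structure, which is the behavior a similarity-only loss tends to average away.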

Original language: English
Pages (from-to): 47695-47717
Number of pages: 23
Journal: Multimedia Tools and Applications
Volume: 82
Issue number: 30
DOI
Publication status: Published - Dec 2023
