Modal Feature Disentanglement and Contribution Estimation for Multi-Modality Image Fusion

Tao Zhang, Xiaogang Yang, Ruitao Lu, Dingwen Zhang, Xueli Xie, Zhengjie Zhu

Research output: Contribution to journal › Article › peer-review

Abstract

The multi-modality image fusion (MMIF) task aims to fuse complementary information from different modalities, e.g., salient objects and texture details, to improve image quality and information comprehensiveness. Most current MMIF methods adopt a 'black-box' decoder to generate fused images, which leads to insufficient interpretability and difficulty in training. To address these problems, we recast MMIF as a modality contribution estimation task and propose a novel self-supervised fusion network based on modal feature disentanglement and contribution estimation, named MFDCE-Fuse. First, we construct a contrastive-learning auto-encoder that seamlessly integrates the strengths of CNNs and the Swin Transformer to capture long-range global features and local texture details, and we design a contrastive reconstruction loss to promote the uniqueness and non-redundancy of the captured features. Second, since modal redundant features interfere with modal contribution estimation, we propose a feature disentangled representation framework based on a contrastive constraint to obtain modal-common and modal-private features. The contribution of each modal image to the fusion is then estimated from the proportion of its modal-private features, which improves both the interpretability of the fusion process and the quality of the fused image. Furthermore, a weighted perceptual loss and a feature disentanglement contrastive loss are constructed to guarantee that the private features remain intact. Qualitative and quantitative experimental results demonstrate the applicability and generalization of MFDCE-Fuse across multiple fusion tasks, including visible-infrared and medical image fusion.
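The abstract estimates each modality's contribution from the proportion of its modal-private features, but the exact estimator and loss formulas are not given here. The following is a minimal PyTorch sketch of one plausible reading: the fusion weight is each modality's share of private-feature L1 energy, and a generic contrastive constraint pulls the modal-common features together while pushing the two modal-private features apart. All function names, the energy measure, and the margin-based loss are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def contribution_weights(private_a: torch.Tensor,
                         private_b: torch.Tensor,
                         eps: float = 1e-8):
    """Estimate per-sample fusion weights as each modality's share of
    modal-private feature energy (hypothetical reading of the paper's
    'proportion of modal-private features').

    private_a, private_b: disentangled private features of the two
    modalities, shape (batch, channels, height, width).
    """
    # L1 energy of each modality's private features, per sample.
    energy_a = private_a.abs().flatten(1).sum(dim=1)
    energy_b = private_b.abs().flatten(1).sum(dim=1)
    w_a = energy_a / (energy_a + energy_b + eps)
    return w_a, 1.0 - w_a

def fuse_features(feat_a, feat_b, w_a, w_b):
    """Weighted fusion of the two modalities' encoder features."""
    # Broadcast per-sample scalar weights over the feature maps.
    return w_a.view(-1, 1, 1, 1) * feat_a + w_b.view(-1, 1, 1, 1) * feat_b

def disentanglement_contrastive_loss(common_a, common_b,
                                     private_a, private_b,
                                     margin: float = 1.0):
    """Generic contrastive constraint: pull the modal-common features
    of the two modalities together, push their modal-private features
    apart (the paper's actual loss is not specified in the abstract)."""
    pull = F.mse_loss(common_a, common_b)
    push = F.relu(margin - F.mse_loss(private_a, private_b))
    return pull + push

# Toy usage with random features (batch of 2, 64 channels).
feat_ir, feat_vis = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
priv_ir, priv_vis = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
w_ir, w_vis = contribution_weights(priv_ir, priv_vis)
fused = fuse_features(feat_ir, feat_vis, w_ir, w_vis)
```

Because the weights are explicit scalars per sample, the estimated contributions can be reported directly, which is one way such a design can make the fusion process more interpretable than a black-box decoder.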

Original language: English
Journal: IEEE Transactions on Instrumentation and Measurement
State: Accepted/In press - 2025

Keywords

  • Contrastive learning
  • Contribution estimation
  • Disentangled representation
  • Feature disentanglement
  • Image fusion

