Modal Feature Disentanglement and Contribution Estimation for Multimodality Image Fusion

Tao Zhang, Xiaogang Yang, Ruitao Lu, Dingwen Zhang, Xueli Xie, Zhengjie Zhu

Research output: Contribution to journal › Article › peer-review

Abstract

Multimodality image fusion (MMIF) tasks aim at fusing complementary information from different modalities, e.g., salient objects and texture details, to improve image quality and information comprehensiveness. Most current MMIF methods adopt a "black-box" decoder to generate fused images, which leads to insufficient interpretability and difficulty in training. To deal with these problems, we convert MMIF into a modality contribution estimation task and propose a novel self-supervised fusion network based on modal feature disentanglement and contribution estimation, named MFDCE-Fuse. First, we construct a contrastive-learning autoencoder that seamlessly integrates the strengths of CNN and Swin Transformer to capture long-range global features and local texture details, and we design a contrastive reconstruction loss to promote the uniqueness and nonredundancy of the captured features. Second, considering that modal redundant features interfere with modal contribution estimation, we propose a feature-disentangled representation framework based on a contrastive constraint to obtain modal-common and modal-private features. The contribution of each modal image to the MMIF is evaluated through the proportion of modal-private features, which enhances the interpretability of the fusion process and the quality of the fused image. Furthermore, an innovative weighted perceptual loss and a feature disentanglement contrastive loss are constructed to guarantee that the private features remain intact. Qualitative and quantitative experimental results demonstrate the applicability and generalization of MFDCE-Fuse across multiple fusion tasks, including visible-infrared fusion (VIF) and medical image fusion (MIF).
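The contribution-estimation idea in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical simplification (not the paper's implementation): it assumes the energy of each modality's private features serves as a proxy for its contribution, and derives per-modality fusion weights from the proportion of that energy.

```python
import numpy as np

def contribution_weights(private_a, private_b, eps=1e-8):
    """Estimate per-modality fusion weights from the magnitude of
    modal-private features (a hypothetical proxy for the paper's
    contribution-estimation step)."""
    energy_a = np.linalg.norm(private_a)
    energy_b = np.linalg.norm(private_b)
    total = energy_a + energy_b + eps
    return energy_a / total, energy_b / total

def fuse(feat_a, feat_b, w_a, w_b):
    """Weighted combination of two modal feature maps."""
    return w_a * feat_a + w_b * feat_b
```

Under this sketch, a modality whose private features carry more energy receives a proportionally larger weight in the fused representation, making the fusion decision inspectable rather than a black box.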

Original language: English
Article number: 5012416
Journal: IEEE Transactions on Instrumentation and Measurement
Volume: 74
DOIs
State: Published - 2025

Keywords

  • Contrastive learning
  • contribution estimation
  • disentangled representation
  • feature disentanglement
  • image fusion

