Texture-Aware Causal Feature Extraction Network for Multimodal Remote Sensing Data Classification

Zhengyi Xu, Wen Jiang, Jie Geng

Research output: Contribution to journal › Article › peer-review


Abstract

The pixel-level classification of multimodal remote sensing (RS) images plays a crucial role in the intelligent interpretation of RS data. However, existing methods mainly focus on feature interaction and fusion and fail to address the confounders introduced by sensor imaging bias, which limits their performance. In this article, we introduce causal inference into the intelligent interpretation of RS data and propose a new texture-aware causal feature extraction network (TeACFNet) for pixel-level fusion classification. Specifically, we propose a two-stage causal feature extraction (CFE) framework that helps the network learn more explicit class representations by capturing the causal relationships between multimodal heterogeneous data. In addition, to address the problem of low-resolution land-cover feature representation in RS images, we propose a refined statistical texture extraction (ReSTE) module, which integrates the semantics of statistical textures in shallow feature maps through feature refinement, quantization, and encoding. Extensive experiments on two publicly available multimodal datasets, Houston2013 and Berlin, demonstrate the effectiveness of the proposed method, which achieves new state-of-the-art performance.
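The abstract does not describe the implementation of the ReSTE module, but the "quantization and encoding" step it mentions can be illustrated with a minimal sketch: soft-assign each pixel of a shallow feature map to a set of intensity bins (a statistical-texture histogram) and encode the per-bin responses back into features. Everything below is an assumption for illustration only (PyTorch, the class name TextureQuantEncode, the bin count, the softmax temperature, and the residual fusion); it is not the authors' ReSTE implementation.

```python
# Hypothetical sketch of a "quantize then encode" statistical-texture step,
# assuming a shallow feature map of shape (B, C, H, W). Illustrative only.
import torch
import torch.nn as nn


class TextureQuantEncode(nn.Module):
    def __init__(self, in_channels: int, num_bins: int = 8):
        super().__init__()
        self.num_bins = num_bins
        # 1x1 conv re-encodes the per-bin soft-assignment maps into features.
        self.encode = nn.Conv2d(in_channels * num_bins, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Normalize each channel to [0, 1] so bin centers are comparable.
        x_min = x.amin(dim=(2, 3), keepdim=True)
        x_max = x.amax(dim=(2, 3), keepdim=True)
        x_norm = (x - x_min) / (x_max - x_min + 1e-6)
        # Evenly spaced bin centers; soft-assign every pixel to every bin.
        centers = torch.linspace(0, 1, self.num_bins, device=x.device)
        centers = centers.view(1, 1, self.num_bins, 1, 1)
        dist = torch.abs(x_norm.unsqueeze(2) - centers)       # (B, C, K, H, W)
        soft_assign = torch.softmax(-dist * 10.0, dim=2)      # temperature 10 is arbitrary
        # Flatten bins into channels, encode, and fuse residually.
        quantized = soft_assign.reshape(b, c * self.num_bins, h, w)
        return self.encode(quantized) + x


if __name__ == "__main__":
    feats = torch.randn(2, 16, 64, 64)                 # dummy shallow feature map
    module = TextureQuantEncode(in_channels=16, num_bins=8)
    print(module(feats).shape)                         # torch.Size([2, 16, 64, 64])
```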

Original language: English
Article number: 5103512
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 62
DOIs
State: Published - 2024

Keywords

  • Causal feature extraction
  • feature fusion
  • image pixel-level classification
  • multimodal remote sensing (RS)
  • texture representation learning
