Diffusion-Augmented Cross-Domain Prototypical Knowledge Distillation for Few-Shot Learning in Hyperspectral Image Classification

Chen Ding, Sirui Zheng, Mengmeng Zheng, Yizhou Dong, Wenqiang Hua, Wei Wei, Lei Zhang, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Cross-domain few-shot learning (CDFSL) has demonstrated remarkable new class recognition capabilities in hyperspectral image classification (HSIC) tasks. However, existing domain adaptation methods face two critical challenges in the cross-domain feature alignment process: first, the domain shift leads to misaligned feature transfer and diminished classification accuracy, and second, the intraclass feature dispersion and interclass boundary blurring in few-shot tasks result in degraded classification performance for novel classes. Moreover, the impact of redundant and noisy data on model discriminability is rarely considered in existing approaches. To solve these issues, this article proposes a cross-domain FSL HSIC method based on diffusion-augmented prototype knowledge distillation. First, we introduce a diffusion-augmented unsupervised domain adaptation pretraining framework to address the domain shift by performing a domain-adversarial (DA) denoising and reconstruction task using visible source data and masked target data. Second, our dual-branch spatial–spectral attention (DB-SSA) captures global and local spectral–spatial dependencies to enhance feature representation. Then, the proposed global–local prototype knowledge distillation (GL-PKD) performs global prototype alignment while conducting local contrastive learning, addressing feature dispersion and boundary ambiguity. Finally, a dynamic learning strategy prioritizes feature alignment early, gradually strengthens classification supervision through adaptive loss weights, and incorporates a signal-to-noise ratio (SNR)-enhanced loss to effectively mitigate noise interference. The experimental results on three HSI datasets demonstrate the superiority and effectiveness of the proposed cross-domain FSL based on diffusion-augmented prototype knowledge distillation (DAPKD-CFSL).
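To make the prototype-alignment and dynamic-weighting ideas in the abstract concrete, the sketch below shows a generic form of these two components: per-class prototypes computed as mean embeddings, a global alignment loss between matching source and target prototypes, and a schedule that weights alignment heavily early in training and classification later. This is an illustrative reconstruction under assumed conventions, not the authors' DAPKD-CFSL implementation; all function names and the linear schedule are assumptions.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # Prototype p_c = mean embedding of all samples with label c
    # (the standard prototypical-learning definition; assumed here).
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def global_alignment_loss(src_protos, tgt_protos):
    # Mean squared L2 distance between matching class prototypes
    # across domains; zero when the prototypes coincide.
    return float(np.mean(np.sum((src_protos - tgt_protos) ** 2, axis=1)))

def dynamic_weights(epoch, total_epochs):
    # Hypothetical linear schedule: feature alignment dominates early,
    # classification supervision is strengthened gradually.
    w_cls = epoch / total_epochs
    w_align = 1.0 - w_cls
    return w_align, w_cls
```

In a training loop, the total loss would then take the form `w_align * global_alignment_loss(...) + w_cls * classification_loss`, with the weights updated each epoch.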

Original language: English
Article number: 5518319
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 63
State: Published - 2025

Keywords

  • Cross domain
  • diffusion model
  • few-shot learning (FSL)
  • hyperspectral image classification (HSIC)
  • knowledge distillation
  • prototypical learning

