Perceptual Loss-Constrained Adversarial Autoencoder Networks for Hyperspectral Unmixing

Min Zhao, Mou Wang, Jie Chen, Susanto Rahardja

Research output: Contribution to journal › Article › peer-review


Abstract

Recently, deep autoencoder-based methods for blind spectral unmixing have attracted great attention owing to their superior performance. However, most autoencoder-based unmixing methods train their networks with a non-structured reconstruction loss, which neglects band-to-band dependencies and fine-grained spectral information. To address this issue, we propose a general perceptual loss-constrained adversarial autoencoder network for hyperspectral unmixing. Specifically, an adversarial training process is used to update our framework: the discriminator network proves effective at discovering discrepancies between reconstructed pixels and their corresponding ground truth. Moreover, the general perceptual loss is combined with the adversarial loss to further improve the consistency of high-level representations. Ablation studies verify the effectiveness of the proposed components of our framework, and experiments on both synthetic and real data illustrate the superiority of our framework over competing methods.
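The abstract describes a training objective that combines a pixel-wise reconstruction term with an adversarial term and a perceptual term. A minimal NumPy sketch of such a combined loss is shown below; the random projection standing in for the perceptual feature extractor, the logistic map standing in for the discriminator, and the weights `lam_adv` and `lam_perc` are all illustrative assumptions, not the paper's actual networks or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 16 pixels, 100 spectral bands, 8-dim feature space.
n_pixels, n_bands, n_feat = 16, 100, 8

x = rng.random((n_pixels, n_bands))                  # "ground-truth" pixels
x_hat = x + 0.01 * rng.standard_normal(x.shape)      # reconstructed pixels

# Stand-in perceptual feature extractor (assumption: in the paper this role
# is played by learned hidden-layer features); a fixed random projection here.
W = rng.standard_normal((n_bands, n_feat))
phi = lambda z: np.tanh(z @ W)

# Stand-in discriminator producing a "real" probability per pixel
# (assumption: a fixed random logistic map, purely illustrative).
v = rng.standard_normal(n_bands)
d = lambda z: 1.0 / (1.0 + np.exp(-(z @ v)))

recon_loss = np.mean((x - x_hat) ** 2)           # non-structured pixel-wise MSE
perc_loss = np.mean((phi(x) - phi(x_hat)) ** 2)  # perceptual (feature-space) term
adv_loss = -np.mean(np.log(d(x_hat) + 1e-12))    # generator-side adversarial term

lam_adv, lam_perc = 0.01, 0.1                    # assumed trade-off weights
total = recon_loss + lam_adv * adv_loss + lam_perc * perc_loss
print(f"recon={recon_loss:.4f} perc={perc_loss:.4f} "
      f"adv={adv_loss:.4f} total={total:.4f}")
```

In an actual implementation, `phi` and `d` would be trainable networks updated alternately with the autoencoder, and the reconstruction would come from the unmixing autoencoder's endmember/abundance decomposition.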

Original language: English
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 19
State: Published - 2022

Keywords

  • Autoencoder
  • fine structure
  • generative adversarial network (GAN)
  • hyperspectral unmixing
  • perceptual loss

