TY - JOUR
T1 - SSAF-Net
T2 - A Spatial-Spectral Adaptive Fusion Network for Hyperspectral Unmixing with Endmember Variability
AU - Gao, Wei
AU - Yang, Jingyu
AU - Zhang, Yu
AU - Akoudad, Youssef
AU - Chen, Jie
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Deep learning has recently garnered substantial interest in hyperspectral unmixing due to its exceptional learning capabilities. In particular, unsupervised unmixing methods based on autoencoders have become a research hotspot, and many existing networks focus on the fusion of spatial and spectral information. However, the diversity of fusion structures makes it challenging to select modules that meet unmixing requirements, and the issue of endmember variability is often neglected. In this paper, we propose a novel spatial-spectral adaptive fusion network (SSAF-Net) that accounts for endmember variability. The network consists of two cascaded encoders and a deep generative model (DGM) based on a variational autoencoder. The encoders perform local spatial-spectral information fusion through channel and spatial attention mechanisms, respectively, while a self-perception loss facilitates global information fusion during the cascading process. In addition, we address endmember variability using a proportional perturbation model (PPM), learning the required endmember parameters through a carefully designed DGM. SSAF-Net thus learns both the endmember variability and the corresponding abundances in an unsupervised manner. Experimental results on a synthetic dataset and real-world datasets demonstrate that SSAF-Net significantly outperforms competing methods. The code for this work is available at https://github.com/yjysimply/SSAF-Net.
AB - Deep learning has recently garnered substantial interest in hyperspectral unmixing due to its exceptional learning capabilities. In particular, unsupervised unmixing methods based on autoencoders have become a research hotspot, and many existing networks focus on the fusion of spatial and spectral information. However, the diversity of fusion structures makes it challenging to select modules that meet unmixing requirements, and the issue of endmember variability is often neglected. In this paper, we propose a novel spatial-spectral adaptive fusion network (SSAF-Net) that accounts for endmember variability. The network consists of two cascaded encoders and a deep generative model (DGM) based on a variational autoencoder. The encoders perform local spatial-spectral information fusion through channel and spatial attention mechanisms, respectively, while a self-perception loss facilitates global information fusion during the cascading process. In addition, we address endmember variability using a proportional perturbation model (PPM), learning the required endmember parameters through a carefully designed DGM. SSAF-Net thus learns both the endmember variability and the corresponding abundances in an unsupervised manner. Experimental results on a synthetic dataset and real-world datasets demonstrate that SSAF-Net significantly outperforms competing methods. The code for this work is available at https://github.com/yjysimply/SSAF-Net.
KW - deep learning
KW - endmember variability
KW - hyperspectral unmixing
KW - spatial-spectral fusion
KW - variational inference
UR - http://www.scopus.com/inward/record.url?scp=85219512405&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2025.3544037
DO - 10.1109/TGRS.2025.3544037
M3 - Article
AN - SCOPUS:85219512405
SN - 0196-2892
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
ER -