TY - GEN
T1 - Learning Spatial and Spectral Features VIA 2D-1D Generative Adversarial Network for Hyperspectral Image Super-Resolution
AU - Jiang, Ruituo
AU - Li, Xu
AU - Mei, Shaohui
AU - Li, Lixin
AU - Yue, Shigang
AU - Zhang, Lei
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/9
Y1 - 2019/9
N2 - Three-dimensional (3D) convolutional networks have been proven able to explore spatial context and spectral information simultaneously for super-resolution (SR). However, such networks cannot practically be designed very 'deep' because of the long training time and GPU memory limitations involved in 3D convolution. Instead, in this paper, spatial context and spectral information in hyperspectral images (HSIs) are explored separately using two-dimensional (2D) and one-dimensional (1D) convolution. Accordingly, a novel 2D-1D generative adversarial network architecture (2D-1D-HSRGAN) is proposed for SR of HSIs. Specifically, the generator network consists of a spatial network and a spectral network: the spatial network is trained with the least absolute deviations loss function to explore spatial context via 2D convolution, while the spectral network is trained with the spectral angle mapper (SAM) loss function to extract spectral information via 1D convolution. Experimental results over two real HSIs demonstrate that the proposed 2D-1D-HSRGAN clearly outperforms several state-of-the-art algorithms.
AB - Three-dimensional (3D) convolutional networks have been proven able to explore spatial context and spectral information simultaneously for super-resolution (SR). However, such networks cannot practically be designed very 'deep' because of the long training time and GPU memory limitations involved in 3D convolution. Instead, in this paper, spatial context and spectral information in hyperspectral images (HSIs) are explored separately using two-dimensional (2D) and one-dimensional (1D) convolution. Accordingly, a novel 2D-1D generative adversarial network architecture (2D-1D-HSRGAN) is proposed for SR of HSIs. Specifically, the generator network consists of a spatial network and a spectral network: the spatial network is trained with the least absolute deviations loss function to explore spatial context via 2D convolution, while the spectral network is trained with the spectral angle mapper (SAM) loss function to extract spectral information via 1D convolution. Experimental results over two real HSIs demonstrate that the proposed 2D-1D-HSRGAN clearly outperforms several state-of-the-art algorithms.
KW - generative adversarial network
KW - Hyperspectral images
KW - super-resolution
UR - http://www.scopus.com/inward/record.url?scp=85076810874&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2019.8803200
DO - 10.1109/ICIP.2019.8803200
M3 - Conference contribution
AN - SCOPUS:85076810874
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 2149
EP - 2153
BT - 2019 IEEE International Conference on Image Processing, ICIP 2019 - Proceedings
PB - IEEE Computer Society
T2 - 26th IEEE International Conference on Image Processing, ICIP 2019
Y2 - 22 September 2019 through 25 September 2019
ER -