TY - JOUR
T1 - Hyperspectral and LiDAR Data Classification Using Spatial Context and De-Redundant Fusion Network
AU - Dong, Lijia
AU - Jiang, Wen
AU - Geng, Jie
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The utilization of multimodal data from multiple sensors (e.g., hyperspectral and light detection and ranging (LiDAR) data) to classify ground objects has been an important topic in remote sensing interpretation. However, complex backgrounds make it difficult to extract contextual relationships; at the same time, redundancy and noise among multimodal data pose great challenges to accurate classification. In this letter, we propose a novel spatial context and de-redundant fusion network (SCDNet) to fuse hyperspectral and LiDAR data for land cover classification. Specifically, a multiscale attention fusion module (MSAF) is developed in the feature extraction stage, which adaptively fuses global and local information at different scales to obtain a more accurate spatial context. In the feature fusion stage, a fusion module based on a gated mechanism is proposed, which removes redundant information from the multimodal data and obtains discriminative fusion features. We design a series of comparison and ablation experiments on the Houston2013 and Trento datasets, and the results demonstrate the effectiveness of the proposed method.
AB - The utilization of multimodal data from multiple sensors (e.g., hyperspectral and light detection and ranging (LiDAR) data) to classify ground objects has been an important topic in remote sensing interpretation. However, complex backgrounds make it difficult to extract contextual relationships; at the same time, redundancy and noise among multimodal data pose great challenges to accurate classification. In this letter, we propose a novel spatial context and de-redundant fusion network (SCDNet) to fuse hyperspectral and LiDAR data for land cover classification. Specifically, a multiscale attention fusion module (MSAF) is developed in the feature extraction stage, which adaptively fuses global and local information at different scales to obtain a more accurate spatial context. In the feature fusion stage, a fusion module based on a gated mechanism is proposed, which removes redundant information from the multimodal data and obtains discriminative fusion features. We design a series of comparison and ablation experiments on the Houston2013 and Trento datasets, and the results demonstrate the effectiveness of the proposed method.
KW - De-redundant
KW - hyperspectral and light detection and ranging (LiDAR) image fusion
KW - multiscale
KW - spatial context
UR - http://www.scopus.com/inward/record.url?scp=85174813642&partnerID=8YFLogxK
U2 - 10.1109/LGRS.2023.3322711
DO - 10.1109/LGRS.2023.3322711
M3 - Article
AN - SCOPUS:85174813642
SN - 1545-598X
VL - 20
JO - IEEE Geoscience and Remote Sensing Letters
JF - IEEE Geoscience and Remote Sensing Letters
M1 - 5510305
ER -