TY - JOUR
T1 - Progressive Feature Interleaved Fusion Network for Remote-Sensing Image Salient Object Detection
AU - Han, Pengfei
AU - Zhao, Bin
AU - Li, Xuelong
N1 - Publisher Copyright:
© 1980-2012 IEEE.
PY - 2024
Y1 - 2024
AB - Salient object detection (SOD) has made significant strides on natural scene images (NSIs) over the past few decades. However, extending these approaches to remote-sensing images (RSIs) is hindered by their complex backgrounds, complicated edges, irregular topology, and multiscale object variations. Existing RSI-SOD techniques struggle to accurately detect salient objects while preserving detailed boundaries, and their computational inefficiency limits their practicality. To overcome these challenges, we develop a progressive feature interleaved framework (PROFILE) for RSI-SOD. In particular, we leverage the interleaved association of the convolutional neural network (CNN) and Transformer (IACTer) to capture global semantic relations and spatial details. To handle object scale variation, we design a lightweight, plug-and-play multiscale hierarchical channel-spatial collaborative feature enhancement module (MHCCF), which boosts the feature representation of relevant regions while pinpointing the precise location of salient regions. Finally, a bi-directional consistency constraint module (BCCM) is developed, which can be integrated into the training of arbitrary SOD and segmentation networks to efficiently locate salient regions with refined structures and clear demarcations. Experiments demonstrate that our PROFILE surpasses 20 cutting-edge SOD methods, confirming its ability to improve the accuracy and integrity of SOD in complex backgrounds, such as those with illumination changes and shadows.
KW - Bi-directional consistency constraint
KW - feature enhancement
KW - optical remote-sensing image (RSI)
KW - salient object detection
UR - http://www.scopus.com/inward/record.url?scp=85179804636&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2023.3339970
DO - 10.1109/TGRS.2023.3339970
M3 - Article
AN - SCOPUS:85179804636
SN - 0196-2892
VL - 62
SP - 1
EP - 14
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
M1 - 5500414
ER -