
U-SAS: U-Shape Network with Multilevel Enhancement and Global Decoding for Synthetic Aperture Sonar Image Semantic Segmentation

  • Jiayuan Li
  • Zhen Wang
  • Zhuhong You
  • Zhengyang Zhao
  • Zhanbin Yuan

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Compared with side-scan sonar (SSS) and forward-looking sonar (FLS), synthetic aperture sonar (SAS) devices can generate high-resolution underwater images, which is important for marine topographic mapping. Nevertheless, existing deep learning (DL) methods face challenges in extracting detailed feature information from underwater SAS images for semantic segmentation tasks, primarily due to the significant interference from the complex underwater environment and seabed reverberation noise. To address these challenges, we propose a novel network named U-SAS for SAS image semantic segmentation, which uses a hybrid convolution and attention architecture to extract rich multilevel features from SAS images. Specifically, U-SAS separates the extracted features into complementary representations, namely salient features and abstract location features. To enhance the distinctive representation of multilevel features, U-SAS incorporates the max-min module (MMM) and the convolutional block attention module (CBAM). The MMM effectively emphasizes robust and significant underwater acoustic features, while the CBAM suppresses seabed reverberation noise and enhances the representation of positional information. In addition, we construct the global decoding module (GDM) to fuse salient features and abstract location features, strengthening the correlation between local features and achieving global semantic understanding. To verify the effectiveness and feasibility of the proposed U-SAS network, we conducted extensive experiments on a SAS image dataset of complex underwater scenes. The experimental results show that U-SAS achieves a mean accuracy (mAcc) of 78.58% and a mean intersection over union (mIoU) of 58.59%, outperforming other state-of-the-art methods. Implementation codes will be available on https://github.com/darkseid-arch/U-SAS.
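The abstract describes the max-min module (MMM) only at a high level, as emphasizing robust and salient acoustic features. A minimal plain-Python sketch of one plausible interpretation follows, in which salience is taken as the local max-minus-min response over a sliding window; the function name, window size, and formulation are hypothetical and may differ from the paper's actual MMM:

```python
def max_min_enhance(feature_map, k=3):
    """Hypothetical max-min enhancement: for each position, compute the
    difference between the local maximum and local minimum over a k x k
    window, highlighting high-contrast (salient) acoustic responses
    while flattening homogeneous background regions."""
    h, w = len(feature_map), len(feature_map[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Gather the k x k neighborhood, clipped at image borders.
            window = [
                feature_map[ii][jj]
                for ii in range(max(0, i - r), min(h, i + r + 1))
                for jj in range(max(0, j - r), min(w, j + r + 1))
            ]
            out[i][j] = max(window) - min(window)
    return out

# Example: a flat background with one bright response at the center.
fm = [[0.0] * 5 for _ in range(5)]
fm[2][2] = 1.0
enhanced = max_min_enhance(fm)
```

In this toy example the flat background stays at zero while positions whose window covers the bright spot receive a strong response, illustrating why a max-min contrast can act as a cheap salience cue.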

Original language: English
Pages (from-to): 1799-1813
Number of pages: 15
Journal: IEEE Sensors Journal
Volume: 25
Issue number: 1
DOI
Publication status: Published - 2025
Externally published: Yes

United Nations Sustainable Development Goals

This output contributes to the following Sustainable Development Goals:

  1. SDG 14 - Life Below Water

