Fine-Tuning SAM for Forward-Looking Sonar with Collaborative Prompts and Embedding

Jiayuan Li, Zhen Wang, Nan Xu, Zhuhong You

Research output: Contribution to journal › Article › peer-review

Abstract

The Segment Anything Model (SAM) represents a significant advance in semantic segmentation, particularly for natural images, but encounters notable limitations when applied to forward-looking sonar (FLS) images. The primary challenges are the inherent boundary ambiguity of FLS images, which complicates the use of prompt strategies for accurate boundary delineation, and the lack of effective interaction between prompts and image features. In this letter, we introduce a collaborative prompting strategy that addresses these issues by generating dense prompt embeddings and sonar tokens focused on contour and boundary features, thereby replacing the original dense prompt embedding and IoU token. To further enhance segmentation, we employ embedding compensation techniques based on Mamba and KAN, which add boundary information to the image embeddings and improve the fusion of prompts within them. We conducted comprehensive experiments, including comparative analyses and ablation studies, to validate the superiority of the proposed approach. Results show that our method significantly improves segmentation performance for FLS images, effectively addressing boundary ambiguity and optimizing prompt utilization.
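The embedding-compensation step described above can be illustrated with a minimal sketch. The paper's actual Mamba- and KAN-based modules are not specified here, so the gated fusion below is a hypothetical stand-in: it merely shows the general idea of injecting boundary-feature information into image embeddings before prompt fusion. All names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def embedding_compensation(image_emb, boundary_emb, gate_w):
    """Hypothetical sketch of boundary-aware embedding compensation.

    A learned per-channel sigmoid gate decides how much boundary
    information to add to each image-embedding channel. This is a
    simplified stand-in for the Mamba/KAN-based modules in the paper.
    """
    gate = 1.0 / (1.0 + np.exp(-(boundary_emb @ gate_w)))  # sigmoid gate
    return image_emb + gate * boundary_emb                 # compensated embedding

# Toy example: 64 spatial tokens with 256-dimensional embeddings.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 256))        # image embeddings
bnd = rng.standard_normal((64, 256))        # boundary-feature embeddings
w = rng.standard_normal((256, 256)) * 0.01  # illustrative gate weights
out = embedding_compensation(img, bnd, w)
print(out.shape)  # (64, 256)
```

The output keeps the image-embedding shape, so a downstream mask decoder could consume the compensated embeddings unchanged; only the gate weights would be trained during fine-tuning in such a scheme.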

Original language: English
Journal: IEEE Geoscience and Remote Sensing Letters
State: Accepted/In press - 2025

Keywords

  • Semantic segmentation
  • collaborative prompting
  • embedding compensation
  • forward-looking sonar (FLS)
  • multimodal remote sensing

