Visual Prompting Unsupervised Domain Adaptation for Medical Image Segmentation

Ziru Lu, Yizhe Zhang, Yi Zhou, Geng Chen, Tao Zhou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

Abstract

Unsupervised domain adaptation (UDA), which aims to improve the segmentation performance of deep models on unlabeled data, has attracted considerable attention. Recently, the Segment Anything Model (SAM) has gained widespread interest across various scenarios. In this work, we propose a Visual Prompting UDA (VP-UDA) framework for medical image segmentation, which leverages SAM's capabilities to improve overall generalization performance. Specifically, we first present a Hybrid Prompting Strategy (HPS) to equip SAM with profound downstream task-specific knowledge. Moreover, a SAM-based Guidance Learning (SGL) scheme is proposed to enhance the learning process of the segmentation model. Then, we propose a Consistency-based Pseudo-label Selection (CPS) strategy to identify and exclude outlier points in the target feature space. Furthermore, a Frequency Prior-induced Fusion (FPF) module is proposed to effectively integrate the results from SAM and the segmentation model. Experimental results show the effectiveness and superiority of our model over other state-of-the-art UDA methods.
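A minimal sketch of consistency-based pseudo-label filtering in the spirit of the CPS strategy described above, assuming a PyTorch setting; the function names (select_pseudo_labels, target_loss), the SAM-derived hard mask, and the pixel-level agreement rule are illustrative assumptions, not the paper's actual implementation (which operates on outliers in the target feature space):

# Illustrative sketch (not the authors' code): keep only target pixels where the
# segmentation model and a SAM-derived mask agree with high confidence; the rest
# are excluded from the pseudo-label loss.
import torch
import torch.nn.functional as F

def select_pseudo_labels(seg_logits, sam_mask, conf_thresh=0.9):
    """seg_logits: (B, C, H, W) raw outputs of the segmentation model.
    sam_mask:   (B, H, W) hard labels derived from SAM prompts (assumed given).
    Returns pseudo-labels and a boolean mask of pixels kept for training."""
    probs = torch.softmax(seg_logits, dim=1)
    conf, pred = probs.max(dim=1)                      # per-pixel confidence and predicted class
    consistent = pred.eq(sam_mask)                     # agreement between the two predictors
    keep = consistent & (conf > conf_thresh)           # drop inconsistent / low-confidence pixels
    pseudo = torch.where(keep, pred, torch.full_like(pred, 255))  # 255 = ignore index
    return pseudo, keep

def target_loss(seg_logits, sam_mask):
    pseudo, keep = select_pseudo_labels(seg_logits, sam_mask)
    if keep.any():
        return F.cross_entropy(seg_logits, pseudo, ignore_index=255)
    return seg_logits.sum() * 0.0                      # no reliable pixels in this batch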

Original language: English
Title of host publication: ISBI 2025 - 2025 IEEE 22nd International Symposium on Biomedical Imaging, Proceedings
Publisher: IEEE Computer Society
ISBN (electronic): 9798331520526
DOI
Publication status: Published - 2025
Event: 22nd IEEE International Symposium on Biomedical Imaging, ISBI 2025 - Houston, United States
Duration: 14 Apr 2025 → 17 Apr 2025

Publication series

Name: Proceedings - International Symposium on Biomedical Imaging
ISSN (print): 1945-7928
ISSN (electronic): 1945-8452

Conference

Conference: 22nd IEEE International Symposium on Biomedical Imaging, ISBI 2025
Country/Territory: United States
City: Houston
Period: 14/04/25 → 17/04/25
