TY - JOUR
T1 - Self-Supervised Interactive Embedding for One-Shot Organ Segmentation
AU - Yang, Yang
AU - Wang, Bo
AU - Zhang, Dingwen
AU - Yuan, Yixuan
AU - Yan, Qingsen
AU - Zhao, Shijie
AU - You, Zheng
AU - Han, Junwei
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/10/1
Y1 - 2023/10/1
AB - One-shot organ segmentation (OS2) aims to segment the desired organ regions from input medical imaging data with only one pre-annotated example as the reference. By using minimal annotation data to facilitate organ segmentation, OS2 has received great attention in the medical image analysis community due to its weak requirement for human annotation. In OS2, one core issue is to explore the mutual information between the support (reference slice) and the query (test slice). Existing methods rely heavily on the similarity between slices, so additional slice allocation mechanisms must be designed to reduce the impact of inter-slice similarity on segmentation performance. To address this issue, we build a novel support-query interactive embedding (SQIE) module, which is equipped with channel-wise co-attention, spatial-wise co-attention, and spatial bias transformation blocks to identify 'what to look at', 'where to look', and 'how to look' in the input test slice. By combining these three mechanisms, we can mine the interactive information of the intersection area and the disputed area between slices, and establish feature connections between targets in slices with low similarity. We also propose a self-supervised contrastive learning framework, which transfers knowledge from the physical position to the embedding space to facilitate the self-supervised interactive embedding of the query and support slices. Comprehensive experiments on two large benchmarks demonstrate the superior capacity of the proposed approach compared with current alternatives and baseline models.
KW - co-attention mechanism
KW - contrastive learning
KW - interactive embedding
KW - medical image segmentation
KW - one-shot learning
KW - self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85171584348&partnerID=8YFLogxK
U2 - 10.1109/TBME.2023.3265033
DO - 10.1109/TBME.2023.3265033
M3 - Article
C2 - 37695956
AN - SCOPUS:85171584348
SN - 0018-9294
VL - 70
SP - 2799
EP - 2808
JO - IEEE Transactions on Biomedical Engineering
JF - IEEE Transactions on Biomedical Engineering
IS - 10
ER -