TY - JOUR
T1 - Balanced feature fusion collaborative training for semi-supervised medical image segmentation
AU - Zhao, Zhongda
AU - Wang, Haiyan
AU - Lei, Tao
AU - Wang, Xuan
AU - Shen, Xiaohong
AU - Yao, Haiyang
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2025/1
Y1 - 2025/1
N2 - Collaborative learning is a fundamental component of consistency learning. It has been extensively utilized in semi-supervised medical image segmentation, primarily based on multiple models learning from each other. However, existing semi-supervised collaborative image segmentation methods face two primary issues. First, these methods fail to fully leverage the hidden knowledge within the models during knowledge exchange, resulting in inefficient knowledge sharing and limited generalization capability. To address this, we propose a novel approach, termed ‘fusion teacher’, which merges the knowledge of two models at the feature level. This enhances the efficiency of knowledge exchange between models and generates more accurate pseudo-labels for consistency learning. Second, the initial and intermediate stages of collaborative learning are hindered by a significant performance gap between the fusion teacher and student models, which impairs effective knowledge transfer. Our approach advocates a gradual increase in the dropout rate, which improves the efficiency of knowledge transfer from the fusion teacher to the student model. To demonstrate the efficacy of our method, we conduct experiments on the ISIC, ACDC, and AbdomenCT-1K datasets. Our approach achieves Dice scores of 87.4%, 84.8%, and 84.5%, respectively, with 10% labelled data. Compared with current state-of-the-art (SOTA) methods, our method demonstrates strong competitiveness.
AB - Collaborative learning is a fundamental component of consistency learning. It has been extensively utilized in semi-supervised medical image segmentation, primarily based on multiple models learning from each other. However, existing semi-supervised collaborative image segmentation methods face two primary issues. First, these methods fail to fully leverage the hidden knowledge within the models during knowledge exchange, resulting in inefficient knowledge sharing and limited generalization capability. To address this, we propose a novel approach, termed ‘fusion teacher’, which merges the knowledge of two models at the feature level. This enhances the efficiency of knowledge exchange between models and generates more accurate pseudo-labels for consistency learning. Second, the initial and intermediate stages of collaborative learning are hindered by a significant performance gap between the fusion teacher and student models, which impairs effective knowledge transfer. Our approach advocates a gradual increase in the dropout rate, which improves the efficiency of knowledge transfer from the fusion teacher to the student model. To demonstrate the efficacy of our method, we conduct experiments on the ISIC, ACDC, and AbdomenCT-1K datasets. Our approach achieves Dice scores of 87.4%, 84.8%, and 84.5%, respectively, with 10% labelled data. Compared with current state-of-the-art (SOTA) methods, our method demonstrates strong competitiveness.
KW - Collaborative training
KW - Medical image segmentation
KW - Semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85200814911&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2024.110856
DO - 10.1016/j.patcog.2024.110856
M3 - Article
AN - SCOPUS:85200814911
SN - 0031-3203
VL - 157
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 110856
ER -