Abstract
Accurate segmentation of organs or lesions in medical images plays a significant role in clinical applications such as diagnosis. However, learning segmentation models requires a large number of annotated samples. This paper focuses on semi-supervised medical image segmentation to relieve the dependence on labeled samples. A widely used semi-supervised learning method builds the teacher model as a temporal average of the student model; however, this also accumulates the student model's incorrect knowledge. To address this issue, we propose an interactive dual-model learning algorithm. To prevent the propagation and accumulation of erroneous knowledge, we devise a mechanism for judging and measuring the instability of network predictions, so that only the pixels with relatively stable predictions in one model are used to supervise the other model. Extensive experiments on three datasets, covering cardiac structure segmentation, liver tumor segmentation, and brain tumor segmentation, demonstrate that the proposed method outperforms state-of-the-art semi-supervised methods. With 30% of the annotations available, the Dice similarity coefficient (DSC) of our method reaches 89.13%, 94.15%, and 87.02% on the three datasets, respectively.
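To make the idea concrete, below is a minimal PyTorch-style sketch of stability-masked cross supervision between two models. It is not the paper's implementation: the instability measure (class agreement between two successive forward passes), the function names, and the tensor shapes are all illustrative assumptions.

```python
# Hypothetical sketch: two segmentation models supervise each other,
# each one only on pixels where the *other* model's prediction is stable.
import torch
import torch.nn.functional as F

def stability_mask(prev_logits: torch.Tensor, curr_logits: torch.Tensor) -> torch.Tensor:
    """One possible instability measure: a pixel counts as stable when its
    predicted class does not change between two successive forward passes."""
    prev_lbl = prev_logits.argmax(dim=1)
    curr_lbl = curr_logits.argmax(dim=1)
    return (prev_lbl == curr_lbl).float()          # (B, H, W), 1 = stable

def cross_supervision_loss(logits_a, logits_b, mask_a, mask_b):
    """Pseudo-labels from model A train model B on A's stable pixels, and vice versa."""
    pseudo_a = logits_a.argmax(dim=1).detach()     # pseudo-labels from model A
    pseudo_b = logits_b.argmax(dim=1).detach()     # pseudo-labels from model B
    loss_b = F.cross_entropy(logits_b, pseudo_a, reduction="none")  # (B, H, W)
    loss_a = F.cross_entropy(logits_a, pseudo_b, reduction="none")
    eps = 1e-6                                      # avoid division by zero
    loss_b = (loss_b * mask_a).sum() / (mask_a.sum() + eps)
    loss_a = (loss_a * mask_b).sum() / (mask_b.sum() + eps)
    return loss_a + loss_b

# Toy usage with random tensors (2 classes, 8x8 images).
B, C, H, W = 2, 2, 8, 8
prev_a, curr_a = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
prev_b, curr_b = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
loss = cross_supervision_loss(curr_a, curr_b,
                              stability_mask(prev_a, curr_a),
                              stability_mask(prev_b, curr_b))
```

In this sketch, the masked average keeps unstable pixels from contributing any gradient, which is one simple way to stop each model's errors from propagating into the other.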
| Translated title of the contribution | Interactive Dual-model Learning for Semi-supervised Medical Image Segmentation |
| --- | --- |
| Original language | Chinese (Traditional) |
| Pages (from-to) | 805-819 |
| Number of pages | 15 |
| Journal | Zidonghua Xuebao/Acta Automatica Sinica |
| Volume | 49 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2023 |