TY - JOUR
T1 - Dense Prediction and Local Fusion of Superpixels
T2 - A Framework for Breast Anatomy Segmentation in Ultrasound Image with Scarce Data
AU - Huang, Qinghua
AU - Miao, Zhaoji
AU - Zhou, Shichong
AU - Chang, Cai
AU - Li, Xuelong
N1 - Publisher Copyright:
© 1963-2012 IEEE.
PY - 2021
Y1 - 2021
N2 - Segmentation of the breast ultrasound (BUS) image is an important step for the subsequent assessment and diagnosis of breast lesions. Recently, deep-learning-based methods have achieved satisfactory performance in many computer vision tasks, especially in medical image segmentation. Nevertheless, these methods typically require a large amount of pixel-wise labeled data, which is expensive to obtain in medical practice. In this study, we propose a new segmentation method based on dense prediction and local fusion of superpixels for breast anatomy with scarce labeled data. First, the proposed method generates superpixels from the BUS image enhanced by histogram equalization, a bilateral filter, and a pyramid mean shift filter. Second, using a convolutional neural network (CNN) and a distance metric learning-based classifier, the superpixels are projected into an embedding space and classified by computing the distances between the superpixels' embeddings and the category centers. By using superpixels, a large number of training samples can be generated from each BUS image, which alleviates the scarcity of labeled data. To avoid misclassification of superpixels, the K-nearest neighbor (KNN) algorithm is used to reclassify the superpixels within every local region based on the spatial relationships among them. Fivefold cross-validation was performed, and the experimental results show that our method outperforms several commonly used deep-learning methods when only a small amount of labeled data is available (48 BUS images for training and 12 BUS images for testing).
AB - Segmentation of the breast ultrasound (BUS) image is an important step for the subsequent assessment and diagnosis of breast lesions. Recently, deep-learning-based methods have achieved satisfactory performance in many computer vision tasks, especially in medical image segmentation. Nevertheless, these methods typically require a large amount of pixel-wise labeled data, which is expensive to obtain in medical practice. In this study, we propose a new segmentation method based on dense prediction and local fusion of superpixels for breast anatomy with scarce labeled data. First, the proposed method generates superpixels from the BUS image enhanced by histogram equalization, a bilateral filter, and a pyramid mean shift filter. Second, using a convolutional neural network (CNN) and a distance metric learning-based classifier, the superpixels are projected into an embedding space and classified by computing the distances between the superpixels' embeddings and the category centers. By using superpixels, a large number of training samples can be generated from each BUS image, which alleviates the scarcity of labeled data. To avoid misclassification of superpixels, the K-nearest neighbor (KNN) algorithm is used to reclassify the superpixels within every local region based on the spatial relationships among them. Fivefold cross-validation was performed, and the experimental results show that our method outperforms several commonly used deep-learning methods when only a small amount of labeled data is available (48 BUS images for training and 12 BUS images for testing).
KW - Breast anatomy segmentation
KW - breast cancer
KW - distance metric learning (DML)
KW - K-nearest neighbor (KNN)
KW - superpixels
UR - http://www.scopus.com/inward/record.url?scp=85111066721&partnerID=8YFLogxK
U2 - 10.1109/TIM.2021.3088421
DO - 10.1109/TIM.2021.3088421
M3 - Article
AN - SCOPUS:85111066721
SN - 0018-9456
VL - 70
JO - IEEE Transactions on Instrumentation and Measurement
JF - IEEE Transactions on Instrumentation and Measurement
M1 - 9452057
ER -