TY - JOUR
T1 - Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT
AU - Xie, Yutong
AU - Zhang, Jianpeng
AU - Xia, Yong
AU - Fulham, Michael
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2017 Elsevier B.V.
PY - 2018/7
Y1 - 2018/7
N2 - The separation of malignant from benign lung nodules on chest computed tomography (CT) is important for the early detection of lung cancer, since early detection and management offer the best chance for cure. Although deep learning methods have recently produced a marked improvement in image classification, challenges remain: these methods contain myriad parameters and require large-scale training sets that are not usually available for most routine medical imaging studies. In this paper, we propose an algorithm for lung nodule classification that fuses texture, shape and deep model-learned information (Fuse-TSD) at the decision level. The algorithm employs a gray-level co-occurrence matrix (GLCM)-based texture descriptor and a Fourier shape descriptor to characterize the heterogeneity of nodules, and a deep convolutional neural network (DCNN) to automatically learn the feature representation of nodules on a slice-by-slice basis. It trains an AdaBoosted back-propagation neural network (BPNN) on each feature type and fuses the decisions made by the three classifiers to differentiate nodules. We evaluated this algorithm against three other approaches on the LIDC-IDRI dataset. When nodules with a composite malignancy rate of 3 were discarded, regarded as benign, or regarded as malignant, our Fuse-TSD algorithm achieved an AUC of 96.65%, 94.45% and 81.24%, respectively, which was substantially higher than the AUCs obtained by the other approaches.
KW - AdaBoost
KW - Information fusion
KW - Back propagation neural network (BPNN)
KW - Chest CT
KW - Deep convolutional neural network (DCNN)
KW - Lung nodule classification
UR - http://www.scopus.com/inward/record.url?scp=85032367648&partnerID=8YFLogxK
DO - 10.1016/j.inffus.2017.10.005
M3 - Article
AN - SCOPUS:85032367648
SN - 1566-2535
VL - 42
SP - 102
EP - 110
JO - Information Fusion
JF - Information Fusion
ER -