TY - JOUR
T1 - Disease-Image-Specific Learning for Diagnosis-Oriented Neuroimage Synthesis With Incomplete Multi-Modality Data
AU - Pan, Yongsheng
AU - Liu, Mingxia
AU - Xia, Yong
AU - Shen, Dinggang
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2022/10/1
Y1 - 2022/10/1
N2 - The incomplete-data problem commonly arises in classification tasks with multi-source data, particularly in disease diagnosis with multi-modality neuroimages. To tackle it, some methods have been proposed to utilize all available subjects by imputing missing neuroimages. However, these methods usually treat image synthesis and disease diagnosis as two standalone tasks, thus ignoring the specificity conveyed in different modalities, i.e., different modalities may highlight different disease-relevant regions in the brain. To this end, we propose a disease-image-specific deep learning (DSDL) framework for joint neuroimage synthesis and disease diagnosis using incomplete multi-modality neuroimages. Specifically, with each whole-brain scan as input, we first design a Disease-image-Specific Network (DSNet) with a spatial cosine module to implicitly model the disease-image specificity. We then develop a Feature-consistency Generative Adversarial Network (FGAN) to impute missing neuroimages, where the feature maps (generated by DSNet) of a synthetic image and its corresponding real image are encouraged to be consistent while preserving the disease-image-specific information. Since our FGAN is coupled with DSNet, missing neuroimages can be synthesized in a diagnosis-oriented manner. Experimental results on three datasets suggest that our method can not only generate reasonable neuroimages but also achieve state-of-the-art performance in both Alzheimer's disease identification and mild cognitive impairment conversion prediction.
AB - The incomplete-data problem commonly arises in classification tasks with multi-source data, particularly in disease diagnosis with multi-modality neuroimages. To tackle it, some methods have been proposed to utilize all available subjects by imputing missing neuroimages. However, these methods usually treat image synthesis and disease diagnosis as two standalone tasks, thus ignoring the specificity conveyed in different modalities, i.e., different modalities may highlight different disease-relevant regions in the brain. To this end, we propose a disease-image-specific deep learning (DSDL) framework for joint neuroimage synthesis and disease diagnosis using incomplete multi-modality neuroimages. Specifically, with each whole-brain scan as input, we first design a Disease-image-Specific Network (DSNet) with a spatial cosine module to implicitly model the disease-image specificity. We then develop a Feature-consistency Generative Adversarial Network (FGAN) to impute missing neuroimages, where the feature maps (generated by DSNet) of a synthetic image and its corresponding real image are encouraged to be consistent while preserving the disease-image-specific information. Since our FGAN is coupled with DSNet, missing neuroimages can be synthesized in a diagnosis-oriented manner. Experimental results on three datasets suggest that our method can not only generate reasonable neuroimages but also achieve state-of-the-art performance in both Alzheimer's disease identification and mild cognitive impairment conversion prediction.
KW - Multi-modality neuroimaging
KW - brain disease diagnosis
KW - generative adversarial network
KW - missing image synthesis
UR - http://www.scopus.com/inward/record.url?scp=85112447183&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2021.3091214
DO - 10.1109/TPAMI.2021.3091214
M3 - Article
C2 - 34156939
AN - SCOPUS:85112447183
SN - 0162-8828
VL - 44
SP - 6839
EP - 6853
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 10
ER -