TY - JOUR
T1 - Multi-modal sequence learning for Alzheimer's disease progression prediction with incomplete variable-length longitudinal data
AU - Xu, Lei
AU - Wu, Hui
AU - He, Chunming
AU - Wang, Jun
AU - Zhang, Changqing
AU - Nie, Feiping
AU - Chen, Lei
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/11
Y1 - 2022/11
N2 - Alzheimer's disease (AD) is a neurodegenerative disorder with a long prodromal phase. Predicting AD progression can clinically improve diagnosis and empower patients to take proactive care. However, most existing methods only target individuals with a fixed number of historical visits, and predict cognitive scores only once, at a fixed time horizon in the future, which cannot meet practical requirements. In this study, we consider a flexible yet more challenging scenario in which individuals may suffer from (arbitrary) missing modalities, and neither the number of an individual's historical visits nor the length of the target score trajectory is prespecified. To address this problem, a multi-modal sequence learning framework, highlighted by a deep latent representation-collaborated sequence learning strategy, is proposed to flexibly handle incomplete variable-length longitudinal multi-modal data. Specifically, the proposed framework first employs a deep multi-modality fusion module that automatically captures complementary information for each individual with incomplete multi-modality data. A comprehensive representation is thus learned and fed into a sequence learning module to model AD progression. In addition, the multi-modality fusion module and the sequence learning module are trained collaboratively to further improve AD progression prediction. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset validate the superiority of our method.
KW - Alzheimer's disease
KW - Disease progression prediction
KW - Latent representation learning
KW - Missing modality
KW - Multi-modal learning
KW - Sequence learning
UR - http://www.scopus.com/inward/record.url?scp=85139288400&partnerID=8YFLogxK
U2 - 10.1016/j.media.2022.102643
DO - 10.1016/j.media.2022.102643
M3 - Article
C2 - 36208572
AN - SCOPUS:85139288400
SN - 1361-8415
VL - 82
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 102643
ER -