TY - JOUR
T1 - Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis
T2 - IEEE Transactions on Medical Imaging
AU - Zhou, Tao
AU - Fu, Huazhu
AU - Chen, Geng
AU - Shen, Jianbing
AU - Shao, Ling
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2020/9
Y1 - 2020/9
AB - Magnetic resonance imaging (MRI) is a widely used neuroimaging technique that can provide images of different contrasts (i.e., modalities). Fusing this multi-modal data has proven particularly effective for boosting model performance in many tasks. However, due to poor data quality and frequent patient dropout, collecting all modalities for every patient remains a challenge. Medical image synthesis has been proposed as an effective solution, where any missing modalities are synthesized from the existing ones. In this paper, we propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis, which learns a mapping from multi-modal source images (i.e., existing modalities) to target images (i.e., missing modalities). In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality, and a fusion network is employed to learn the common latent representation of multi-modal data. Then, a multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality, acting as a generator to synthesize the target images. Moreover, a layer-wise multi-modal fusion strategy effectively exploits the correlations among multiple modalities, where a Mixed Fusion Block (MFB) is proposed to adaptively weight different fusion strategies. Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art medical image synthesis methods.
KW - Magnetic resonance imaging (MRI)
KW - hybrid-fusion network
KW - latent representation
KW - medical image synthesis
KW - multi-modal data
UR - http://www.scopus.com/inward/record.url?scp=85082856083&partnerID=8YFLogxK
U2 - 10.1109/TMI.2020.2975344
DO - 10.1109/TMI.2020.2975344
M3 - Article
C2 - 32086202
AN - SCOPUS:85082856083
SN - 0278-0062
VL - 39
SP - 2772
EP - 2781
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
IS - 9
M1 - 9004544
ER -