TY - GEN
T1 - Learning-based multimodal image registration for prostate cancer radiation therapy
AU - Cao, Xiaohuan
AU - Gao, Yaozong
AU - Yang, Jianhua
AU - Wu, Guorong
AU - Shen, Dinggang
N1 - Publisher Copyright:
© Springer International Publishing AG 2016.
PY - 2016
Y1 - 2016
N2 - Computed tomography (CT) is widely used for dose planning in the radiotherapy of prostate cancer. However, CT has low tissue contrast, thus making manual contouring difficult. In contrast, magnetic resonance (MR) imaging provides high tissue contrast and is thus ideal for manual contouring. If the MR image can be registered to the CT image of the same patient, the contouring accuracy of CT could be substantially improved, which could eventually lead to high treatment efficacy. In this paper, we propose a learning-based approach for multimodal image registration. First, to fill the appearance gap between modalities, a structured random forest with an auto-context model is learned to synthesize MRI from CT and vice versa. Then, MRI-to-CT registration is steered in a dual manner of registering images with the same appearance, i.e., (1) registering the synthesized CT with CT, and (2) registering MRI with the synthesized MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration results. Experiments on pelvic CT and MR images have shown the improved registration performance of our proposed method, compared with existing non-learning-based registration methods.
AB - Computed tomography (CT) is widely used for dose planning in the radiotherapy of prostate cancer. However, CT has low tissue contrast, thus making manual contouring difficult. In contrast, magnetic resonance (MR) imaging provides high tissue contrast and is thus ideal for manual contouring. If the MR image can be registered to the CT image of the same patient, the contouring accuracy of CT could be substantially improved, which could eventually lead to high treatment efficacy. In this paper, we propose a learning-based approach for multimodal image registration. First, to fill the appearance gap between modalities, a structured random forest with an auto-context model is learned to synthesize MRI from CT and vice versa. Then, MRI-to-CT registration is steered in a dual manner of registering images with the same appearance, i.e., (1) registering the synthesized CT with CT, and (2) registering MRI with the synthesized MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration results. Experiments on pelvic CT and MR images have shown the improved registration performance of our proposed method, compared with existing non-learning-based registration methods.
UR - http://www.scopus.com/inward/record.url?scp=84996560143&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-46726-9_1
DO - 10.1007/978-3-319-46726-9_1
M3 - Conference contribution
C2 - 28975161
AN - SCOPUS:84996560143
SN - 9783319467252
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 1
EP - 9
BT - Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016 - 19th International Conference, Proceedings
A2 - Joskowicz, Leo
A2 - Sabuncu, Mert R.
A2 - Wells, William
A2 - Unal, Gozde
A2 - Ourselin, Sebastian
PB - Springer Verlag
ER -