TY - JOUR
T1 - Draw Sketch, Draw Flesh
T2 - Whole-Body Computed Tomography from Any X-Ray Views
AU - Pan, Yongsheng
AU - Ye, Yiwen
AU - Zhang, Yanning
AU - Xia, Yong
AU - Shen, Dinggang
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
PY - 2025/5
Y1 - 2025/5
N2 - Stereoscopic observation is a common foundation of medical image analysis and is generally achieved by 3D medical imaging based on fixed scanners, such as CT and MRI, which are not as convenient as X-ray machines in some flexible scenarios. However, X-ray images provide only a perspective 2D observation and lack the third dimension. If 3D information could be deduced from X-ray images, it would broaden the applications of X-ray machines. Focusing on this objective, this paper is dedicated to generating pseudo 3D CT scans from non-parallel 2D perspective X-ray (PXR) views and proposes the Draw Sketch and Draw Flesh (DSDF) framework, which first roughly predicts the tissue distribution (Sketch) from PXR views and then renders the tissue details (Flesh) from the tissue distribution and PXR views. Different from previous studies that focus only on partial regions, e.g., the chest or neck, this study theoretically investigates the feasibility of head-to-leg reconstruction, i.e., being generally applicable to any body part. Experiments on 559 whole-body samples from 4 cohorts suggest that our DSDF can reconstruct more reasonable pseudo CT images than state-of-the-art methods and achieves promising results in both visualization and various downstream tasks. The source code and well-trained models are available at https://github.com/YongshengPan/WholeBodyXraytoCT.
AB - Stereoscopic observation is a common foundation of medical image analysis and is generally achieved by 3D medical imaging based on fixed scanners, such as CT and MRI, which are not as convenient as X-ray machines in some flexible scenarios. However, X-ray images provide only a perspective 2D observation and lack the third dimension. If 3D information could be deduced from X-ray images, it would broaden the applications of X-ray machines. Focusing on this objective, this paper is dedicated to generating pseudo 3D CT scans from non-parallel 2D perspective X-ray (PXR) views and proposes the Draw Sketch and Draw Flesh (DSDF) framework, which first roughly predicts the tissue distribution (Sketch) from PXR views and then renders the tissue details (Flesh) from the tissue distribution and PXR views. Different from previous studies that focus only on partial regions, e.g., the chest or neck, this study theoretically investigates the feasibility of head-to-leg reconstruction, i.e., being generally applicable to any body part. Experiments on 559 whole-body samples from 4 cohorts suggest that our DSDF can reconstruct more reasonable pseudo CT images than state-of-the-art methods and achieves promising results in both visualization and various downstream tasks. The source code and well-trained models are available at https://github.com/YongshengPan/WholeBodyXraytoCT.
KW - CT reconstruction
KW - Generative adversarial network
KW - Image synthesis
KW - X-ray images
UR - http://www.scopus.com/inward/record.url?scp=105003154010&partnerID=8YFLogxK
U2 - 10.1007/s11263-024-02286-2
DO - 10.1007/s11263-024-02286-2
M3 - Article
AN - SCOPUS:105003154010
SN - 0920-5691
VL - 133
SP - 2505
EP - 2526
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
IS - 5
M1 - 106615
ER -