X. Yang1, S. Pan2, C. W. Chang1, J. Peng2, R. L. J. Qiu1, T. Wang3, T. Liu4, J. R. Roper1, H. Al-Hallaq1, and Z. Tian5; 1Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, 2Department of Radiation Oncology, Emory University, Atlanta, GA, 3Memorial Sloan Kettering Cancer Center, New York, NY, 4Icahn School of Medicine at Mount Sinai, Department of Radiation Oncology, New York, NY, 5The University of Chicago, Chicago, IL
Purpose/Objective(s): The unpredictable motion of thoracic tumors during stereotactic body radiation therapy (SBRT) can cause notable discrepancies between the delivered and planned radiation doses. While adaptive RT can mitigate dosimetric errors by adjusting subsequent treatment fractions, accurate assessment of the delivered dose is hindered by the lack of real-time, in-treatment 3D patient anatomy, underscoring the critical need for real-time volumetric imaging capabilities, which current Linac systems lack. This study aims to address this gap by developing a patient-specific deep learning (DL) framework to generate instantaneous volumetric cone beam CT (CBCT) images from a single 2D on-board kV X-ray projection at any gantry angle, which will facilitate real-time 3D tumor motion tracking during RT delivery and enable offline post-RT dose verification.
Materials/Methods: We propose a patient-specific cycle-diffusion model that generates volumetric images from single-view 2D projections at arbitrary angles, utilizing prior information from the patient's 4DCT images acquired during simulation. The model consists of two integral components: a projection-diffusion module responsible for synthesizing full-view projections, and a CBCT-diffusion module tasked with generating volumetric images. We introduce a novel cycle-domain geometric-integrated strategy with cone-beam geometric constraints to foster synergistic guidance between the projection- and CBCT-diffusion modules, thereby enhancing the accuracy of the generated images. To assess the performance of our model, we conducted a retrospective study of 20 lung cancer SBRT patients who underwent 4DCT scanning. Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM) were employed to quantify the accuracy of the generated CBCTs. Additionally, we compared our method with two state-of-the-art methods, Vnet and a conditional generative adversarial network (CGAN).
Results: Our model produced high-fidelity volumetric images from any single 2D projection at arbitrary gantry angles. Specifically, the mean MAE, PSNR, and SSIM were 40.24±9.18 HU, 29.87±2.15 dB, and 0.86±0.04 for Vnet; 54.38±7.07 HU, 27.66±1.44 dB, and 0.79±0.05 for CGAN; and 31.75±8.84 HU, 33.58±2.06 dB, and 0.91±0.05 for the proposed method. Statistical analyses indicated that our approach significantly outperformed both comparison methods.
Conclusion: We developed a DL framework to reconstruct real-time volumetric images during treatment from any single 2D X-ray projection. This advancement holds significant promise for enhancing lung SBRT outcomes by minimizing targeting uncertainty, enhancing RT precision, and enabling dose-guided adaptive lung SBRT.
Future studies will concentrate on evaluating its effectiveness in tumor tracking and dose verification.
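To make the reported evaluation concrete, the sketch below shows how the three image-quality metrics (MAE in HU, PSNR in dB, and SSIM) can be computed between a generated CBCT volume and its ground truth. This is an illustrative re-implementation of the standard metric definitions, not the authors' evaluation code; in particular, the function name `evaluate_cbct` is hypothetical, and SSIM is computed globally over the whole volume rather than with the usual local sliding window.

```python
import numpy as np

def evaluate_cbct(pred, gt, data_range=None):
    """Compute MAE (HU), PSNR (dB), and a simplified global SSIM
    between a generated CBCT volume `pred` and ground truth `gt`,
    both float arrays in Hounsfield units of identical shape.

    Note: this is a sketch of the standard metric definitions, not
    the study's actual evaluation pipeline.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if data_range is None:
        # Dynamic range of the reference volume, used by PSNR and SSIM.
        data_range = gt.max() - gt.min()

    # Mean Absolute Error, in the same units as the input (HU).
    mae = np.mean(np.abs(pred - gt))

    # Peak Signal-to-Noise Ratio, in dB.
    mse = np.mean((pred - gt) ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse) if mse > 0 else float("inf")

    # Global SSIM with the conventional stabilizing constants
    # (computed over the whole volume, not a sliding window).
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(), gt.var()
    cov = np.mean((pred - mu_p) * (gt - mu_g))
    ssim = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2)
    )
    return mae, psnr, ssim
```

A perfect reconstruction yields MAE = 0, infinite PSNR, and SSIM = 1; a uniform HU offset shows up directly in the MAE while leaving the structural term largely intact.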