L. Qiu1, L. Shen2, L. Liu1, and L. Xing1; 1Department of Radiation Oncology, Stanford University, Stanford, CA, 2Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI
Purpose/Objective(s): Longitudinal studies pose a significant challenge in precision medicine, particularly when monitoring tumor changes and treatment responses during and after radiation therapy with imaging tools. Capturing and predicting non-rigid spatial and temporal anatomical changes from patient-specific, discretely sampled longitudinal image sequences is a complex and clinically vital problem. In this study, we introduce a novel approach, Patient-specific Neural Representation learning with Prior embedding, to address this challenge. Our approach tracks non-rigid anatomical changes, particularly in tumor regions, within individual patients throughout the therapeutic journey.
Materials/Methods: We devised a model that captures temporal changes within a patient-specific image sequence using implicit neural representation (INR) learning. The model leverages an INR network, designated the Reference Network, to embed prior knowledge at the reference time point. A second INR network, the Deformation Network, is then trained on the longitudinal image series, excluding the target interpolation image, to learn a spatially and temporally continuous deformation function. This function predicts the deformation field at any target time point relative to the reference time point, enabling automatic monitoring of anatomical changes during tumor progression. To assess effectiveness, we use the Dice score and the number of voxels with a non-positive Jacobian determinant to evaluate time-dependent deformable registration and image interpolation in patient-specific studies.
Results: Preliminary experiments were conducted on a longitudinal lung cancer imaging dataset comprising 4 patients, each with 5 CT scans. We selected the first CT scan as the reference time point and the third CT scan as the target time point. The trained Reference Network effectively embeds the prior image as a continuous function parametrized by the network weights. Building on the Reference Network, the Deformation Network learns spatial-temporal neural representations that predict a spatially continuous deformation field at any continuous time point. This enables estimation of the target image/segmentation by deforming the reference image/segmentation, yielding promising Dice scores of 0.781 for time-dependent deformable registration and 0.701 for image interpolation, with smooth deformation fields free of folding voxels.
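To make the Deformation Network idea concrete, the following is a minimal illustrative sketch of a spatio-temporal INR: a small MLP that maps a continuous coordinate (x, y, z, t) to a 3-D displacement vector. The layer sizes, sine activation (as in SIREN-style INRs), and function names here are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a small fully connected network (illustrative only)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def deformation_net(params, coords):
    """coords: (N, 4) array of (x, y, z, t); returns (N, 3) displacement vectors."""
    h = coords
    for W, b in params[:-1]:
        h = np.sin(h @ W + b)   # periodic activation, common in INRs
    W, b = params[-1]
    return h @ W + b            # linear output: displacement u(x, t)

# Query the continuous deformation function at arbitrary space-time points.
params = init_mlp([4, 64, 64, 3])
pts = rng.uniform(0.0, 1.0, size=(5, 4))   # five (x, y, z, t) query points
disp = deformation_net(params, pts)
warped = pts[:, :3] + disp                  # deformed spatial coordinates
print(warped.shape)  # (5, 3)
```

Because the network takes t as an ordinary input coordinate, the deformation field can be evaluated at any continuous time point, not just the scanned ones.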
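The two evaluation metrics can be sketched as follows, assuming binary segmentation masks and a dense displacement field on a voxel grid; the finite-difference Jacobian approximation is one common choice, not necessarily the authors' exact implementation.

```python
import numpy as np

def dice_score(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def count_folding_voxels(disp):
    """Count voxels whose deformation Jacobian determinant is <= 0.

    disp: displacement field of shape (D, H, W, 3). The deformation is
    phi(x) = x + u(x), so its Jacobian is I + grad(u), approximated here
    with finite differences along each spatial axis.
    """
    grads = np.stack(
        [np.stack(np.gradient(disp[..., c], axis=(0, 1, 2)), axis=-1)
         for c in range(3)], axis=-2)   # (D, H, W, 3, 3) = grad u
    jac = grads + np.eye(3)             # Jacobian of phi
    det = np.linalg.det(jac)
    return int((det <= 0).sum())

# Identity deformation (zero displacement): det = 1 everywhere, no folding.
zero_disp = np.zeros((4, 4, 4, 3))
print(count_folding_voxels(zero_disp))  # 0

mask = np.zeros((4, 4, 4), bool)
mask[:2] = True
print(dice_score(mask, mask))  # 1.0
```

A count of zero non-positive-determinant voxels indicates a smooth, invertible (fold-free) deformation field, which is the property reported in the Results.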
Conclusion: We have introduced a novel personalized disease-progression monitoring technique that learns a spatially and temporally continuous deformation function to monitor subtle yet significant anatomical changes within tumor regions, leveraging prior knowledge. Our method's feasibility has been demonstrated through preliminary experiments on a longitudinal lung cancer dataset.