Northwell Health Center for Advanced Medicine New York, New York
C. L. Teng1, A. S. Bhullar2, P. Jermain3, D. Jordon4, R. Nawfel5, P. Patel6, R. Sean7, M. Shang8, and D. H. Wu9; 1Mount Sinai Health System, New York, NY, 2Food and Drug Administration, Silver Spring, MD, 3Medstar Georgetown University Hospital, Washington, DC, 4University Hospitals Cleveland Medical Center, Cleveland, OH, 5Brigham & Women's Hospital, Boston, MA, 6Houston Methodist Hospital, Houston, TX, 7University of Texas at Houston Medical School, Houston, TX, 8Georgetown University Hospital, Washington, DC, 9University of Oklahoma Health Sciences Center, Oklahoma City, OK
Purpose/Objective(s): No new technology arrives without complications. Some complications are technical, while others arise from using the technology for decision-making. Artificial Intelligence (AI) holds both immense potential and pitfalls. Responsible AI could make patient care more efficient, safe, and effective; irresponsible AI could exacerbate discrimination, bias, and disinformation. The objective of this study is to formulate an analysis rubric that cultivates Responsible AI for Radiation Oncology.
Materials/Methods: Radiation Oncology is not the first field to adopt AI. Other fields, such as finance, law, and advertising, have applied AI extensively. An analysis rubric has been proposed to offer structured guidelines for data science (Spector et al., 2023). For this work, we adapt the rubric to analyze AI applications in Radiation Oncology across the following domains: (1) Tractable Data, (2) Technical Approach, (3) Dependability, (4) Explainability, (5) Objectives, (6) Tolerance of Failures, and (7) Broader Impact.
Results: Focusing on Auto-Segmentation (AS) for radiotherapy planning as a key example: (1) Tractable Data requires high-quality imaging data, such as CT or MRI scans, along with expert annotations. Variability in imaging protocols, patient anatomy, and tumor characteristics necessitates a vast training dataset. (2) Technical Approaches involve advanced AI and machine learning models. Because the choice of algorithm significantly impacts performance, generalizability remains challenging; commissioning is therefore mandatory. (3) Models may also be susceptible to attacks or maladaptation that can lead to harm. Continuous monitoring is needed to ensure the Dependability of the models against errors. (4) AS models are often black boxes, and Explainability is frequently deprioritized; however, this opacity can hinder trust and adoption. (5) The Objectives of AS are to improve treatment accuracy and to reduce planning time. Ensuring that the technology complements rather than replaces human expertise is crucial. (6) Tolerance of Failures in AS is likely low, because small segmentation inaccuracies can have significant consequences for treatment outcomes; clinician oversight is therefore critical. (7) AS has the Broader Impact of democratizing access to high-quality radiation therapy, particularly in underserved regions. However, ethical concerns around bias and equity remain: the technology must not exacerbate existing health disparities.
Conclusion: This study highlights the necessity of implementing a structured analysis rubric as the first critical step in the development and deployment of Responsible AI in Radiation Oncology.