-
Comp2Comp: Open-Source Software with FDA-Cleared Artificial Intelligence Algorithms for Computed Tomography Image Analysis
Authors:
Adrit Rao,
Malte Jensen,
Andrea T. Fisher,
Louis Blankemeier,
Pauline Berens,
Arash Fereydooni,
Seth Lirette,
Eren Alkan,
Felipe C. Kitamura,
Juan M. Zambrano Chaves,
Eduardo Reis,
Arjun Desai,
Marc H. Willis,
Jason Hom,
Andrew Johnston,
Leon Lenchik,
Robert D. Boutin,
Eduardo M. J. M. Farina,
Augusto S. Serpa,
Marcelo S. Takahashi,
Jordan Perchik,
Steven A. Rothenberg,
Jamie L. Schroeder,
Ross Filice,
Leonardo K. Bittencourt, et al. (6 additional authors not shown)
Abstract:
Artificial intelligence allows automatic extraction of imaging biomarkers from already-acquired radiologic images. This paradigm of opportunistic imaging adds value to medical imaging without additional imaging costs or patient radiation exposure. However, many open-source image analysis solutions lack rigorous validation while commercial solutions lack transparency, leading to unexpected failures when deployed. Here, we report development and validation for two of the first fully open-sourced, FDA-510(k)-cleared deep learning pipelines to mitigate both challenges: Abdominal Aortic Quantification (AAQ) and Bone Mineral Density (BMD) estimation are both offered within the Comp2Comp package for opportunistic analysis of computed tomography scans. AAQ segments the abdominal aorta to assess aneurysm size; BMD segments vertebral bodies to estimate trabecular bone density and osteoporosis risk. AAQ-derived maximal aortic diameters were compared against radiologist ground-truth measurements on 258 patient scans enriched for abdominal aortic aneurysms from four external institutions. BMD binary classifications (low vs. normal bone density) were compared against concurrent DXA scan ground truths obtained on 371 patient scans from four external institutions. AAQ had an overall mean absolute error of 1.57 mm (95% CI 1.38-1.80 mm). BMD had a sensitivity of 81.0% (95% CI 74.0-86.8%) and specificity of 78.4% (95% CI 72.3-83.7%). Comp2Comp AAQ and BMD demonstrated sufficient accuracy for clinical use. Open-sourcing these algorithms improves transparency of typically opaque FDA clearance processes, allows hospitals to test the algorithms before cumbersome clinical pilots, and provides researchers with best-in-class methods.
Submitted 10 February, 2026;
originally announced February 2026.
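The validation metrics reported above (mean absolute error for AAQ diameters; sensitivity and specificity for BMD binary calls) can be illustrated with a minimal sketch. The numbers below are toy values invented for illustration, not the study's data:

```python
def mean_absolute_error(pred, truth):
    """Mean absolute error, the metric used to validate AAQ aortic diameters."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def sensitivity_specificity(pred, truth):
    """Sensitivity and specificity for a binary classification
    (1 = low bone density, 0 = normal), as reported for BMD."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Toy numbers for illustration only -- not the study's data.
diam_pred = [52.1, 30.4, 45.0]   # mm, model-estimated maximal aortic diameter
diam_true = [50.0, 31.0, 46.5]   # mm, radiologist ground-truth measurement
print(round(mean_absolute_error(diam_pred, diam_true), 2))  # → 1.4
```

The paper's confidence intervals would additionally require a resampling or analytic procedure over the full 258- and 371-scan cohorts.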
-
Evaluating the Clinical Impact of Generative Inpainting on Bone Age Estimation
Authors:
Felipe Akio Matsuoka,
Eduardo Moreno J. M. Farina,
Augusto Sarquis Serpa,
Soraya Monteiro,
Rodrigo Ragazzini,
Nitamar Abdala,
Marcelo Straus Takahashi,
Felipe Campos Kitamura
Abstract:
Generative foundation models can remove visual artifacts through realistic image inpainting, but their impact on medical AI performance remains uncertain. Pediatric hand radiographs often contain non-anatomical markers, and it is unclear whether inpainting these regions preserves features needed for bone age and gender prediction. To evaluate the clinical reliability of generative model-based inpainting for artifact removal, we used the RSNA Bone Age Challenge dataset, selecting 200 original radiographs and generating 600 inpainted versions with gpt-image-1 using natural language prompts to target non-anatomical artifacts. Downstream performance was assessed with deep learning ensembles for bone age estimation and gender classification, using mean absolute error (MAE) and area under the ROC curve (AUC) as metrics, and pixel intensity distributions to detect structural alterations. Inpainting markedly degraded model performance: bone age MAE increased from 6.26 to 30.11 months, and gender classification AUC decreased from 0.955 to 0.704. Inpainted images displayed pixel-intensity shifts and inconsistencies, indicating structural modifications not corrected by simple calibration. These findings show that, although visually realistic, foundation model-based inpainting can obscure subtle but clinically relevant features and introduce latent bias even when edits are confined to non-diagnostic regions, underscoring the need for rigorous, task-specific validation before integrating such generative tools into clinical AI workflows.
Submitted 28 November, 2025;
originally announced November 2025.
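The AUC metric behind the reported drop (0.955 to 0.704 for gender classification) is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive outranks a randomly chosen negative. A minimal pure-Python sketch with invented scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy classifier scores (invented, not study data): label 1 vs label 0
scores = [0.9, 0.8, 0.35, 0.4, 0.2]
labels = [1,   1,   1,    0,   0]
print(auc(scores, labels))  # 5 of 6 positive-negative pairs correctly ranked
```

This O(n_pos * n_neg) loop is fine for a sketch; production code would use a sorted-rank implementation such as scikit-learn's `roc_auc_score`.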
-
PARROT: An Open Multilingual Radiology Reports Dataset
Authors:
Bastien Le Guellec,
Kokou Adambounou,
Lisa C Adams,
Thibault Agripnidis,
Sung Soo Ahn,
Radhia Ait Chalal,
Tugba Akinci D Antonoli,
Philippe Amouyel,
Henrik Andersson,
Raphael Bentegeac,
Claudio Benzoni,
Antonino Andrea Blandino,
Felix Busch,
Elif Can,
Riccardo Cau,
Armando Ugo Cavallo,
Christelle Chavihot,
Erwin Chiquete,
Renato Cuocolo,
Eugen Divjak,
Gordana Ivanac,
Barbara Dziadkowiec Macek,
Armel Elogne,
Salvatore Claudio Fanni,
Carlos Ferrarotti, et al. (63 additional authors not shown)
Abstract:
Rationale and Objectives: To develop and validate PARROT (Polyglottal Annotated Radiology Reports for Open Testing), a large, multicentric, open-access dataset of fictional radiology reports spanning multiple languages for testing natural language processing applications in radiology. Materials and Methods: From May to September 2024, radiologists were invited to contribute fictional radiology reports following their standard reporting practices. Contributors provided at least 20 reports with associated metadata including anatomical region, imaging modality, clinical context, and for non-English reports, English translations. All reports were assigned ICD-10 codes. A human vs. AI report differentiation study was conducted with 154 participants (radiologists, healthcare professionals, and non-healthcare professionals) assessing whether reports were human-authored or AI-generated. Results: The dataset comprises 2,658 radiology reports from 76 authors across 21 countries and 13 languages. Reports cover multiple imaging modalities (CT: 36.1%, MRI: 22.8%, radiography: 19.0%, ultrasound: 16.8%) and anatomical regions, with chest (19.9%), abdomen (18.6%), head (17.3%), and pelvis (14.1%) being most prevalent. In the differentiation study, participants achieved 53.9% accuracy (95% CI: 50.7%-57.1%) in distinguishing between human and AI-generated reports, with radiologists performing significantly better (56.9%, 95% CI: 53.3%-60.6%, p<0.05) than other groups. Conclusion: PARROT represents the largest open multilingual radiology report dataset, enabling development and validation of natural language processing applications across linguistic, geographic, and clinical boundaries without privacy constraints.
Submitted 25 August, 2025; v1 submitted 25 July, 2025;
originally announced July 2025.
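The accuracy confidence intervals reported for the differentiation study (e.g., 53.9%, 95% CI 50.7%-57.1%) are binomial proportion intervals; one standard choice is the Wilson score interval. A sketch with illustrative counts (the study's actual response counts are not given in the abstract):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion,
    a common choice for reader-study accuracy estimates."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Illustrative counts only: 83 correct out of 154 responses
lo, hi = wilson_ci(83, 154)
print(f"accuracy {83 / 154:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

With the study's much larger response pool, the interval would tighten toward the reported ±3 percentage points.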
-
The RSNA Lumbar Degenerative Imaging Spine Classification (LumbarDISC) Dataset
Authors:
Tyler J. Richards,
Adam E. Flanders,
Errol Colak,
Luciano M. Prevedello,
Robyn L. Ball,
Felipe Kitamura,
John Mongan,
Maryam Vazirabad,
Hui-Ming Lin,
Anne Kendell,
Thanat Kanthawang,
Salita Angkurawaranon,
Emre Altinmakas,
Hakan Dogan,
Paulo Eduardo de Aguiar Kuriki,
Arjuna Somasundaram,
Christopher Ruston,
Deniz Bulja,
Naida Spahovic,
Jennifer Sommer,
Sirui Jiang,
Eduardo Moreno Judice de Mattos Farina,
Eduardo Caminha Nunes,
Michael Brassil,
Megan McNamara, et al. (11 additional authors not shown)
Abstract:
The Radiological Society of North America (RSNA) Lumbar Degenerative Imaging Spine Classification (LumbarDISC) dataset is the largest publicly available dataset of adult MRI lumbar spine examinations annotated for degenerative changes. The dataset includes 2,697 patients with a total of 8,593 image series from 8 institutions across 6 countries and 5 continents. The dataset is available for free for non-commercial use via Kaggle and RSNA Medical Imaging Resource of AI (MIRA). The dataset was created for the RSNA 2024 Lumbar Spine Degenerative Classification competition where competitors developed deep learning models to grade degenerative changes in the lumbar spine. The degree of spinal canal, subarticular recess, and neural foraminal stenosis was graded at each intervertebral disc level in the lumbar spine. The images were annotated by expert volunteer neuroradiologists and musculoskeletal radiologists from the RSNA, American Society of Neuroradiology, and the American Society of Spine Radiology. This dataset aims to facilitate research and development in machine learning and lumbar spine imaging to lead to improved patient care and clinical efficiency.
Submitted 10 June, 2025;
originally announced June 2025.
-
RIDGE: Reproducibility, Integrity, Dependability, Generalizability, and Efficiency Assessment of Medical Image Segmentation Models
Authors:
Farhad Maleki,
Linda Moy,
Reza Forghani,
Tapotosh Ghosh,
Katie Ovens,
Steve Langer,
Pouria Rouzrokh,
Bardia Khosravi,
Ali Ganjizadeh,
Daniel Warren,
Roxana Daneshjou,
Mana Moassefi,
Atlas Haddadi Avval,
Susan Sotardi,
Neil Tenenholtz,
Felipe Kitamura,
Timothy Kline
Abstract:
Deep learning techniques hold immense promise for advancing medical image analysis, particularly in tasks like image segmentation, where precise annotation of regions or volumes of interest within medical images is crucial but manually laborious and prone to interobserver and intraobserver biases. As such, deep learning approaches could provide automated solutions for such applications. However, the potential of these techniques is often undermined by challenges in reproducibility and generalizability, which are key barriers to their clinical adoption. This paper introduces the RIDGE checklist, a comprehensive framework designed to assess the Reproducibility, Integrity, Dependability, Generalizability, and Efficiency of deep learning-based medical image segmentation models. The RIDGE checklist is not just a tool for evaluation but also a guideline for researchers striving to improve the quality and transparency of their work. By adhering to the principles outlined in the RIDGE checklist, researchers can ensure that their developed segmentation models are robust, scientifically valid, and applicable in a clinical setting.
Submitted 3 July, 2024; v1 submitted 16 January, 2024;
originally announced January 2024.
-
AI Models Close to your Chest: Robust Federated Learning Strategies for Multi-site CT
Authors:
Edward H. Lee,
Brendan Kelly,
Emre Altinmakas,
Hakan Dogan,
Maryam Mohammadzadeh,
Errol Colak,
Steve Fu,
Olivia Choudhury,
Ujjwal Ratan,
Felipe Kitamura,
Hernan Chaves,
Jimmy Zheng,
Mourad Said,
Eduardo Reis,
Jaekwang Lim,
Patricia Yokoo,
Courtney Mitchell,
Golnaz Houshmand,
Marzyeh Ghassemi,
Ronan Killeen,
Wendy Qiu,
Joel Hayden,
Farnaz Rafiee,
Chad Klochko,
Nicholas Bevins, et al. (5 additional authors not shown)
Abstract:
While it is well known that population differences from genetics, sex, race, and environmental factors contribute to disease, AI studies in medicine have largely focused on locoregional patient cohorts with less diverse data sources. This limitation stems from barriers to large-scale data sharing and ethical concerns over data privacy. Federated learning (FL) is one potential pathway for AI development that enables learning across hospitals without sharing data. In this study, we show the results of various FL strategies on one of the largest and most diverse COVID-19 chest CT datasets: 21 participating hospitals across five continents, comprising >10,000 patients with >1 million images. We also propose an FL strategy that leverages synthetically generated data to overcome class and size imbalances. Finally, we describe the sources of data heterogeneity in the context of FL and show how, even among correctly labeled populations, disparities can arise due to these biases.
Submitted 13 April, 2023; v1 submitted 23 March, 2023;
originally announced March 2023.
-
Best Practices and Scoring System on Reviewing A.I. based Medical Imaging Papers: Part 1 Classification
Authors:
Timothy L. Kline,
Felipe Kitamura,
Ian Pan,
Amine M. Korchi,
Neil Tenenholtz,
Linda Moy,
Judy Wawira Gichoya,
Igor Santos,
Steven Blumer,
Misha Ysabel Hwang,
Kim-Ann Git,
Abishek Shroff,
Elad Walach,
George Shih,
Steve Langer
Abstract:
With the recent advances in A.I. methodologies and their application to medical imaging, there has been an explosion of related research programs utilizing these techniques to produce state-of-the-art classification performance. Ultimately, these research programs culminate in submission of their work for consideration in peer reviewed journals. To date, the criteria for acceptance vs. rejection are often subjective; however, reproducible science requires reproducible review. The Machine Learning Education Sub-Committee of SIIM has identified a knowledge gap and a serious need to establish guidelines for reviewing these studies. Although there have been several recent papers with this goal, the present work is written from the machine learning practitioner's standpoint. In this series, the committee will address the best practices to be followed in an A.I.-based study and present the required sections in terms of examples and discussion of what should be included to make the studies cohesive, reproducible, accurate, and self-contained. This first entry in the series focuses on the task of image classification. Elements such as dataset curation, data pre-processing steps, defining an appropriate reference standard, data partitioning, model architecture, and training are discussed. The sections are presented as they would be detailed in a typical manuscript, with content describing the necessary information that should be included to make sure the study is of sufficient quality to be considered for publication. The goal of this series is to provide resources that not only help improve the review process for A.I.-based medical imaging papers, but also facilitate a standard for the information presented within all components of the research study. We hope to provide quantitative metrics in what otherwise may be a qualitative review process.
Submitted 3 February, 2022;
originally announced February 2022.
-
The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification
Authors:
Ujjwal Baid,
Satyam Ghodasara,
Suyash Mohan,
Michel Bilello,
Evan Calabrese,
Errol Colak,
Keyvan Farahani,
Jayashree Kalpathy-Cramer,
Felipe C. Kitamura,
Sarthak Pati,
Luciano M. Prevedello,
Jeffrey D. Rudie,
Chiharu Sako,
Russell T. Shinohara,
Timothy Bergquist,
Rong Chai,
James Eddy,
Julia Elliott,
Walter Reade,
Thomas Schaffter,
Thomas Yu,
Jiaxin Zheng,
Ahmed W. Moawad,
Luiz Otavio Coelho,
Olivia McDonnell, et al. (78 additional authors not shown)
Abstract:
The BraTS 2021 challenge celebrates its 10th anniversary and is jointly organized by the Radiological Society of North America (RSNA), the American Society of Neuroradiology (ASNR), and the Medical Image Computing and Computer Assisted Interventions (MICCAI) society. Since its inception, BraTS has focused on serving as a common benchmarking venue for brain glioma segmentation algorithms, with well-curated multi-institutional multi-parametric magnetic resonance imaging (mpMRI) data. Gliomas are the most common primary malignancies of the central nervous system, with varying degrees of aggressiveness and prognosis. The RSNA-ASNR-MICCAI BraTS 2021 challenge targets the evaluation of computational algorithms assessing the same tumor compartmentalization, as well as the underlying tumor's molecular characterization, in pre-operative baseline mpMRI data from 2,040 patients. Specifically, the two tasks that BraTS 2021 focuses on are: a) the segmentation of the histologically distinct brain tumor sub-regions, and b) the classification of the tumor's O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. The performance evaluation of all participating algorithms in BraTS 2021 will be conducted through the Sage Bionetworks Synapse platform (Task 1) and Kaggle (Task 2), concluding with the distribution of monetary awards totaling $60,000 to the top-ranked participants.
Submitted 12 September, 2021; v1 submitted 5 July, 2021;
originally announced July 2021.
-
Federated Learning for Breast Density Classification: A Real-World Implementation
Authors:
Holger R. Roth,
Ken Chang,
Praveer Singh,
Nir Neumark,
Wenqi Li,
Vikash Gupta,
Sharut Gupta,
Liangqiong Qu,
Alvin Ihsani,
Bernardo C. Bizzo,
Yuhong Wen,
Varun Buch,
Meesam Shah,
Felipe Kitamura,
Matheus Mendonça,
Vitor Lavor,
Ahmed Harouni,
Colin Compas,
Jesse Tetreault,
Prerna Dogra,
Yan Cheng,
Selnur Erdal,
Richard White,
Behrooz Hashemian,
Thomas Schultz, et al. (18 additional authors not shown)
Abstract:
Building robust deep learning-based models requires large quantities of diverse training data. In this study, we investigate the use of federated learning (FL) to build medical imaging classification models in a real-world collaborative setting. Seven clinical institutions from across the world joined this FL effort to train a model for breast density classification based on the Breast Imaging Reporting and Data System (BI-RADS). We show that despite substantial differences among the datasets from all sites (mammography system, class distribution, and dataset size) and without centralizing data, we can successfully train AI models in federation. The results show that models trained using FL perform on average 6.3% better than their counterparts trained on an institute's local data alone. Furthermore, we show a 45.8% relative improvement in the models' generalizability when evaluated on the other participating sites' testing data.
Submitted 20 October, 2020; v1 submitted 3 September, 2020;
originally announced September 2020.
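The two gains reported above (6.3% average improvement, 45.8% relative improvement in generalizability) are relative-change figures. A tiny sketch of the arithmetic, with invented scores rather than the study's values:

```python
def relative_improvement(fl_score, local_score):
    """Relative gain of a federated model over a locally trained baseline."""
    return (fl_score - local_score) / local_score

# Invented AUC-style scores for illustration only:
print(f"{relative_improvement(0.87, 0.60):.1%}")  # → 45.0%
```

The abstract does not specify the underlying metric or per-site scores, so this only illustrates how such a percentage is formed.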