RadImageNet pretrained convolutional neural networks (CNNs)
2026-01-24
https://doi.org/10.1148/atlas.1769274475598
Overview
Schema Version
https://atlas.rsna.org/schemas/2025-11/model.json
Name
RadImageNet pretrained convolutional neural networks (CNNs)
Link
https://github.com/BMEII-AI/RadImageNet
Indexing
Keywords: RadImageNet, transfer learning, pretrained models, medical imaging, CT, MRI, ultrasound, dataset, Grad-CAM
Content: CT, MR, US, CH, NR, MK, GI, OI, IN
SNOMED: 237495005, 1386000, 840539006, 233604007
Author(s)
Xueyan Mei
Zelong Liu
Philip M. Robson
Brett Marinelli
Mingqian Huang
Amish Doshi
Adam Jacobi
Chendi Cao
Katherine E. Link
Thomas Yang
Ying Wang
Hayit Greenspan
Timothy Deyer
Zahi A. Fayad
Yang Yang
Organization(s)
BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai
Department of Diagnostic, Interventional and Molecular Radiology, Icahn School of Medicine at Mount Sinai
Department of Mathematics, University of Oklahoma
Department of Radiology, Weill Cornell Medicine
Department of Radiology, East River Medical Imaging
Version
1.0
License
Text: CC BY 4.0
URL: https://creativecommons.org/licenses/by/4.0/
Contact
Yang Yang; email: yy5cc@virginia.edu
Funding
Authors declared no funding for this work.
Ethical review
Institutional review boards waived the requirement for written informed consent for this retrospective, HIPAA-compliant study using de-identified data.
Date
Updated: 2022-09-01
Published: 2022-07-27
Created: 2021-12-17
References
[1] Mei X, Liu Z, Robson PM, et al. "RadImageNet: An Open Radiologic Deep Learning Research Dataset for Effective Transfer Learning". Radiology: Artificial Intelligence. 2022 Sep;4(5):e210315. Published 2022-07-27. doi:10.1148/ryai.210315. PMID: 36204533. PMCID: PMC9530758.
Model
Architecture
Convolutional Neural Networks: Inception-ResNet-v2, ResNet50, DenseNet121, and InceptionV3 trained from scratch on RadImageNet with randomly initialized weights.
Availability
Pretrained models and code: https://github.com/BMEII-AI/RadImageNet; Data access requests: http://radimagenet.com
Clinical benefit
Provides improved starting weights for transfer learning in radiologic AI applications, especially for small datasets, yielding higher AUCs and more consistent, localized attention maps compared with ImageNet pretraining.
Degree of automation
Automates feature extraction and provides pretrained weights for downstream model development; not a standalone diagnostic device.
Indications for use
Research use for initializing deep learning models in medical imaging tasks (classification and segmentation) across CT, MRI, and ultrasound domains.
Input
Grayscale CT, MRI, and ultrasound key images resized to 224×224 pixels for pretraining (RadImageNet) and 256×256 pixels for downstream tasks.
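The input handling described above (grayscale images resized and fed to networks that expect three channels) can be sketched as a minimal NumPy illustration; the function name and the nearest-neighbor resize are illustrative assumptions, not the authors' actual preprocessing pipeline:

```python
import numpy as np

def prepare_image(img, size=224):
    """Resize a 2-D grayscale image (nearest-neighbor) and stack it to
    3 identical channels, scaled to [0, 1], for a CNN that expects RGB input."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = img[np.ix_(rows, cols)].astype(np.float32) / 255.0
    return np.stack([resized] * 3, axis=-1)  # shape (size, size, 3)
```

For downstream tasks, `size=256` would be passed instead, per the field above.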
Instructions
For fine-tuning, unfreezing all layers consistently achieved the best performance; a smaller learning rate (e.g., 0.0001) is suggested when training all layers. For classification, add a global average pooling layer, dropout (0.5), and a softmax output; use patient-wise data splits.
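The patient-wise split recommended above, in which images from the same patient never appear in both training and test sets, can be sketched with a small standard-library function (names and the split fraction are hypothetical, not from the authors' code):

```python
import random

def patient_wise_split(image_ids, patient_of, test_frac=0.2, seed=0):
    """Split image IDs so that all images of a given patient land entirely
    in the training set or entirely in the test set, never both."""
    patients = sorted({patient_of[i] for i in image_ids})
    rng = random.Random(seed)           # seeded for reproducible splits
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    train = [i for i in image_ids if patient_of[i] not in test_patients]
    test = [i for i in image_ids if patient_of[i] in test_patients]
    return train, test
```

Splitting at the patient level rather than the image level avoids leakage from near-duplicate images of the same patient inflating test metrics.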
Limitations
Single-image key findings may not reflect the full clinical workflow; some images contain multiple findings but only one label was used; ROIs defined during clinical interpretation were not used in training; reduced-resolution images may obscure small findings; the 165 categories, grouped by ICD-10 codes and imaging characteristics, are not diagnoses; no radiography images are included in the RadImageNet pretraining dataset; and the number of classes is smaller than ImageNet's.
Output
CDEs: RDE205, RDE746, RDE226
Description: During pretraining, models output softmax probabilities across 165 pathologic labels. When transferred, outputs include task-specific classification scores and Grad-CAM localization maps; Dice scores are reported when segmentation ground truth is available.
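The softmax output over the 165 pretraining labels described above converts a vector of raw network logits into a probability distribution; a minimal, numerically stable NumPy sketch (the function name is illustrative):

```python
import numpy as np

def softmax(logits):
    """Map a logit vector (e.g., over the 165 RadImageNet labels) to
    probabilities that sum to 1. Subtracting the max avoids overflow."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()
```

The highest-probability label is the model's predicted category; Grad-CAM maps are then computed with respect to that class's score.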
Recommendation
Use RadImageNet-pretrained weights as initialization for medical imaging tasks, particularly when target datasets are small or modality/anatomic region overlaps with CT, MRI, or ultrasound.
Regulatory information
Comment: Research-only pretrained models and dataset; no regulatory clearance claimed.
Reproducibility
Training-testing splits were patient-wise; multiple architectures and 24 fine-tuning scenarios reported; code and pretrained weights are publicly available.
Use
Intended: Other
Out-of-scope: Diagnosis
Excluded: Other
User
Intended: Researcher
Out-of-scope: Patient, Layperson
Excluded: Referring provider