Code-free deep learning platforms for chest radiograph analysis: evaluation study models
https://doi.org/10.1148/atlas.1769271923801

Overview

Schema Version

https://atlas.rsna.org/schemas/2025-11/model.json

Name

Code-free deep learning platforms for chest radiograph analysis: evaluation study models

Link

https://pubmed.ncbi.nlm.nih.gov/37035428/

Indexing

Keywords: Code-free deep learning, Automated machine learning, Chest radiographs, Pneumonia, Pneumothorax, Multilabel classification, Object detection, Segmentation, External validation, Usability
Content: CH, IN, RS
RadLex: RID10345, RID5350, RID5352
SNOMED: 36118008, 233604007

Author(s)

Samantha M. Santomartino
Nima Hafezi-Nejad
Vishwa S. Parekh
Paul H. Yi

Organization(s)

University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine
The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine
Department of Computer Science, Whiting School of Engineering, Johns Hopkins University
Malone Center for Engineering in Healthcare, Johns Hopkins University

Version

1.0

Contact

Paul H. Yi (email: pyi@som.umaryland.edu)

Funding

Amazon Web Services granted a proof-of-concept credit for investigation of the Amazon platform only; no other industry funds were provided.

Ethical review

All images were de-identified and from public, open access databases; no institutional review board approval was required (HIPAA compliant).

Date

Updated: 2023-03-01
Published: 2023-02-15
Created: 2022-03-28

References

[1] Santomartino SM, Hafezi-Nejad N, Parekh VS, Yi PH. "Performance and Usability of Code-Free Deep Learning for Chest Radiograph Classification, Object Detection, and Segmentation." Radiology: Artificial Intelligence. 2023;5(2):e220062. Published 2023-02-15. doi:10.1148/ryai.220062. PMID: 37035428. PMCID: PMC10077092.

Model

Architecture

Proprietary code-free deep learning platforms (Amazon Rekognition Custom Labels, Apple Create ML, Clarifai Train, Google Cloud AutoML Vision, MedicMind DL Training Platform, Microsoft Azure Custom Vision); the specific model architectures and hyperparameters were not disclosed by the platforms.

Availability

Models were trained within commercial CFDL platforms’ environments for the study; no standalone model artifact provided.

Clinical benefit

Research evaluation only; not intended for clinical diagnosis. Study assessed feasibility and performance of CFDL platforms on chest radiograph tasks.

Clinical workflow phase

Research and evaluation; not for clinical deployment.

Decision threshold

Where applicable, models were evaluated at the platforms' default classification threshold of 0.5 (Clarifai, Google, Microsoft).
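As an illustration of how a fixed 0.5 cutoff converts platform probability scores into multilabel predictions, the sketch below uses hypothetical label names and scores; it is not an actual platform output format.

```python
def apply_threshold(scores, threshold=0.5):
    """Return the labels whose predicted probability meets the threshold."""
    return [label for label, p in scores.items() if p >= threshold]

# Illustrative multilabel scores for one chest radiograph.
predictions = {"pneumonia": 0.82, "pneumothorax": 0.31, "cardiomegaly": 0.55}
print(apply_threshold(predictions))  # ['pneumonia', 'cardiomegaly']
```

Labels at exactly 0.5 are counted as positive here; platforms may differ on that boundary behavior.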

Degree of automation

Automated model design and training within the CFDL platforms; some coding was still required for data preparation and upload.

Indications for use

Not a medical device; study models aimed at classifying thoracic diseases on CXRs, detecting pneumonia bounding boxes, and segmenting pneumothorax in de-identified public datasets.

Input

Chest radiograph images from public datasets (Guangzhou pediatric CXR, NIH-CXR14, RSNA Pneumonia Detection Challenge, SIIM-ACR Pneumothorax; external testing with NIH-CXR14 pediatric subset and CheXpert).

Instructions

Models were trained using each platform's GUI and/or supported code-based upload; data were split 80/10/10 (train/validation/test) when the platform allowed; training was limited to the free tier or a <$100 budget, with early stopping managed by the platforms.
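A minimal sketch of an 80/10/10 shuffle-and-split, assuming a flat list of image filenames; the platforms' own internal split procedures were not disclosed, so this only illustrates the ratio described above.

```python
import random

def split_80_10_10(items, seed=0):
    """Shuffle a list and split it into 80/10/10 train/validation/test subsets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * 0.8)
    n_val = int(len(shuffled) * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Hypothetical filenames; 100 images split into 80/10/10.
train, val, test = split_80_10_10([f"img_{i}.png" for i in range(100)])
print(len(train), len(val), len(test))  # 80 10 10
```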

Limitations

Poor external generalizability
Frequent platform crashes; coding needed for data organization and upload
Limited support for object detection and segmentation
Lack of transparency on preprocessing, architectures, and hyperparameters
Limited or no access to raw predictions
Limited external testing functionality
Inability to include negative images for Google object detection training in this study
Segmentation training unsuccessful

Output

CDEs: RDE374, RDE2459, RDE2439, RDE339
Description: Platform-dependent outputs: image-level classification labels (single- and multilabel), bounding boxes for object detection, and pixel-level masks for segmentation (segmentation not successfully trained).

Recommendation

The authors recommend caution: as evaluated, CFDL platforms are not yet suitable for chest radiograph diagnosis and may be of limited accessibility to users without coding experience.

Regulatory information

Comment: Research-only evaluation of commercial code-free DL platforms; no regulatory authorization claimed.

Reproducibility

Raw image-level prediction outputs were not accessible from platforms; platform-managed data splits and training details limit reproducibility.

Sustainability

Training constrained to free tier or <$100 per model; cloud costs included storage, transactions, and batch predictions; runtime and energy use not reported.

Use

Intended: Image segmentation, Detection and diagnosis
Out-of-scope: Decision support, Diagnosis
Excluded: Other

User

Intended: Referring provider, Researcher
Out-of-scope: Patient
Excluded: Physician