CheXDet
Type: model
Date: 2026-01-24
DOI: https://doi.org/10.1148/atlas.1769273838256

Overview

Schema Version

https://atlas.rsna.org/schemas/2025-11/model.json

Name

CheXDet

Link

https://dx.doi.org/10.1148/ryai.210299

Indexing

Keywords: Computer-aided Diagnosis, Conventional Radiography, Convolutional Neural Network (CNN), Deep Learning, Machine Learning, Lesion localization, Chest radiograph, Shortcut learning, Generalizability
Content: CH
RadLex: RID3875, RID34539, RID5335, RID39056, RID35057, RID5352

Author(s)

Luyang Luo
Hao Chen
Yongjie Xiao
Yanning Zhou
Xi Wang
Varut Vardhanabhuti
Mingxiang Wu
Chu Han
Zaiyi Liu
Xin Hao Benjamin Fang
Efstratios Tsougenis
Huangjing Lin
Pheng-Ann Heng

Organization(s)

The Chinese University of Hong Kong, Department of Computer Science and Engineering
The Hong Kong University of Science and Technology, Department of Computer Science and Engineering
Imsight Technology, AI Research Laboratory
The University of Hong Kong, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine
Shenzhen People’s Hospital, Department of Radiology
Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Department of Radiology
Queen Mary Hospital, Department of Radiology
Hospital Authority, Hong Kong, Artificial Intelligence Laboratory, Head Office Information Technology and Health Informatics Division
Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences

Version

1.0

Funding

Key-Area Research and Development Program of Guangdong Province, China (2020B010165004, 2018B010109006); Hong Kong Innovation and Technology Fund (ITS/311/18FP); HKUST Bridge Gap Fund (BGF.005.2021); National Natural Science Foundation of China (U1813204); Shenzhen-HK Collaborative Development Zone.

Ethical review

Retrospective study approved by the institutional ethics committee (approval no. YB-2021-554); the requirement for individual patient consent was waived; all institutional data were de-identified.

Date

Published: 2022-07-20
Created: 2021-12-15

References

[1] Luo L, Chen H, Xiao Y, Zhou Y, Wang X, Vardhanabhuti V, Wu M, Han C, Liu Z, Fang XHB, Tsougenis E, Lin H, Heng P-A. "Rethinking Annotation Granularity for Overcoming Shortcuts in Deep Learning–based Radiograph Diagnosis: A Multicenter Study." Radiology: Artificial Intelligence. 2022;4(5):e210299. Published 2022-07-20. doi:10.1148/ryai.210299. PMID: 36204545. PMCID: PMC9530769.

Model

Architecture

Two-stage object detection network: EfficientNet backbone for feature extraction; three Bidirectional Feature Pyramid Network (BiFPN) layers for multi-scale feature aggregation; Region Proposal Network (RPN) and ROI Align for proposal generation; head with four convolutional layers followed by two fully connected layers for classification and bounding-box regression.

Availability

Source code available upon reasonable request from the corresponding author (H.C.).

Clinical benefit

Assists chest radiograph interpretation by localizing lesions and classifying nine thoracic abnormalities; improves generalizability relative to image-level classifiers by mitigating shortcut learning.

Clinical workflow phase

Clinical decision support systems; research evaluation.

Decision threshold

For lesion localization, bounding boxes were filtered at the threshold maximizing (sensitivity + specificity); for image-level classification from CheXDet, the maximum box probability per class was used as the radiograph-level score with ROC analysis across thresholds.
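The operating point described above (the threshold maximizing sensitivity + specificity, i.e., Youden's J statistic) can be sketched as follows. This is a minimal illustration using only NumPy; `youden_threshold` is a hypothetical helper, not code from the authors.

```python
import numpy as np

def youden_threshold(scores, labels):
    """Pick the score cutoff maximizing sensitivity + specificity (Youden's J).

    scores: predicted probabilities for one class (1-D array)
    labels: binary ground truth (1 = abnormal); assumes both classes present
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)              # sort descending by score
    sorted_labels = labels[order]
    pos = labels.sum()
    neg = len(labels) - pos
    # Thresholding at each sorted score yields cumulative TP/FP counts:
    tp = np.cumsum(sorted_labels)
    fp = np.cumsum(1 - sorted_labels)
    sens = tp / pos
    spec = 1 - fp / neg
    best = np.argmax(sens + spec)            # first index attaining max J
    return scores[order][best], sens[best], spec[best]
```

Applied per class, the returned cutoff is the box-filtering threshold; sensitivity and specificity at that point are the reported operating characteristics.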

Degree of automation

Fully automated image analysis producing lesion bounding boxes and class probabilities.

Indications for use

Research use on frontal chest radiographs to detect and localize nine abnormalities (cardiomegaly, pleural effusion, mass, nodule, pneumonia, pneumothorax, tuberculosis, fracture, aortic calcification).

Input

Frontal chest radiographs (DICOM) preprocessed and resized to 768×768; trained with lesion-level bounding-box annotations for nine diseases.

Instructions

Normalize inputs to zero mean and unit variance; construct a 3-channel input by stacking the grayscale image; resize to 768×768; run the EfficientNet-BiFPN detection network; during evaluation, derive radiograph-level scores as the maximum predicted box probability per class; for detection evaluation, threshold boxes at the operating point maximizing sensitivity + specificity.
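The preprocessing and score-aggregation steps above can be sketched as follows. Function names are illustrative, and the resize interpolation (nearest-neighbor here, for brevity) is an assumption, since the source code is available only on request.

```python
import numpy as np

def preprocess(gray_image):
    """Normalize to zero mean/unit variance, stack to 3 channels, resize to 768x768.

    gray_image: 2-D float array. Nearest-neighbor resizing via index sampling;
    the interpolation method is an assumption, not specified in the paper.
    """
    x = (gray_image - gray_image.mean()) / (gray_image.std() + 1e-8)
    h, w = x.shape
    rows = np.arange(768) * h // 768
    cols = np.arange(768) * w // 768
    x = x[rows][:, cols]
    return np.stack([x, x, x], axis=0)       # shape (3, 768, 768)

def image_level_scores(boxes, n_classes):
    """Radiograph-level score per class = max probability over predicted boxes.

    boxes: list of (class_index, probability) pairs emitted by the detector.
    """
    scores = np.zeros(n_classes)
    for cls, prob in boxes:
        scores[cls] = max(scores[cls], prob)
    return scores
```

In deployment the detector's raw output feeds `image_level_scores`, so a radiograph with no predicted box for a class receives a score of 0 for that class.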

Limitations

External performance was lower than internal; external datasets covered a subset of disease categories; fine-grained annotations increase labeling burden; despite improvements, evidence suggests some residual shortcut learning; models developed and evaluated on frontal CXRs only.

Output

CDEs: RDE2154, RDE2330, RDE2893, RDE2294, RDE339, RDE2295, RDE1702.21, RDE1370
Description: Per-image lesion bounding boxes with associated disease class probabilities; radiograph-level probability per class (maximum over predicted boxes).

Recommendation

Use fine-grained lesion-level annotations to train and apply the model for improved generalizability and more accurate lesion-focused decision-making compared with image-level labels alone.

Reproducibility

Model development details, hyperparameters, and procedures provided in supplemental Appendix E1; preprocessing and evaluation procedures described; code available upon request from the corresponding author.

Use

Intended: Decision support, Detection and diagnosis
Out-of-scope: Mitigation
Excluded: Detection and diagnosis

User

Intended: Radiologist, Researcher