CheXpert chest radiograph test annotations (8 radiologists, 500 images, 14 classes)
Type: dataset
Date: 2026-01-28
DOI: https://doi.org/10.1148/atlas.1769620318695
ID: 17824

Overview

Schema Version

https://atlas.rsna.org/schemas/2025-11/dataset.json

Name

CheXpert chest radiograph test annotations (8 radiologists, 500 images, 14 classes)

Link

https://pmc.ncbi.nlm.nih.gov/articles/PMC10077093

Indexing

Keywords: CheXpert, chest radiograph, annotation, label noise, gold label, schema
Content: CH

Author(s)

Irvin J
Rajpurkar P
Ko M

Ethical review

No human subjects research was performed; the analysis using these annotations was therefore exempt from institutional review board review.

Comments

This study analyzed schema and label noise using the CheXpert test annotation dataset. Eight annotators labeled 500 chest radiograph images for 14 classes; gold labels were defined by majority vote of five randomly chosen annotators from the eight.
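The gold-label procedure described above (majority vote among five annotators sampled from the eight) can be sketched as follows. This is a minimal illustration, not the authors' actual code; the data layout (a dict of per-annotator binary label lists) is an assumption.

```python
import random

def gold_labels(annotations, n_sample=5, seed=0):
    """Derive gold labels by majority vote among a random subset of annotators.

    annotations: dict mapping annotator id -> list of 0/1 labels, one per class.
    Returns a list of 0/1 gold labels, one per class.
    (Hypothetical data layout for illustration only.)
    """
    rng = random.Random(seed)
    # Sample five of the eight annotators without replacement.
    chosen = rng.sample(sorted(annotations), n_sample)
    n_classes = len(next(iter(annotations.values())))
    gold = []
    for c in range(n_classes):
        votes = sum(annotations[a][c] for a in chosen)
        # Majority vote: positive if more than half of the sampled annotators agree.
        gold.append(1 if votes * 2 > n_sample else 0)
    return gold
```

With an odd sample size of five, a strict majority always exists, so no tie-breaking rule is needed.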

Date

Published: 2019-01-01

References

[1] Irvin J, Rajpurkar P, Ko M, et al. "CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison". Proc AAAI Conf Artif Intell. 2019.
[2] Garbin C, Rajpurkar P, Irvin J, Lungren MP, Marques O. "Structured dataset documentation: a datasheet for CheXpert". arXiv:2105.03020. 2021-05-07. Available from: https://arxiv.org/abs/2105.03020

Dataset

Motivation

To explore schema variation and quantify label noise in chest radiograph classification using trusted gold labels.

Noise

Label noise varies by class: agreement is high for pneumothorax and support devices (>90% percent agreement) and low for pneumonia and consolidation.
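A pairwise percent-agreement metric like the one quoted above can be sketched as follows. This is an assumed formulation (average agreement over all annotator pairs and images for one class), not necessarily the exact statistic used in the study.

```python
from itertools import combinations

def percent_agreement(labels):
    """Pairwise percent agreement for a single class.

    labels: list of per-annotator label lists, where labels[a][i] is
            annotator a's 0/1 label for image i. (Assumed layout.)
    Returns the fraction of (annotator pair, image) combinations
    on which the two annotators gave the same label.
    """
    n_images = len(labels[0])
    pairs = list(combinations(range(len(labels)), 2))
    agree = sum(labels[a][i] == labels[b][i]
                for a, b in pairs
                for i in range(n_images))
    return agree / (len(pairs) * n_images)
```

For example, two annotators who label every image identically score 1.0, while systematic disagreement on ambiguous classes (e.g., pneumonia vs. consolidation) drives the score down.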

External data

Annotations derived from the CheXpert test set (8 annotators, 500 images, 14 classes).

Confidentiality

Data obtained without right of redistribution; access can be requested from the authors of CheXpert.