The limits of fair medical imaging AI in real-world generalization


Y. Yang, H. Zhang, et al.

This study examines fairness in medical AI for disease classification across several imaging modalities, showing how demographic shortcuts lead to biased predictions. Conducted by Yuzhe Yang, Haoran Zhang, Judy W. Gichoya, Dina Katabi, and Marzyeh Ghassemi, the research finds that models which encode less demographic information often generalize better across diverse clinical settings, and it distills best practices for equitable AI deployment.
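One way to make "demographic attribute encoding" concrete is a linear probe: freeze the classifier's image features and test how well a simple model can recover a demographic attribute from them. The sketch below is a minimal illustration of that idea, assuming scikit-learn and NumPy are available; the function name and the `features`/`attribute` arrays are illustrative placeholders, not code from the paper.

```python
# Minimal sketch: quantify how much demographic information a model's
# learned representation carries, by fitting a linear probe that
# predicts a demographic attribute (e.g., self-reported sex) from
# frozen image features. All names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def demographic_encoding_auc(features: np.ndarray, attribute: np.ndarray) -> float:
    """AUC of a linear probe predicting a binary attribute from features.

    An AUC near 0.5 suggests the representation carries little
    demographic signal; an AUC near 1.0 suggests strong encoding that a
    downstream classifier could exploit as a demographic shortcut.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, attribute, test_size=0.3, random_state=0, stratify=attribute
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
```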

Abstract
This study investigates the fairness of medical AI for disease classification across multiple imaging modalities (radiology, dermatology, ophthalmology) and datasets. The research confirms that medical imaging models exploit demographic shortcuts, leading to biased predictions. Although algorithmic correction improves fairness within the original data distribution, this 'locally optimal' fairness does not generalize to new test settings. Surprisingly, models that encode less demographic information often perform better ('globally optimal') in out-of-distribution evaluations. The study establishes best practices for maintaining both performance and fairness across diverse clinical deployments.
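To see what "locally optimal" fairness means in practice, one can compare a subgroup fairness gap on an in-distribution test set against the same gap on data from a new site. Below is a minimal Python sketch of that evaluation pattern, using the false-positive-rate difference between two demographic groups as the fairness metric; the dataset and variable names are illustrative assumptions, not the study's actual evaluation code.

```python
# Minimal sketch: a subgroup fairness gap, evaluated separately on
# in-distribution (ID) and out-of-distribution (OOD) test sets. The
# gap is the absolute false-positive-rate difference between two
# demographic groups. All names are illustrative assumptions.
import numpy as np

def fpr(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """False-positive rate: fraction of true negatives predicted positive."""
    negatives = y_true == 0
    return float(y_pred[negatives].mean()) if negatives.any() else float("nan")

def fairness_gap(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute FPR difference between demographic groups 0 and 1."""
    g0, g1 = group == 0, group == 1
    return abs(fpr(y_true[g0], y_pred[g0]) - fpr(y_true[g1], y_pred[g1]))

# Hypothetical usage: a model whose gap is small on the ID set but
# large on the shifted set exhibits the 'locally optimal' fairness
# the study warns about.
# gap_id  = fairness_gap(y_id,  scores_id  > 0.5, group_id)
# gap_ood = fairness_gap(y_ood, scores_ood > 0.5, group_ood)
```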
Publisher
Nature Medicine
Published On
Oct 14, 2024
Authors
Yuzhe Yang, Haoran Zhang, Judy W. Gichoya, Dina Katabi, Marzyeh Ghassemi
Tags
medical AI
fairness
disease classification
demographic shortcuts
imaging modalities
algorithmic correction
clinical deployments