Disability and algorithmic fairness in healthcare: a narrative review

Medicine and Health

Y. Vogt

AI promises fairer healthcare, but people with disabilities are often left out. This narrative review synthesizes literature on algorithmic (un)fairness in healthcare, highlights five key areas of concern and potential solutions, and calls for better representation and legal safeguards.
Introduction

Artificial intelligence (AI) has rapidly expanded across healthcare, promising improved quality, accessibility, and equity. Despite anticipated benefits, AI has sometimes exacerbated health inequities, particularly among minority groups. Compared to extensive discussions of bias related to race and gender, there is a notable gap concerning algorithmic unfairness affecting people with disabilities (PWD), who represent approximately 15% of the global population. Definitions of fairness vary and are often treated as technical measures, which influences how systems are evaluated. Prior work (e.g., El Morr et al.) found that disability-related AI studies rarely quantify or discuss AI bias and tend to adopt a medical rather than socio-ethical lens. This review addresses the guiding question: What does algorithmic bias in general healthcare applications for PWD involve and how can it be addressed? The purpose is to synthesize current issues and potential solutions to inform more inclusive and equitable AI in healthcare for PWD.

Literature Review

The narrative review aggregates socio-ethical and algorithmic literature on AI fairness in healthcare, with emphasis on PWD. Socio-ethical strands highlight that fairness encompasses justice, equity, discrimination, and equality, with technical definitions dominating current practice. Multiple fairness notions cannot be satisfied simultaneously (impossibility theorem), suggesting context-dependent, multidimensional approaches. For PWD, individual fairness is often more appropriate than group fairness, and "fairness through awareness", the explicit use of protected attributes, facilitates mitigation.

Regulatory and legal literature points to accountability and liability gaps, nascent oversight (WHO frameworks, EU ethics guidelines and AI Act, FDA guidance), and tensions between privacy and fairness for sensitive disability data. Privacy-preserving strategies (synthetic data, federated learning, added noise, blockchain) and informed consent are discussed, alongside barriers posed by the digital divide and the affordability of assistive AI.

Algorithmic and data-centric literature emphasizes participatory, PWD-centered design, interdisciplinary collaboration, explainability (XAI), human-in-the-loop integration, practitioner and patient education, and ongoing monitoring for drift. Data issues include non-categorical, intersectional disability attributes; heterogeneous, multimodal, incomplete, and historically biased sources; under- and misrepresentation of PWD; and dataset shift. Proposed remedies include narrative, context-rich EHRs processed via NLP, shared labeling standards, targeted data collection and benchmarks, transparency about gaps, IoT-based standardized data capture, and pre-, in-, and post-processing bias mitigation (reweighing, resampling, cost functions, hyperparameter tuning, blinding).

Algorithmic structure itself can introduce or amplify bias; solutions include adjusted loss functions, adversarial approaches, transfer learning, and careful output correction. The literature underscores trade-offs (accuracy vs fairness, interpretability vs performance), trust gaps, scalability and energy costs, and the need for lifecycle-informed ethical processes.
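To make the conflicting fairness notions above concrete, the following minimal sketch (not from the review; data and group coding are hypothetical) computes two common group fairness metrics, demographic parity and equal opportunity, for a binary protected attribute standing in for disability status. The impossibility theorem implies that metrics like these generally cannot all be driven to zero at once on the same predictor.

```python
# Minimal illustration of two group fairness metrics on hypothetical
# predictions. Group membership is a stand-in for a protected attribute
# such as disability status; in practice that attribute is rarely binary
# or cleanly categorical.

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    rate = lambda g: (sum(p for p, m in zip(y_pred, group) if m == g)
                      / group.count(g))
    return abs(rate(1) - rate(0))

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rates between groups 0 and 1."""
    def tpr(g):
        hits = [p for t, p, m in zip(y_true, y_pred, group) if m == g and t == 1]
        return sum(hits) / len(hits)
    return abs(tpr(1) - tpr(0))

# Hypothetical screening outcomes for eight patients (group 1 = PWD).
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]

print(demographic_parity_diff(y_pred, group))         # 0.5
print(equal_opportunity_diff(y_true, y_pred, group))  # 0.5
```

Here both gaps happen to be 0.5, flagging unequal treatment of the PWD group; reducing one gap typically changes the other, which is why the literature calls for context-dependent, multidimensional measures rather than a single score.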

Methodology

Databases searched: PubMed Central, Springer Link, and Science Direct, with preliminary inclusion of Google Scholar results.
Timeframe: January 2019 to December 2024.
Language: English.
Publication types: articles, review articles, and book chapters.
Subject areas: medicine and public health, computer science, science, humanities and social sciences, multidisciplinary, public health, life sciences, law, philosophy, and social sciences.
Search queries: ("fair*" OR "discrimin*" OR "bias") AND ("algorithm*" OR "machine learning" OR "data") AND ("disability" OR "impairment") AND ("healthcare"); for Science Direct, analogous non-truncated terms ("fair" OR "discriminatory" OR "bias") etc.
Screening: PubMed Central returned roughly 100,000 results sorted by relevance; only the first 3,000 were screened due to diminishing relevance. Springer Link and Science Direct results were fully screened. The preliminary Google Scholar search covered the first few result pages using the same queries and conditions.
Inclusion: papers discussing algorithmic bias and fairness in healthcare, including those not specifically focused on PWD, to capture broader fairness issues that affect PWD. The review is narrative and not exhaustive; selection was guided by relevance to AI fairness and disability.

Key Findings

• Corpus: 66 papers; 26% focused substantially on disability, 74% on broader AI fairness in healthcare.
• Five key areas of concern and solutions:
(I) Fairness definitions and measures (socio-ethical): varying statistical/technical definitions; need for context-dependent, multidimensional measures; prioritize individual fairness and "fairness through awareness"; develop fine-grained tests with PWD, including outliers.
(II) Regulations, laws, and rights (socio-ethical): accountability and liability gaps; need for clearer legal frameworks and regulatory oversight; tension between privacy and fairness for disability data; promote secure data governance, informed consent, and accessibility/affordability.
(III) Design and use (algorithmic/data-driven): lack of PWD involvement; "one-size-fits-all" models; black-box opacity; trust and misuse; recommendations include PWD-centered participatory design, interdisciplinary collaboration, explainability/XAI, human-in-the-loop oversight, education, and lifecycle monitoring for drift.
(IV) Training data (algorithmic/data-driven): non-categorical, intersectional disability attributes; heterogeneous, incomplete, and biased datasets; under- and misrepresentation; proposals include narrative EHRs with NLP, shared labeling standards, targeted collection and diverse benchmarks, IoT data, transparency about gaps, and standard bias mitigation techniques.
(V) Algorithm structure (algorithmic/data-driven): models can create or amplify bias; mitigation via adjusted loss functions, adversarial/in-processing constraints, transfer learning, and cautious output correction; monitor for overfitting and catastrophic forgetting.
• Evidence highlights trade-offs (accuracy vs fairness, interpretability vs performance), trust gaps among practitioners, patients, and PWD, scalability and energy costs, and benefits of heterogeneous disability attribute distributions for improved recognition performance.
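One widely used pre-processing mitigation from this family, reweighing in the style of Kamiran and Calders, can be sketched as follows. This is an illustrative sketch with hypothetical toy data, not an implementation from the reviewed papers: each (group, label) combination receives weight P(group) × P(label) / P(group, label), so combinations that are rarer than statistical independence would predict count more during training.

```python
# Hypothetical sketch of reweighing as a pre-processing step: compute a
# per-example weight from marginal and joint frequencies of the protected
# group and the outcome label.

from collections import Counter

def reweigh(groups, labels):
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    # weight = P(g) * P(y) / P(g, y) for each example's (g, y) cell
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group 1 (PWD) is under-represented among positive labels.
groups = [1, 1, 1, 0, 0, 0, 0, 0]
labels = [1, 0, 0, 1, 1, 1, 0, 0]
weights = reweigh(groups, labels)
# The lone PWD-positive example gets weight 1.5 (> 1), over-represented
# cells get weights below 1, and the total weight still sums to n.
```

Passing such weights to a weighted loss during training boosts under-represented combinations without altering the records themselves, which is why reweighing is grouped with resampling among standard pre-processing mitigations.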

Discussion

Findings indicate that biases can be introduced at every stage of AI development—from conception through deployment—and are tightly interwoven with socio-ethical determinants. For PWD, deficiencies in data quantity and quality mirror broader societal and legal invisibility, compounding algorithmic inequities. Excluding PWD from design phases leads to tools that frustrate rather than assist, while robust inclusion and participatory approaches promise greater usability and justice. The review suggests that large, general-purpose healthcare AI systems are less feasible and fair for diverse subgroups than specialized, context-specific tools that allow oversight. Addressing the identified issues can improve AI-driven tools, such as remote patient monitoring, by tailoring systems to PWD needs, ensuring strong privacy/security, validating in real-world settings with PWD feedback, and training stakeholders on capabilities and limitations. Overall, integrating socio-ethical and technical perspectives in a lifecycle-informed process is essential to achieve equitable outcomes.

Conclusion

This narrative review synthesizes recent literature on AI fairness, disability, and healthcare to identify issues and solutions affecting PWD. Future reviews should examine specialized healthcare contexts and intersectional disadvantages (e.g., disability combined with race or gender), recognizing demographic groups as multidimensional rather than categorical. AI fairness in healthcare is complex and sensitive, and unfair systems there can harm PWD more immediately than those in other AI domains. While AI may support care and accessibility, substantial research and policy change must occur across multiple areas simultaneously to achieve true algorithmic fairness for PWD.

Limitations

The review used a limited set of databases and applied inclusion criteria that excluded seminal pre-2019 publications, non-English works, and formats beyond articles, reviews, and book chapters, so it is not comprehensive. The socio-ethical versus algorithmic categorizations are heuristic and overlapping rather than definitive. Disability was analyzed broadly, without fine-grained differentiation by specific disability types, suggesting a need for more granular future work.
