
Medicine and Health

Shaping the future of AI in healthcare through ethics and governance

R. Bouderhem

This research conducted by Rabâi Bouderhem delves into the multifaceted challenges posed by Artificial Intelligence in healthcare, from ethical dilemmas and data privacy issues to the need for enhanced international regulations. It underscores the immense potential of AI in transforming diagnostics and patient care while proposing solutions for a more equitable health landscape.
Introduction

Regulating the use of AI in healthcare (Bouderhem 2022, 2023) and its challenges at the national, regional and international levels is a complex and crucial topic. AI systems have the potential (Davenport and Kalakota 2019) to improve health outcomes, enhance research and clinical trials, facilitate early detection and diagnostics for better treatment, and empower both health workers and patients, who can rely on remote health monitoring in remote areas or developing countries. However, AI also poses ethical, legal, and social risks (Jiang et al. 2021), such as data privacy, algorithmic bias, patient safety, and environmental impact (Stahl et al. 2023). The WHO has published two reports on the use of AI systems in healthcare, in 2021 and 2023 respectively (WHO 2021, 2023). These reports outline key considerations and principles for the ethical and responsible use of AI systems. AI for health should be grounded in equity and should aim to respect human dignity, fundamental rights and values. AI systems should promote equity, fairness, inclusiveness, and accountability. The WHO's reports also highlight the challenges and the legal and ethical gaps and voids that exist today in AI for health. There is currently a lack of harmonization and coordination between states and key stakeholders, with few harmonized standards, for example on data privacy. It is extremely difficult for regulatory authorities to keep up with the rapid pace of innovation; AI models and generative AI such as ChatGPT are a clear illustration, with unknown results and difficulty in predicting the impact of their workings on healthcare systems. The WHO considers that there is a need to proactively build collaboration among these different actors. It is trying to develop shared standards and scopes to help implement its principles and recommendations at the national and regional levels.
The WHO also encourages the development of informed and inclusive approaches to the regulation of AI for health, such as co-regulation, self-regulation (Schmidt et al. 2023), and adaptive regulation, which can balance the benefits and risks of AI and foster trust and confidence among the public and the health sector. However, these rules are merely guidance for WHO Members and do not create any legal obligations. Therefore, the WHO should establish legally binding rules for AI in healthcare, as it is the rightful authority to monitor global health and specifically health rights. The International Health Regulations (IHR), adopted by the 58th World Health Assembly in 2005 through Resolution WHA58.3 (WHO, IHR 2005), should be amended to better address the use of AI systems in healthcare. Also, the importance of regional regulations such as EU regulations should not be minimized (Boursier et al. 2019). The EU General Data Protection Regulation (GDPR) (EU Official Journal 2016), Data Act (EU Commission 2022) and Artificial Intelligence (AI) Act (EU Commission 2021) could serve as a legal model for WHO Members in adopting new legally binding rules for ethical and responsible AI systems in healthcare and other fields. The objective of the EU Data Act is to harmonize rules relating to fair access to data and its use by public and private actors. Like the GDPR before it, the EU Data Act will help patients keep control over their health data more efficiently. EU authorities have proposed a comprehensive legal framework for the regulation and promotion of ethical and responsible AI systems in healthcare and other fields. Such AI systems should be aligned with the principles of human-centricity, trustworthiness and sustainability (Griffis et al. 2023).
The AI Act ensures that AI systems are safe, reliable, and respect fundamental human rights such as the right to privacy; another fundamental aspect for EU authorities is to foster innovation and competitiveness. The AI Act also aims to enhance cooperation and coordination among EU Member States and stakeholders, and its legal framework takes into consideration the global nature of AI. It is expected that the AI Act will promote the EU's leadership and influence in the international field of data protection regulation, as the EU was the first to enact the GDPR, which has inspired other regions and countries (Bentoavone et al. 2022). Expectations for the AI Act are very high, as observers believe that the new regulation will provide legal certainty and trust for all AI stakeholders. A provisional agreement was reached on 3 December 2023 (European Parliament 2023), which suggested that the AI Act could enter into force in 2024. On 24 January 2024, the European Commission adopted a decision establishing the European Artificial Intelligence Office, which is intended to become a key body responsible for overseeing 'the advancements in artificial intelligence models, including as regards general-purpose AI models, the interaction with human autonomy, and [which] should play a key role in investigations and enforcement of the global vocation' (EU Commission 2024). On 2 February 2024, the AI Act was unanimously approved by the Council of EU Ministers (EU Council 2024). On 13 February 2024, the AI Act entered its final legislative stage, having been approved following discussions on a compromise deal between the European Commission, the Council of EU Ministers, the joint committee on Internal Market and Consumer Protection, and the committee on Civil Liberties, Justice and Home Affairs (European Parliament 2024). The AI Act is now set to be finally approved by the European Parliament in a plenary session scheduled for 10 April 2024 (Gibney 2024).
EU regulations typically enter into force on the twentieth day following their publication in the Official Journal, unless otherwise specified. The AI Act's strict definitions of risks and obligations should also be borne in mind. Regarding the regulation of AI in healthcare, it can be argued that states have a general obligation of cooperation under the United Nations (UN) Charter, including on health matters. Therefore, the WHO should remind states of their duties to ensure the effective use of AI in healthcare.

Literature Review
Methodology

This research focuses on publicly available data up to February 2024. Collected data and articles were first screened by title and abstract, and the full texts of eligible articles were then evaluated. Using the same search queries, a broad literature search was performed in English on the Google Scholar search engine, retrieving articles focusing on the use of AI in healthcare, while paying attention to nuanced applications not primarily mentioned by the EU or the WHO. I also searched for articles on concrete applications of AI in healthcare to determine how health systems deal with AI-related challenges. I searched WHO's institutional repositories for additional information. I examined the insufficiency of the legal framework applicable to the use of AI in healthcare, emphasizing the necessity of legally binding rules and illustrating the prominent gap between non-binding WHO guidance and the regulations established in the EU. This research highlighted the need for clearer standards, as perceived by the international community with respect to WHO rules. Some international and global rules should therefore be privileged by the international community as regards the use of AI in healthcare. Still, there is no universal agreement on the use of AI in healthcare.

Key Findings
  • AI in healthcare spans multiple applications: care management, medical imaging analysis, drug discovery and repurposing, forecasting kidney disease, oncology (diagnosis, prognosis, treatment, follow-up), precision medicine, AI-based diagnostics, and remote health monitoring (Table 1).
  • Documented benefits include improved detection and diagnostics (e.g., MRI, X-ray analysis), reduced errors, efficiency gains, cost reduction, and enhanced access (telemedicine, remote areas).
  • Major challenges identified (Table 2): data privacy and security; data collection, storage, quality and availability; interoperability; bias and discrimination; health equity and fairness; affordability and access (especially in developing countries); governance and regulation; third-party access control; explainability; transparency and accountability; implementation and adoption barriers; errors/misdiagnosis; and performance monitoring.
  • Explainability and trust: black-box models undermine trust; need for explainable AI to support informed consent and accountability.
  • Regulatory gaps: WHO guidance is non-binding; current frameworks struggle to keep pace with rapid AI innovation (e.g., LLMs like ChatGPT). There is a lack of harmonization across jurisdictions.
  • EU as a model: GDPR, Data Act (entered into force 11 Jan 2024; main provisions applicable from 12 Sep 2025), and AI Act (progressed through EU institutions in early 2024) provide comprehensive, risk-based governance, transparency duties, and user rights.
  • AI Act risk taxonomy (Table 5): unacceptable risk (bans), high risk (ex-ante and lifecycle oversight), general-purpose/generative AI (transparency obligations), limited risk (minimal transparency).
  • WHO guiding principles (Table 6): protecting autonomy, promoting safety, ensuring transparency, fostering responsibility, ensuring equity, and promoting sustainable AI.
  • Privacy safeguards recommended (Table 3): staff education, routine risk assessments, VPN use, restricted and role-based access, two-factor authentication, encryption, and security awareness training.
  • Governance solutions (Table 4): legally binding WHO rules/standards (e.g., via IHR reform), stronger regulatory oversight, transparency and accountability requirements, encouragement of industry self-regulation, international cooperation, ethical use of personal health data, and building an AI culture across stakeholders.
  • Strategic proposal: elevate WHO’s role to establish binding norms (e.g., amend IHR 2005) and operationalize the UN duty to cooperate for equitable, safe, and rights-respecting AI in health.
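The privacy safeguards recommended in Table 3 (role-based access, two-factor authentication, encryption) can be sketched in a few lines of code. The following Python fragment is a minimal illustration only, not an implementation from the paper: the role names, permission sets, and helper functions are hypothetical, and a real deployment would rely on audited identity-management and cryptography infrastructure.

```python
import hashlib
import hmac
import secrets

# Hypothetical role-to-permission mapping illustrating role-based access
# control over patient records (roles and actions are illustrative only).
PERMISSIONS = {
    "physician": {"read", "write"},
    "nurse": {"read"},
    "billing": set(),  # no access to clinical notes
}

def can_access(role: str, action: str) -> bool:
    """Return True only if the role is explicitly granted the action."""
    return action in PERMISSIONS.get(role, set())

def hash_token(token: str, salt: bytes) -> bytes:
    """Derive a verifier for a second-factor code; the code itself is never stored."""
    return hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)

def verify_second_factor(token: str, salt: bytes, stored: bytes) -> bool:
    """Compare the submitted code against the stored verifier in constant time."""
    return hmac.compare_digest(hash_token(token, salt), stored)

# Usage: enrol a user's one-time code, then check access plus second factor.
salt = secrets.token_bytes(16)
stored = hash_token("492817", salt)
granted = can_access("physician", "write") and verify_second_factor("492817", salt, stored)
```

The design point mirrored from Table 3 is that access is denied by default (an unknown role gets an empty permission set) and that authentication secrets are stored only as salted, iterated hashes rather than in plain text.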
Discussion

The study set out to identify and evaluate the technical, ethical, and regulatory challenges of AI in healthcare and to propose governance pathways that maximize benefits while mitigating risks. The mapping of concrete AI applications demonstrates clear clinical and operational value (diagnostics, precision medicine, monitoring), directly addressing the promise of better outcomes and efficiency. However, the catalog of challenges—privacy/security, bias, data quality and representativeness, interoperability, explainability, and accountability—shows why current governance is insufficient and uneven across jurisdictions. To bridge this gap, the paper argues that WHO guidance, while influential, lacks legal force; therefore, elevating WHO’s role via binding norms (e.g., IHR amendments) and leveraging the UN Charter’s duty to cooperate can provide the necessary global scaffolding. In parallel, the EU’s comprehensive approach (GDPR, Data Act, AI Act) offers an actionable regulatory template: risk-based oversight, transparency obligations, and strong data rights can enhance safety, fairness, and trust. Implementing concrete safeguards (access controls, encryption, staff training) and fostering an AI culture through education and continuous monitoring can operationalize these principles at the point of care. Together, these findings support a multi-level governance model—international (WHO/IHR), regional (EU), and organizational (health systems, developers)—to harmonize standards, ensure equity and safety, and maintain public trust while enabling innovation.

Conclusion

The current legal framework for AI in healthcare is fragmented and often non-binding, limiting effective oversight, equity, and trust. Given AI’s rapid evolution and system-wide implications, the WHO should assume a stronger normative role by promoting legally binding international standards—potentially through amendments to the International Health Regulations—to address safety, privacy, bias, transparency, and accountability. The EU’s GDPR, Data Act, and AI Act provide a robust reference model for risk-based regulation, data governance, and stakeholder coordination. Implementing practical privacy and security safeguards, promoting transparency and explainability, and building an AI culture across stakeholders are essential. Ultimately, coordinated international cooperation is needed to ensure AI advances global health goals, improves access (especially in low-resource settings), and protects fundamental rights. WHO Members should actively cooperate to elaborate new guidelines and binding rules under the IHR.

Limitations
  • Scope limited to publicly available sources up to February 2024.
  • Literature search primarily conducted in English via Google Scholar and WHO repositories, which may introduce selection and language bias.
  • Narrative policy analysis without primary empirical data or quantitative evaluation.
  • Absence of a universally agreed framework for AI in healthcare may affect generalizability of recommendations across jurisdictions.