Medicine and Health

Black box problem and African views of trust

C. Ewuoso

Explore how African scholarship on trust sheds light on the opaque nature of clinical artificial intelligence in healthcare. This research by Cornelius Ewuoso unveils the relational and normative challenges that black box AI presents in health professional-patient relationships, raising critical questions about accountability and transparency.

Introduction
Healthcare is poised to be a major consumer of artificial intelligence (AI). There are clear advantages to integrating AI (clinical AI, for this article's purposes) into healthcare services. Beyond its potential to process and interpret large, complex datasets quickly and objectively, often surpassing health professionals, clinical AI can identify or predict essential patterns that are unrecognized by, or too subtle for, humans. In doing so, clinical AI promises to radically enhance precision care, improve disease detection, optimize workflow, and increase efficiency in healthcare delivery while reducing personnel costs and waste. Notably, studies have found that clinical AI can outperform human surgeons in, for example, safely executing laparoscopic suturing. Advanced AI can also function autonomously, that is, without any human in the loop. However, to foster the uptake of AI-based approaches in healthcare and reap their benefits, multiple perspectives are required to address the problems that different clinical AI formats generate, particularly black-box systems. Many scholars recognize that the black box poses a genuine problem. What is missing from this conversation is how it creates that problem, and what the specific nature of the problem is, from the African perspective. This article draws on thinking about trust in African scholarship to describe the nature of the problem black-box clinical AI generates in health professional-patient relationships.
Literature Review
The paper situates its argument within existing scholarship on AI explainability and trust, engaging works by Loi et al., Lipton, London, Felder, and Durán & Jongsma, among others, and contrasting predominantly Western treatments of trust (e.g., Baier, Goldman, Hatherley, Ferrario) with African perspectives. The author reports conducting a non-systematic search across PhilPapers, PubMed, and Google Scholar using terms such as “trust and Afro-communitarianism,” “trust in Africa,” and “black box problems in healthcare.” The search yielded over 200 articles, which were analyzed to extract relevant views on trust (relational, experience-based, and normative) and to map them onto black-box AI issues in clinical contexts.
Methodology
The study adopts a philosophical ethics approach that is descriptive/exploratory. It draws on Afro-communitarian conceptions of trust to analyze how black-box clinical AI creates problems in health professional–patient relationships. Methodologically, the author conducted a non-systematic literature search using targeted phrases related to trust in African scholarship and black-box AI in healthcare across databases including PhilPapers, PubMed, and Google Scholar. Over 200 articles were retrieved and analyzed to outline relevant moral norms and to develop the central descriptive claim. The paper follows a typical philosophical structure (introduction, discussion, conclusion) and addresses potential objections to its claims.
Key Findings
- From an African relational view of trust, black-box clinical AI undermines the transparency essential to fiduciary clinician–patient relationships, preventing clinicians from explaining whether or how patient values are incorporated into AI recommendations.
- From an experience-based view, trustworthiness depends on perceived competence, benevolence, and integrity. Black-box AI impairs clinicians' competence to understand and justify AI outputs, complicating accountability for harms and impeding clinicians' ability to act with demonstrated goodwill toward patient values.
- From a normative (bidirectional) view of trust as involving giving and accepting vulnerability, black-box AI erodes both patients' ability to give vulnerability (due to poor informational disclosure and impaired informed decision-making) and clinicians' ability to accept it (due to loss of epistemic advantage and limited control/awareness over AI-driven actions).
- Explainability (design publicity) is necessary to support informed consent, avoid machine paternalism, enable fiduciary transparency, and satisfy epistemic justice by allowing patients to contribute to the knowledge production shaping recommendations.
- Accountability is hampered by opacity: clinicians often lack sufficient control and awareness/explainability to be accountable for AI-influenced decisions; harms are hard to localize within AI processes, and errors can be amplified by adaptive systems.
- Extrinsic opacity (e.g., proprietary protections) can also generate black-box problems; proposed mitigations include ethical standards-setting with industry, independent clinical review, and sharing feature attribution for each output (a minimal sketch follows this list). Training clinicians for responsible AI use and bias recognition is also advised.
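To make the feature-attribution mitigation concrete, below is a minimal, model-agnostic sketch of per-output attribution. The model, the clinical feature names, and the occlusion-style scoring are all illustrative assumptions, not from the paper; a real deployment would use an established method such as SHAP or LIME.

```python
# Hypothetical sketch: per-output feature attribution for a clinical
# classifier, via a crude occlusion approach (replace one feature with
# its training mean and measure the change in predicted probability).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # illustrative only

# Synthetic stand-in data; a real system would use patient records.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 2] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

def occlusion_attribution(model, X_train, x):
    """For one patient x, score each feature by how much the predicted
    probability drops when that feature is replaced by its mean."""
    baseline = model.predict_proba(x.reshape(1, -1))[0, 1]
    means = X_train.mean(axis=0)
    scores = []
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = means[i]
        masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        scores.append(baseline - masked)
    return baseline, scores

patient = X[0]
prob, scores = occlusion_attribution(model, X, patient)
print(f"predicted probability: {prob:.3f}")
for name, score in zip(feature_names, scores):
    print(f"  {name}: {score:+.3f}")
```

Sharing attributions like these alongside each recommendation is one way, on the paper's account, to let clinicians explain to patients which inputs drove an output; the occlusion approach here is deliberately crude and understates interactions between features.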
Discussion
The paper asks how African conceptions of trust clarify the specific problems posed by black-box clinical AI in clinician–patient relationships. It characterizes black-box AI as unexplainable, either because of a lack of design publicity or because its internal operations are opaque and unsurveyable. In clinical contexts, explanation and understanding are crucial for informed consent and fiduciary transparency. Mapping the African trust concepts:

(1) Relational trust highlights the necessity of transparency to sustain fiduciary relations; black-boxing impedes clinicians' ability to show how patient values inform recommendations and risks machine paternalism.

(2) Experience-based trust emphasizes competence, benevolence, and integrity; opacity undermines clinicians' competence to interpret AI, weakens the justifications they can offer patients (e.g., in value conflicts such as Jehovah's Witness blood transfusion cases), and complicates accountability because clinicians lack control over, and awareness of, AI decision paths.

(3) Normative trust frames trust as a bidirectional duty involving giving and accepting vulnerability; black-boxing deprives clinicians of the epistemic advantage needed to responsibly accept patient vulnerability and limits patients' ability to exercise autonomy through informed decisions and dialogue, thereby degrading communication (clinicians cannot provide material information or reasons for rankings and probabilities).

The paper then addresses three objections. (a) Human-in-the-loop, option-ranking AIs avoid the problem. Reply: epistemic justice requires patient participation in the knowledge production underlying the options, which opacity blocks. (b) Explainability is secondary to outcomes. Reply: in fiduciary contexts, reasons and transparency matter for autonomy, trust, and accountability; overreliance on opaque systems risks ethical deficits and the exclusion of patient values. (c) Extrinsic proprietary secrecy is the core problem. Reply: while extrinsic opacity is serious, it can be mitigated via stakeholder collaboration, ethical standards, independent review, and feature-attribution disclosures; intrinsic opacity nonetheless already undermines trust in the ways analyzed.
Conclusion
The article contributes an African perspective to understanding how black-box clinical AI undermines trust within clinician–patient fiduciary relations. Under the black-box assumption, relational trust is compromised (lack of transparency about value integration), experience-based trust is strained (competence, benevolence, integrity, and accountability are impeded), and the normative, bidirectional vulnerability exchange is undermined (patients cannot adequately give, and clinicians cannot adequately accept, vulnerability). Given trust’s centrality to global acceptance of clinical AI, the paper urges further research: examining diverse groups’ responses to AI formats, identifying requirements for global acceptance, analyzing extrinsic unexplainability (e.g., IP vs. informed decision rights), and delineating boundaries where healthcare automation becomes impermissible.
Limitations
The analysis is descriptive/philosophical and non-systematic; descriptive claims may be contested. The paper focuses primarily on intrinsic opacity, acknowledging that it does not deeply interrogate extrinsic unexplainability (e.g., proprietary constraints) and the balance between intellectual property rights and patients’ rights to informed decision-making. No empirical data are collected; proposals for mitigation (e.g., feature attribution, independent review, training) are not empirically evaluated.