Understanding the dilemma of explainable artificial intelligence: a proposal for a ritual dialog framework

A. Bao and Y. Zeng

This research by Aorigele Bao and Yi Zeng examines how users come to understand Explainable Artificial Intelligence (XAI) through contrastive, functional, and transparency perspectives, and proposes the Ritual Dialog Framework (RDF) to improve communication between AI creators and users, with the aim of building trust and increasing acceptance of XAI.

Introduction
The paper examines how users achieve understanding from XAI, distinguishing explanation (an active disclosure) from understanding (a cognitive achievement). It motivates the need to move beyond merely providing explanations to enabling user understanding as the true aim of XAI. Drawing on analytic philosophy, it frames understanding as relational and considers internalist and externalist perspectives. The authors propose that user understanding in XAI typically occurs via three modes—contrastive, functional, and transparency—and argue that shortcomings in each limit acceptance and trust. The paper’s purpose is to analyze these modes, identify dilemmas affecting user understanding and acceptance, and motivate a dialog-based approach to foster trust between AI creators and users.
Literature Review
The paper surveys XAI techniques and related literature, including transparency requirements, case-based and natural language explanations, and counterfactual approaches. It draws on philosophical work distinguishing explanation from understanding and on debates between internalism and externalism about understanding. Prior user-centric XAI frameworks have aimed to bridge user explanations and AI but under-address acceptance from both user and creator perspectives. The review highlights selective human explanation preferences, the risks of misexplanation, and social-behavioral insights into explanation and trust, laying the foundation for analyzing three modes of understanding in XAI.
Methodology
This is a conceptual and normative analysis. The authors: (1) delineate three modes of understanding implied by XAI practices—contrastive, functional, and transparency; (2) evaluate each mode’s advantages and limitations for enabling user understanding and acceptance; and (3) synthesize insights into a proposed Ritual Dialog Framework (RDF) that reframes explainability as an ongoing, structured dialogic process aimed at cultivating trust. The approach integrates analytic philosophy of understanding, social-scientific insights on explanation and trust, and ethical-political considerations of technology transparency. No empirical studies were conducted.
Key Findings
- Contrastive understanding (e.g., counterfactual and case-based explanations) offers non-intrusive, post-hoc intelligibility without altering models and aligns with selective human explanation preferences (an illustrative counterfactual sketch follows this list). However, it is partial, struggles to reveal shared features across models, can shift responsibility to users, and risks relativism and a lack of cross-framework verifiability.
- Functional understanding clarifies design goals and effectiveness, enabling users to grasp purpose without deep technical knowledge, and can be robust to external interference. Yet it may not map to feature-level or output explanations, can invite misexplanations, and leaves accountability diffuse; it may also be biased by steady-state assumptions and reductionist simplification.
- Transparency understanding aims for process openness and user rights but faces diverse user needs, dynamic exceptions, risks of cognitive penetration and bias, and challenges in maintaining value alignment and reaching practical consensus on appropriate transparency levels.
- Collectively, these modes reveal a dilemma: current XAI often fails to transform explanation into user understanding and trust, and can reallocate responsibility ambiguously.
- The authors propose the Ritual Dialog Framework (RDF): a sociotechnical, dialog-centered approach that treats interactions as ongoing rituals emphasizing shared goals, turn-taking, feedback, value alignment, and narrative transparency by creators. RDF focuses on building trust as a precondition for acceptance, mediating between explanation and user trust.
- Preconditions for effective RDF include shared objectives, mutual willingness to participate, capacity for ongoing dialog, and anticipatory/predictive engagement in discourse.
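To make the contrastive mode concrete, the sketch below shows the simplest form of a counterfactual explanation: "the decision would have differed had this feature been different." It is an illustrative toy only, not the authors' method; the linear loan-approval scorer, its feature names and weights, and the greedy single-feature search are all assumptions made for this example.

```python
import numpy as np

# Hypothetical linear "loan approval" scorer. The feature names, weights, and
# threshold are illustrative assumptions, not taken from the paper.
FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = np.array([0.6, -0.8, 0.3])
THRESHOLD = 0.0

def predict(x: np.ndarray) -> int:
    """Return 1 (approve) if the linear score exceeds the threshold, else 0 (reject)."""
    return int(float(x @ WEIGHTS) > THRESHOLD)

def counterfactual(x: np.ndarray, step: float = 0.01, max_iters: int = 5000):
    """Greedily nudge the most influential feature until the decision flips,
    yielding a contrastive 'what would have had to change' explanation."""
    original = predict(x)
    cf = x.astype(float).copy()
    # +1 means the score must rise to flip the decision; -1 means it must fall.
    direction = 1.0 if original == 0 else -1.0
    i = int(np.argmax(np.abs(WEIGHTS)))  # feature with the largest influence on the score
    for _ in range(max_iters):
        if predict(cf) != original:
            return cf
        cf[i] += direction * step * np.sign(WEIGHTS[i])
    return None  # no counterfactual found within the iteration budget

applicant = np.array([0.4, 0.5, 0.2])  # scores below the threshold, so rejected
cf = counterfactual(applicant)
if cf is not None:
    for name, before, after in zip(FEATURES, applicant, cf):
        if not np.isclose(before, after):
            print(f"Decision flips if {name} changes from {before:.2f} to {after:.2f}")
```

Real counterfactual methods add plausibility and actionability constraints that this toy omits; the sketch only illustrates the "difference that would flip the decision" structure on which the contrastive mode relies, and hence why such explanations remain partial.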
Discussion
The analysis shows that while contrastive, functional, and transparency modes each contribute to intelligibility, none reliably delivers the user’s cognitive achievement of understanding or the social trust necessary for acceptance. The proposed RDF addresses this by reframing XAI as a continuous, ritualized dialog between creators and users that centers on shared purposes, value alignment, and narrative disclosure. This sociotechnical perspective explains how XAI can move from information provision to trust-building, mitigating accountability diffusion and biases. The findings suggest that acceptance depends not only on technical explanation quality but also on institutionalized practices of dialog that acknowledge user vulnerability and foster reliable, user-centered trust.
Conclusion
The paper contributes a structured critique of three dominant understanding modes in XAI—contrastive, functional, and transparency—demonstrating their respective strengths and systemic limitations in achieving user understanding and acceptance. It advances the Ritual Dialog Framework as a practical philosophical proposal to socialize XAI through ongoing, value-aligned dialogic rituals that bridge creators and users, aiming to convert explanation into trust. Future work should operationalize RDF into concrete processes and tools, empirically evaluate its effectiveness across domains and user groups, develop metrics for trust and understanding, and integrate ethical assessment mechanisms to calibrate transparency, accountability, and value alignment within real-world AI lifecycles.
Limitations
The work is conceptual and does not present empirical validation or user studies. Its recommendations depend on contextual implementation and may face challenges in diverse domains and stakeholder settings (e.g., establishing shared goals, sustaining dialog, and aligning values). The feasibility of RDF, appropriate levels of transparency, and measures of trust and understanding require operationalization and empirical assessment. Additionally, the framework must contend with cognitive biases, public preconceptions, and potential accountability diffusion in practice.