Can digital tools foster ethical deliberation?

J. Sleigh, S. Hubbs, et al.

This study systematically maps and categorizes digital tools for ethical deliberation, shedding light on key features that enhance ethical decision-making. Conducted by Joanna Sleigh, Shannon Hubbs, Alessandro Blasimme, and Effy Vayena at ETH Zürich, this research offers valuable insights for both developers and users of these tools.
Introduction

The paper examines the emerging landscape of digital tools explicitly designed to support human ethical deliberation. In a context where AI and ML tools raise new ethical challenges, the study comparatively analyses computerized tools that aid end-users in identifying ethical issues, exploring options, and weighing solutions. The purpose is to map mechanisms used by these tools and how they are validated, highlighting potential benefits and risks for individuals and society and providing a resource for ethicists, educators, government organizations, and private institutions. The research questions are: R1: What mechanisms (e.g., checklists or scenarios) are used by digital ethics tools to promote ethical deliberation? R2: Do these digital ethics tools provide evidence of effectiveness, specifically in terms of ethical soundness, quality of the ethical deliberation process, or achievement of intended outcomes?

Literature Review

The background situates ethical deliberation as a reflective process, individual or collective, involving recognition of a moral problem, imagining options, and evaluating solutions (drawing on Dewey’s pragmatist ethics and Aristotle). Ethical deliberation tools are described as human-centric heuristic aids that bolster autonomy rather than automate decisions. The literature distinguishes deliberative tools from purely informational simulations or algorithmic decision-makers. It addresses boundary challenges, noting that non-ethics-labeled media (e.g., games) can foster ethical reflection; gamification may enhance engagement and insight but raises concerns about persuasion and nudging. Technology can shape moral responses and emotions influence decisions, prompting inquiry into which mechanisms aid deliberation. Evaluation approaches in prior work include: (a) ethical soundness (alignment with theories/principles), (b) quality of deliberation (process/usability), and (c) outcomes (changes in learning or ethical reasoning). Despite scattered studies of individual tools, a comprehensive overview of digital tools for ethical deliberation was lacking, motivating this systematic mapping.

Methodology

Design: Systematic mapping review (distinct from a systematic review) to collate, describe, and catalogue evidence across diverse sources, identify research gaps, and determine the presence or absence of evaluation/validation research.

Protocol: Six stages: (1) define scope and research questions; (2) execute searches with a predefined strategy; (3) screen for eligibility; (4) code and conduct faceted analysis; (5) critically appraise the overall validity of the evidence base and its subsets; (6) describe, visualize, and report.

Information sources and search strategy: Informed by protocols for web-based resource identification and grey-literature searching. Sources included: (1) academic databases (PubMed, Scopus, IEEE Xplore) using the keywords “digital”, “ethics”, “decision”, “deliberation”, and “tool” (“decision” was added because “deliberation” is rarely used outside academic contexts); (2) digital libraries (iOS App Store, Google Play Store, Chrome Web Store, Microsoft Edge Add-ons, GitHub); (3) Google web search in private browsing with cleared cookies/history; (4) expert consultation, with contacts and recommendations documented in Excel.

Flow: 853 records identified; 199 duplicates removed; 654 records screened; 367 digital tools identified; 341 excluded (inaccessibility, non-digital format, or lack of ethical deliberation focus); 26 tools included.

Eligibility criteria: Inclusion required (a) explicit intent to facilitate ethical deliberation with user participation; (b) a digital/electronic interactive program, app, or piece of software; (c) accessibility online or via download, open access, in English. Exclusion criteria: not directly ethics-related; not intended for ethical deliberation; elevating the tool to decision authority or lacking user involvement; lacking moral/ethical reflection (purely data/assessment tools or prescriptive standards without critical reflection); non-digital or static formats (e.g., PDFs, images); lacking an interactive UI or serving only data collection; paywalled, unavailable, incomplete/prototype, or non-English.
Data extraction and coding: Directed content analysis. Tools were coded against the three moments of deliberation (recognize/understand, imagine options, assess/decide). Descriptive attributes were coded inductively: technology type, author, publication date, target audience, topic area, and individual versus group use. Mechanisms of ethical deliberation were likewise coded inductively (e.g., question prompts, visualization, resources, scenarios, feedback mechanisms, checklists, gamification, discussion forums, AI). For evidence of effectiveness, a three-step approach was used: (1) content analysis of tool websites/materials for evaluations; (2) a Google Scholar search (Incognito); (3) developer outreach for additional information. Coding disagreements were discussed and categories refined until agreement was reached.
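The record flow above involves two simple subtractions; a minimal sanity check of the reported counts (numbers taken directly from the flow described above, not from any external source):

```python
# Sanity check of the screening flow reported above (counts from the text).
records_identified = 853
duplicates_removed = 199
records_screened = records_identified - duplicates_removed  # 654, as reported

tools_identified = 367
tools_excluded = 341  # inaccessible, non-digital, or lacking a deliberation focus
tools_included = tools_identified - tools_excluded  # 26, as reported

assert records_screened == 654 and tools_included == 26
```

Both stages reconcile, so the flow diagram's totals are internally consistent.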

Key Findings

Sample characteristics (n=26 tools; 2010–2023):

  • Technology: 81% web-based (n=21).
  • Use mode: 73% intended for individual use (n=19); few enabled group use; only two used discussion forums.
  • Authorship: 54% universities (n=14); 15% EU Horizon 2020 (n=4).
  • Intended audiences: developers 31% (n=8), academics 23% (n=6), broad audiences 19% (n=5).
  • Topic domains: data usage 50% (n=13); technology development 42% (n=11); philosophy/ethics 42% (n=11); research integrity 27% (n=7); health 19% (n=5).

Deliberative mechanisms:
  • Question prompts used by all tools (100%, n=26) to stimulate and structure reflection (e.g., RRI Self-reflection Tool; Fairness Compass with prompts plus feedback/reporting).
  • Visualizations employed by 73% (n=19) for structure and engagement (e.g., Ethical Stack; Trolley Game illustrations).
  • Feedback mechanisms (n=14), resources/information (n=14), scenarios/case studies (n=11), and gamification (n=6) were frequent; combination examples include Dilemma Game (scenarios, voting feedback, expert resources, group play mechanics).
  • AI used as a mechanism in one tool: EDEN employs GPT-4-powered chatbots representing different ethical theories to deconstruct dilemmas.

Evidence of effectiveness/validation:
  • Normative grounding predominated: 85% (n=22) referenced ethical theories/principles/frameworks (e.g., Felicific Calculator operationalizing Bentham’s utilitarian calculus).
  • Peer-reviewed evidence for 58% (n=15) of tools: 10 examined quality of deliberation/user experience; 8 assessed outcomes (e.g., ethical awareness/sensitivity); some covered both. Universities authored 11 of these 15 tools.
  • Examples: Ethxpert improved comprehension of ethical issues but offered less support for choosing actions; Quandary’s mixed-methods studies showed gains in fact–opinion comprehension, perspective-taking, teacher satisfaction, and student engagement among school students.
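The percentages reported above are simple shares of the 26-tool sample; a quick check of a few headline figures (counts as reported, rounded to the nearest whole percent):

```python
# Headline counts from the findings, expressed as shares of the 26-tool sample.
TOTAL = 26
counts = {
    "web-based": 21,               # reported as 81%
    "individual use": 19,          # reported as 73%
    "university-authored": 14,     # reported as 54%
    "normative grounding": 22,     # reported as 85%
    "peer-reviewed evidence": 15,  # reported as 58%
}
shares = {label: round(100 * n / TOTAL) for label, n in counts.items()}
print(shares)  # {'web-based': 81, 'individual use': 73, ...}
```

Each reported percentage matches its count divided by the 26-tool total.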

Discussion

The mapping shows that digitization augments rather than replaces conventional deliberation mechanisms (e.g., scenarios, checklists), enabling their integration with digital affordances like interactive visualizations, gamification, and feedback. Such combinations can enhance user engagement, structure, personalization, documentation, and accessibility, potentially supporting community-level understanding and shared values. However, quantification (e.g., percentages of agreement) may risk reductionism, bias, and undue persuasiveness—technologies of hubris—if lacking transparency about whose perspectives are represented. Tools should instead cultivate technologies of humility that acknowledge ambiguity and context. Most tools target individual use, reflecting device-centric design and challenges of facilitating high-quality collective online deliberation (e.g., moderation, participant diversity). The corpus includes both dilemma-based approaches (binary choices highlighting value conflicts, e.g., trolley problems) and problematic approaches (process-oriented exploration across a decision pathway, e.g., RRI Self-Reflection Tool), each with context-dependent strengths and trade-offs. AI-enabled deliberation appears nascent: EDEN’s multi-perspective chatbot model avoids prescribing a single answer and can support autonomy by juxtaposing theories, yet it relies on faithful translation of ethical doctrines, raising trust and integrity considerations. Overall, tools must balance principled guidance with preserving users’ autonomy and avoid techno-paternalism or nudging users toward predetermined values. The findings underscore the need for rigorous evaluation of deliberative processes and impacts, especially as AI components become more prevalent.

Conclusion

This study provides a comprehensive mapping and taxonomy of 26 digital tools for ethical deliberation (2010–2023), detailing their mechanisms and validation strategies. It highlights how digital affordances can structure reflective analysis, facilitate stakeholder engagement, and document decisions, informing practice across policy, law, education, business ethics, and research integrity. A central ongoing challenge is balancing the use of ethical theories for guidance with protecting user autonomy and avoiding techno-paternalism. Future research should systematically evaluate AI-driven deliberation tools (e.g., NLP/chatbots) across domains, assessing ethical soundness, process quality, and outcomes, and develop approaches that favor transparency, inclusivity, and humility in quantification and feedback.

Limitations

  • Dynamic landscape: the search may not capture all existing or emergent tools; tools may become obsolete over time.
  • Language and access: only English-language, open-access tools were included, limiting generalizability to other contexts and languages.
  • Scope and taxonomy: small sample size (n=26); the list of mechanisms is not exhaustive; the taxonomy is provisional and does not delve into sub-types or detailed design strategies.
  • Methodological bias: qualitative content analysis may introduce coder bias; mitigated by two independent reviewers and consensus-building.
  • Format exclusions: non-interactive digital formats (e.g., PDFs, static images) and prototypes were excluded, potentially omitting some relevant approaches.