
A value-driven approach to addressing misinformation in social media

N. Komendantova, L. Ekenberg, et al.

Explore a groundbreaking framework for assessing misinformation detection tools on social media, developed through discussions with policymakers, journalists, and citizens across Austria, Greece, and Sweden. Discover how trust, accountability, and cultural influences shape our understanding of misinformation, as researched by Nadejda Komendantova, Love Ekenberg, Mattias Svahn, Aron Larsson, Syed Iftikhar Hussain Shah, Myrsini Glinos, Vasilis Koulolias, and Mats Danielson.

Introduction
The study addresses the rapidly evolving challenge of misinformation on social media, a phenomenon amplified by widespread internet access and platform dynamics. Building on prior work in psychology, journalism, and information science, the authors note limited evidence on effective countermeasures and the need for stakeholder-inclusive approaches to tool design. The research is guided by two questions: (1) What are preferences for, perceptions of, and views of the features of tools for dealing with misinformation? (2) How do these preferences depend on the cultural backgrounds of stakeholder groups and participants? The purpose is to elicit and analyse stakeholder preferences for features of misinformation-mitigating tools, assess cultural influences on these preferences, and derive recommendations for tool development. The importance lies in moving beyond top-down tool creation toward value-based, co-created solutions that can foster trust, accountability, and usability across diverse user groups.
Literature Review
Background literature distinguishes misinformation (misleading without intent to harm) from disinformation (deliberately deceptive) and outlines taxonomies spanning source, story, and context (Wardle, 2016; Wardle & Derakhshan, 2017; Burgoon et al., 2003; Farrel et al., 2018; Giglietto et al., 2016). Psychological research shows mixed efficacy of corrections and warnings: explicit, specific warnings reduce but do not eliminate misinformation’s influence (Ecker et al., 2010); repetition increases perceived accuracy (Pennycook et al., 2018); myth-versus-fact formats can inadvertently reinforce myths (Schwarz et al., 2016); and debunking can sometimes amplify misinformation effects (Chan et al., 2017). Cognitive limits and social network structures shape information diffusion (Lerman, 2016). The review also surveys existing counter-misinformation tools (e.g., Botometer, Foller.me, TinEye, Rbutr, Fakespot, NewsGuard, Greek Hoaxes Detector, DejaVu, Social Sensor), noting their purposes and common limitations: limited user participation in design, narrow stakeholder involvement, browser support constraints, and insufficient integration of fact-checkers’ practices. These issues reduce transparency and user trust, warranting evaluation in collaborative settings. Participatory governance and value-based software engineering (VBSE) are positioned as promising approaches to incorporate stakeholder values into tool design. Value-oriented prioritization (VOP) can elicit user value-in-use but benefits from more flexible decision-analytic methods for aggregating multi-stakeholder preferences.
Methodology
The study employed a co-creation process using workshops and interviews to elicit stakeholder preferences and to evaluate tool features within a multi-criteria decision-analytic framework. Three rounds of workshops were conducted:

1. Tokyo, September 2018: a multi-stakeholder workshop with 103 participants (11 government CIOs, 65 public officials, 8 journalists, 8 executives of international organizations, 9 private-sector executives, and 2 policymakers) to assess the societal impacts of misinformation and possible mitigation strategies.
2. Country workshops in Austria (Vienna), Sweden (Stockholm), and Greece (Athens), February to March 2019, to assess needs, trust in sources, perceptions, and policy/tool recommendations (Vienna: 21 mixed stakeholders; Stockholm: 16, comprising 4 journalists, 5 policymakers, and 7 citizens; Athens: 31, comprising 9 journalists, 9 policymakers, and 13 citizens).
3. Country workshops in November 2019 to rank and discuss tool features and to conduct Multi-Criteria Decision Analysis (MCDA) and focus groups (Stockholm: 15, comprising 3 journalists, 1 policymaker, and 11 citizens; Athens: 19, comprising 6 citizens, 7 journalists, and 6 policymakers; Vienna: 16, comprising 5 citizens, 6 journalists, and 5 policymakers).

Recruitment used standardized desktop research, invitations, follow-ups, and reminders, and workshop formats and protocols were harmonized across countries. The feature set evaluated comprised: Awareness (F1); Why and when a claim is flagged (F2); How it spreads and by whom (F3a); Life cycle/timeline (F3b); Sharing over time, as an account score (F4a); How misinformative an item is, as an item score (F4b); Instant feedback on arrival (F5a); Inform on consistently misinformative accounts (F5b); Self-notification on repeated sharing (F5c); Credibility indicators (F6); Post support or refute (F8); Tag veracity (F9); and Platform feedback (F10). Participants produced three rankings of the features, one under each of the criteria Trust, Critical thinking, and Transparency, and then ranked the criteria by relative importance.

Elicitation and evaluation used DecideIT 3.1 with a rank-based approach (P-CAR) that transforms rankings into calibrated surrogate imprecise value statements, represented as linear inequalities with interval bounds and focal points, enabling conventional multi-attribute value aggregation under severe uncertainty. The evaluation considered the full range of output values and computed the proportion of feasible information under which one feature outranks another, providing robustness measures. Aggregation across stakeholders and criteria yielded overall feature desirability and sensitivity indicators.
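The P-CAR elicitation in DecideIT is specific to that tool, but the underlying idea, turning ordinal rankings into surrogate weights and then checking how robust pairwise comparisons remain under the residual imprecision, can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' P-CAR algorithm: it uses rank-order-centroid (ROC) weights as focal points, samples criteria weight vectors consistent with the stated rank order, and reports the share of sampled weightings under which one feature outranks another. The helper names (roc_weights, outranking_proportion) and the per-criterion feature values are hypothetical.

```python
import numpy as np

def roc_weights(n):
    """Rank-order-centroid surrogate weights for n ranked criteria (rank 1 = most important).
    Used here as a stand-in focal point, not as the P-CAR calibration itself."""
    return np.array([sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)])

def sample_rank_consistent_weights(n, rng):
    """Sample a weight vector respecting w1 >= w2 >= ... >= wn with sum 1."""
    return np.sort(rng.dirichlet(np.ones(n)))[::-1]

def outranking_proportion(values_a, values_b, n_criteria, n_samples=10_000, seed=0):
    """Share of sampled rank-consistent criteria weightings under which feature A's
    additive value exceeds feature B's; a simple robustness indicator."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_samples):
        w = sample_rank_consistent_weights(n_criteria, rng)
        if w @ values_a > w @ values_b:
            wins += 1
    return wins / n_samples

# Hypothetical per-criterion values on a 0-1 scale for two features, with the
# criteria ranked Trust > Critical thinking > Transparency.
f2  = np.array([0.90, 0.80, 0.85])  # "Why and when a claim is flagged"
f5c = np.array([0.40, 0.50, 0.45])  # "Self-notification on repeated sharing"

print("Focal (ROC) weights:", roc_weights(3).round(3))
print("P(F2 outranks F5c):", outranking_proportion(f2, f5c, n_criteria=3))
```

In the study itself, both the criteria weights and the feature values were elicited as imprecise statements with interval bounds and focal points; the sampling above only varies the weights and is meant to convey the "proportion of feasible information" idea rather than reproduce the published analysis.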
Key Findings
- Across all stakeholders and countries, participants prioritized information about the actors and dynamics behind misinformation: Feature 2 (Why and when) was most valued, while Features 3a (How it spreads and by whom) and 3b (Life cycle/timeline) were essentially equal and next in preference. These results were robust for F2, with only small differences between F3a and F3b.
- Features requiring active user contributions ranked lower. Low-priority examples in the aggregate analysis included Feature 5c (Self-notification of repeatedly sharing misinformation) and Feature 6 (Credibility indicators), indicating a generally passive stance toward intervention beyond being informed.
- Roundtable discussions (Table 1) showed cross-country consensus on the need for collaboration among stakeholders and for tools that support education and awareness. An automatic correction mechanism for validating information was the least desired option.
- Citizens (Table 2; Fig. 2): Preferences varied by country. Vienna citizens favored Feature 6 (Credibility indicators) and Awareness, while Athens ranked Credibility indicators lowest. Stockholm and Athens citizens preferred Feature 3b (Life cycle) and Feature 4a (Sharing over time). Aggregated across countries, citizens most preferred Features 3b and 9 (Tag veracity), followed by Feature 2 (a toy illustration of such cross-country aggregation follows this list).
- Journalists (Table 3; Fig. 3): The highest-ranked features were Feature 2 (Why and when) and Feature 3a (How it spreads and by whom) in Stockholm and Athens, aligning with the overall results. Vienna journalists differed, placing higher value on Feature 4b (How misinformative an item is), Feature 5c (Self-notification), and Feature 6 (Credibility indicators), while ranking Why/when and spread lower than the other countries.
- Policymakers (Table 4; Fig. 4): There were no analyzable Stockholm data (only one policymaker participated). In Athens, policymakers prioritized Feature 3b (Life cycle) and Credibility indicators; in Vienna, they prioritized Feature 2 (Why and when) and also valued features such as Post support/refute and Tag veracity. Aggregated, the most preferred features included Feature 2 and Feature 4a (Sharing over time).
- Country-level differences (Table 5): Austrian stakeholders favored Credibility indicators (and Awareness); Greek stakeholders favored Life cycle/timeline (and related features such as Sharing over time); Swedish stakeholders favored Why and when and How it spreads and by whom (and, among citizens, also Life cycle and Inform on consistent accounts).
- Overall, participants valued insight into the timing, spread, and provenance of claims (when, who, how), reflecting an emphasis on trust, accountability, and quality in information ecosystems and journalism.
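The cross-country aggregates reported above come from the decision-analytic evaluation in DecideIT, but the basic intuition of combining several groups' ordinal rankings into one aggregate order can be conveyed with a simple Borda-style tally. The sketch below uses hypothetical rankings and feature labels purely for illustration; it is not the aggregation procedure the authors applied.

```python
# Illustrative only: combining hypothetical per-country citizen rankings with a
# simple Borda-style tally. The study itself aggregated preferences with the
# MCDA framework described in the Methodology, not with Borda counts.
rankings = {
    "Vienna":    ["F6", "F1", "F3b"],   # most to least preferred (hypothetical)
    "Stockholm": ["F3b", "F4a", "F6"],
    "Athens":    ["F3b", "F4a", "F6"],
}

scores = {}
for order in rankings.values():
    n = len(order)
    for position, feature in enumerate(order):
        scores[feature] = scores.get(feature, 0) + (n - position)  # Borda points

aggregate = sorted(scores.items(), key=lambda item: item[1], reverse=True)
print(aggregate)  # [('F3b', 7), ('F6', 5), ('F4a', 4), ('F1', 2)]
```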
Discussion
Findings answer the research questions by identifying concrete stakeholder preferences for tool features and highlighting cultural influences on these preferences. Participants consistently valued transparency about why/when items are flagged, who flagged them, how misinformation spreads, and a post’s life cycle—information that supports trust and accountability assessments. Conversely, features requiring active user intervention (e.g., self-notification, posting rebuttals) were de-emphasized, indicating a preference for passive, informative support rather than participatory correction. Cross-country comparisons suggest cultural context shapes feature priorities more than stakeholder role, implying that tool adoption and efficacy may depend on tailoring to local norms and expectations. The emphasis on spread dynamics aligns with Allport and Postman’s rumour framework and with truth-assessment heuristics: participants sought coherence, credible sources, and social consensus indicators (e.g., number of fact-checkers). Practically, results recommend that tool designers prioritize provenance, spread analysis, and timeline features; improve transparency of detection and labeling; and integrate tools with broader interventions (awareness campaigns, media literacy) to overcome passivity and support active reasoning. Flexibility is critical, as a single, context-independent tool is unlikely to satisfy diverse preference structures.
Conclusion
The paper introduces and demonstrates a value-driven, decision-analytic framework for eliciting and aggregating stakeholder preferences to inform the design of misinformation-mitigating tools. Applying multi-criteria decision analysis (DecideIT with P-CAR) across workshops in Austria, Greece, and Sweden, the study shows participants prioritize features that expose why/when items are flagged, how misinformation spreads, and a post’s life cycle, while features requiring active user involvement are less desired. The work contributes a co-creation methodology that handles incomplete and rank-based inputs, supports robustness analysis, and facilitates stakeholder communication and negotiation. Recommendations include building flexible, transparent tools; complementing automation with societal awareness campaigns and media/news literacy; fostering cross-sector teams; and considering regulatory measures to increase platform transparency. Future research should examine causal cultural factors behind differing preferences and investigate interventions to shift users from passive information consumption toward active correction behaviors, as well as explore integration of preferred features from existing tools with newly prioritized functionalities.
Limitations
- Scope and cultural causality: It was beyond the study's scope to explain why cultural differences emerged or how specific cultural factors influenced preferences.
- Geographic and sample coverage: Workshops were limited to three European countries (northern, central, and southern Europe) and could not include every country; generalization beyond Europe is uncertain.
- Stakeholder representation: Policymaker data were incomplete in Sweden (only one policymaker participated, yielding no analyzable results).
- Feature focus: The third workshop did not specifically elicit preferences for features of existing tools, though participants expressed a desire to include key features from such tools.
- Participant stance and evidence sensitivity: The preference for passive features may reflect limited user familiarity with misinformation tools, and robustness analyses indicated that some results were sensitive.
- Data access: Individual-level elicitation data cannot be shared publicly for privacy reasons, limiting external verification of detailed inputs.