
A value-driven approach to addressing misinformation in social media

N. Komendantova, L. Ekenberg, et al.

Explore a groundbreaking framework for assessing misinformation detection tools on social media, developed through discussions with policymakers, journalists, and citizens across Austria, Greece, and Sweden. Discover how trust, accountability, and cultural influences shape our understanding of misinformation, as researched by Nadejda Komendantova, Love Ekenberg, Mattias Svahn, Aron Larsson, Syed Iftikhar Hussain Shah, Myrsini Glinos, Vasilis Koulolias, and Mats Danielson.

Introduction
The proliferation of misinformation on social media is a significant and complex policy challenge. While misinformation is not a new phenomenon, its spread has been amplified by the widespread adoption of social media. This study addresses the lack of robust scientific evidence on effective countermeasures. Existing approaches often fail to involve stakeholders in the design and development of tools to combat misinformation, treating users as passive recipients rather than active participants. This research aims to overcome that limitation by employing a value-based software engineering approach that integrates the preferences and values of diverse stakeholder groups. The study investigates two research questions: What are stakeholders' preferences, perceptions, and views regarding features of misinformation-combating tools? And how do these preferences vary across cultural backgrounds? The objective is to understand stakeholder preferences, analyze the influence of cultural background on them, and generate recommendations for developing effective tools against the spread of misinformation.
Literature Review
The paper surveys the varied definitions of misinformation, disinformation, and fake news, distinguishing intentional deception (disinformation) from unintentional misrepresentation (misinformation). It reviews the impact of misinformation on public discourse and examines existing research on addressing the problem across several disciplines, including the social sciences, psychology, and media studies. It addresses the "wicked problem" nature of misinformation, in which attempted solutions often exacerbate the problem. The paper summarizes cognitive-psychology research on the efficacy of corrections and warnings, highlighting challenges such as the backfire effect and the limitations of fact-checking approaches. Finally, the paper discusses several existing tools designed to combat misinformation, such as Botometer, Foller.me, and TinEye, and critiques their shortcomings: limited stakeholder involvement in their development, technical limitations, and a lack of integration of fact-checkers' perspectives. The literature review underlines the need for a value-based approach that actively involves stakeholders in the design process, drawing on principles of participatory governance and value-based software engineering.
Methodology
The study employed a mixed-methods approach combining co-creation workshops and interviews. Stakeholders from three groups (journalists/fact-checkers, policymakers, and citizens) participated in workshops in Austria, Greece, and Sweden. The workshops aimed to elicit perceptions of misinformation, evaluate existing tools, and gather feedback on desired features of new tools. Sampling proceeded by contacting organizations representing the stakeholder groups, and the workshops followed a consistent format across all three countries to minimize bias. Each workshop comprised several sessions, including discussions, feature evaluations, and ranking exercises, with participants ranking features against three criteria: trust, critical thinking, and transparency. A multi-criteria decision framework, implemented in the DecideIT 3.1 software, was used to analyze the resulting preferences. A cardinal ranking approach (P-CAR) represented the ranking statements, enabling multi-attribute value aggregation across stakeholders, and DecideIT evaluated the ranking data to determine the overall desirability of each feature. The methodology accommodates incomplete and imprecise data and provides a robustness measure to assess how sensitive the results are to variations in the input.
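The paper does not publish DecideIT's internal algorithms, so the sketch below only illustrates the general flavor of turning ordinal rankings into a multi-attribute value score. It uses rank-order centroid (ROC) surrogate weights, a standard technique in multi-criteria decision analysis that is not necessarily what P-CAR or DecideIT compute; the feature names and scores are hypothetical.

```python
# Illustrative sketch of rank-based multi-attribute aggregation.
# NOT DecideIT's actual algorithm: ROC surrogate weights stand in for
# the cardinal ranking (P-CAR) machinery, and all scores are invented.

def roc_weights(n: int) -> list[float]:
    """Rank-order centroid weights: w_k = (1/n) * sum_{i=k}^{n} 1/i."""
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

# The three criteria used in the workshops, here ranked (hypothetically)
# in this order of importance.
criteria = ["trust", "critical thinking", "transparency"]
weights = dict(zip(criteria, roc_weights(len(criteria))))

# Hypothetical 0-1 value scores of two candidate tool features per criterion.
features = {
    "show claim flag reason":   {"trust": 0.9, "critical thinking": 0.7, "transparency": 0.8},
    "post refutation yourself": {"trust": 0.4, "critical thinking": 0.6, "transparency": 0.5},
}

# Additive multi-attribute value: V(f) = sum over criteria of w_c * v_c(f).
for name, scores in features.items():
    value = sum(weights[c] * scores[c] for c in criteria)
    print(f"{name}: {value:.3f}")
```

For three criteria the ROC weights come out to roughly 0.61, 0.28, and 0.11, so a feature's overall score is dominated by how it performs on the top-ranked criterion; DecideIT's interval-based evaluation additionally handles the imprecision that fixed weights like these cannot.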
Key Findings
The study revealed several key findings. First, participants across all stakeholder groups and countries preferred passive rather than active roles in combating misinformation. The most highly ranked features concerned the source, spread, and timeline of misinformation: participants valued knowing "why and when" a claim was flagged, "how it spreads and by whom," and the "life cycle" of a misinformative post, indicating a strong emphasis on tracing the origin and transmission of misinformation. Second, although participants wanted to be informed about misinformation, they did not prioritize features requiring active engagement, such as self-notification or posting refutations; this passive stance may reflect a lack of perceived personal responsibility or a belief that dealing with misinformation is someone else's task. Third, cultural context significantly shaped preferences. While the top three features were consistent across groups, the relative ranking of other features varied notably among Austria, Greece, and Sweden; for example, Austrian citizens prioritized credibility indicators more than participants in Greece or Sweden, who favored features relating to the timeline and spread of misinformation. Fourth, preferences also differed across stakeholder groups within each country, though less markedly than across countries. Together, the findings highlight a complex interplay between cultural background, individual attitudes, and preferences for specific tool features, and confirm that effective tools must take these factors into account.
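The study reports these cross-country ranking differences qualitatively. One conventional way to quantify how similar two countries' feature rankings are, which is an illustration rather than the authors' method, is a rank correlation such as Kendall's tau; the feature list and rank positions below are invented for the sketch.

```python
# Hypothetical illustration of quantifying cross-country ranking
# differences with Kendall's tau rank correlation. Feature names and
# rank positions are invented; the study reports its comparisons
# qualitatively, not via this statistic.
from scipy.stats import kendalltau

features = ["flag reason", "spread and by whom", "life cycle",
            "credibility indicators", "self-notification"]

# Hypothetical rank positions (1 = most preferred) for each feature above.
austria = [1, 2, 3, 4, 5]
sweden = [1, 3, 2, 5, 4]

tau, p_value = kendalltau(austria, sweden)
print(f"Kendall's tau over {len(features)} features = {tau:.2f} (p = {p_value:.2f})")
# tau close to 1 means the two countries rank the features similarly;
# tau near -1 means their orderings are reversed.
```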
Discussion
The findings address the research questions by demonstrating clear preferences for features that help users understand the origin and spread of misinformation rather than for active engagement in counteracting it. The emphasis on tracing the source and timeline aligns with the importance of context and coherence in assessing information veracity. The results also highlight a need to understand the reasons behind this preference for passive engagement; that understanding could inform the design of more effective tools, for example by fostering a greater sense of personal responsibility among users or by offering incentives for active participation. The observed cultural variations underscore the importance of tailoring tools to specific contexts, adapting features to resonate with local values and information-seeking behaviors. Taken together, the results call for a more nuanced approach to developing and deploying anti-misinformation tools, one that goes beyond purely technological solutions and integrates broader societal considerations.
Conclusion
This study provides a valuable framework for evaluating misinformation detection tools using a value-driven approach that incorporates diverse stakeholder perspectives. The findings highlight the importance of considering both cultural context and user preferences in the design and development of such tools. Future research should investigate the reasons underlying the passive attitude towards countering misinformation and explore ways to design tools that promote more active engagement. Further research could also examine more deeply how cultural factors shape preferences for specific features, exploring the link between cultural values and information-seeking behaviors. Developing culturally sensitive tools is essential for effective misinformation mitigation.
Limitations
The study's limitations include its focus on three European countries, which may limit the generalizability of the findings to other cultural contexts. The specific stakeholder groups included may not fully represent the diversity of opinions within each country. Self-reported preferences may be subject to response biases, and relying on a specific decision-support tool may introduce biases tied to that tool's features and functionality. Future research could address these limitations by expanding the geographical scope, including a broader range of stakeholders, and exploring alternative methods for eliciting preferences.