
Political Science

Social corrections act as a double-edged sword by reducing the perceived accuracy of false and real news in the UK, Germany, and Italy

F. Stoeckel, S. Stöckli, et al.

Discover how social corrections shape the way we perceive and engage with news on social media. In this pre-registered study of 6,621 participants across the UK, Italy, and Germany, authors Florian Stoeckel, Sabrina Stöckli, Besir Ceka, Chiara Ricchi, Ben Lyons, and Jason Reifler examine what happens when users flag both true and false news as inaccurate.

~3 min • Beginner • English
Introduction
The study investigates whether and how social corrections (comments from ordinary users labelling a post as inaccurate) affect perceptions of accuracy and engagement with both false and true news on social media across three European countries (the UK, Italy, and Germany). Key questions include generalizability beyond US and health contexts; whether the format and strength (amplification) of corrections matter; whether miscorrections (flagging true news as false) similarly influence perceptions; and whether individual differences (anti-expert sentiments, cognitive reflection, susceptibility to social influence) moderate the effects. The authors hypothesize that social corrections will reduce the perceived accuracy of false news (H-CorrectFalse) and that stronger amplification (e.g., more likes, multiple corrective comments, or a fact-checking link) will increase this effect. They also hypothesize that miscorrections will reduce the perceived accuracy of true news (H-MiscorrectTrue) and that amplification will likewise strengthen these effects.
Literature Review
Prior work shows that peer corrections on social media can reduce belief in and sharing of misinformation, especially in health contexts and primarily with US samples (e.g., Bode & Vraga; Walter et al.). Social commentary can shape beliefs even more than professional news sources, likely due to the salience of social cues online. However, generalizability across topics (e.g., political, climate, technology) and countries is less clear. Research varies in how it operationalizes social corrections—from minimal cues (emojis, single comments) to detailed comments with links. There is concern that miscorrections of true information can lower belief in accurate claims (Bode & Vraga). Theoretical perspectives suggest multiple potential mechanisms: central processing where argument strength matters, heuristic reliance where simple cues and social endorsements influence judgments, and normative conformity pressures. Individual differences (anti-expert sentiment, cognitive reflection, susceptibility to social influence) have been proposed as moderators, but evidence is mixed. The authors aim to extend this literature by testing effects across countries, topics, formats, and by examining miscorrections and potential moderators.
Methodology
Design: Three pre-registered, online, within-subjects experiments were conducted between July 2022 and February 2023 with Dynata samples in the UK (N=1,944), Italy (N=2,467), and Germany (N=2,210). Respondents first completed pre-treatment batteries (demographics; anti-expert sentiments; the cognitive reflection test, CRT; susceptibility to social influence, SSI). Each participant then evaluated nine simulated social media posts (Facebook, Twitter/X, Instagram) that included both false and true news across topics (health, climate, technology, migration, etc.), with country-specific ordering ensuring exposure to both veracities.

Stimuli: Posts were adapted from real-world content and rendered with a social media post simulator. False and true posts appeared either with no comments (control) or with user comments implementing the correction/miscorrection treatments. Conditions varied by country but included: (1) low amplification (e.g., a single corrective/miscorrective comment with few likes, or paired with one supporting comment), (2) high amplification (e.g., many likes, or multiple corrective/miscorrective comments with supporting comments), and (3) correction/miscorrection with link (a corrective or miscorrective comment containing a link to a fact-checking or bolstering website). The UK operationalized amplification through the number of likes; Italy and Germany through the number of corrective/miscorrective comments. Links pointed to fact-checking debunks for false posts and to sites bolstering the miscorrection for true posts. Germany additionally tested a source cue (media outlet logo) in the original post for true news.

Measures: After each post, respondents reported (1) perceived accuracy (1–4 scale), (2) likelihood to like, and (3) likelihood to share (both 1–5).

Moderators: anti-expert sentiments (3 items; α≈0.68–0.76), CRT (4 items; number correct), and SSI (7 items; α≈0.94–0.95). A manipulation check in the UK confirmed that low vs. high amplification (e.g., 10 vs. 184 likes) was perceived as intended.

Analysis: Pre-registered linear mixed-effects models estimated treatment effects with random intercepts for respondents and posts (for true news in the UK, the post-level random intercept was omitted because all respondents saw the same three true posts). The main specification was: outcome ~ treatment + covariates (gender, age, education) + (1|respondent) + (1|post). Interaction models tested moderation by anti-expert sentiments, CRT, and SSI. Robustness checks excluded respondents who failed attention checks and controlled for congeniality; results were substantively similar.

Ethics approval was obtained (University of Exeter), GDPR compliance was assured, and consent was collected. Materials, data, code, and preregistrations are available on OSF (UK: https://osf.io/fpm2e; materials/data/code: https://osf.io/4hjcf; Italy prereg: https://osf.io/upzm8; materials/data: https://osf.io/yvdj4; Germany prereg: https://osf.io/rfq6h; materials/data: https://osf.io/jhwfg).
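To make the analysis specification concrete, the sketch below shows how a model of this form could be fit with Python's statsmodels. It is not the authors' code (their materials are on OSF): the file name, the DataFrame df, and the column names (accuracy, condition, respondent, post, gender, age, education, anti_expert) are hypothetical, and "control" stands for the no-comment condition.

```python
# Minimal sketch of the pre-registered specification
#   outcome ~ treatment + gender + age + education + (1|respondent) + (1|post)
# File name and column names are hypothetical; the authors' own code is on OSF.
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent-post evaluation (long format).
df = pd.read_csv("false_news_long.csv")

# statsmodels fits crossed random intercepts by treating the whole sample as a
# single group and declaring respondent and post as variance components.
df["one_group"] = 1
vc = {"respondent": "0 + C(respondent)", "post": "0 + C(post)"}

model = smf.mixedlm(
    "accuracy ~ C(condition, Treatment('control')) + gender + age + education",
    data=df,
    groups="one_group",
    vc_formula=vc,
    re_formula="0",  # all random structure comes from the variance components
)
result = model.fit()
print(result.summary())  # coefficients on the condition dummies are the treatment effects

# A moderation model (e.g., by anti-expert sentiment) would add an interaction:
#   "accuracy ~ C(condition, Treatment('control')) * anti_expert + gender + age + education"
```

Note that the (1|respondent) + (1|post) notation in the main specification is lme4-style (R) syntax, where crossed random intercepts are fit directly with lmer(); the single-group variance-components workaround above is the statsmodels equivalent and can be slow with samples of roughly 2,000 respondents per country.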
Key Findings
- Social corrections reduce the perceived accuracy of false news across countries. UK: low amplification B = −0.10 (SE 0.02), high −0.13 (0.02), link −0.11 (0.02), all p < 0.001. Germany: low −0.10 (0.03), high −0.14 (0.03), link −0.16 (0.03), all p < 0.001. Italy: high −0.12 (0.02), link −0.12 (0.02), p < 0.001; low amplification not significant (p = 0.07).
- Engagement metrics mirror the accuracy effects for false news: likelihood to like and share decreased under corrective conditions in all three countries.
- Social miscorrections reduce the perceived accuracy of true news. UK: high amplification B = −0.17 (0.03), p < 0.001; link −0.10 (0.03), p = 0.001; low amplification not significant (p = 0.069). Italy: low −0.08 (0.02), high −0.12 (0.02), link −0.07 (0.02), all p ≤ 0.001. Germany: high without source cue −0.14 (0.02), with cue −0.10 (0.02), link −0.13 (0.02), all p < 0.001.
- Engagement with true news also declined under miscorrections (liking and sharing outcomes showed similar patterns).
- No consistent evidence that stronger amplification (more likes, multiple corrective comments, or fact-checking links) yields significantly larger effects; pairwise differences across correction formats were not consistently significant.
- No evidence of moderation by anti-expert sentiments, cognitive reflection (CRT), or susceptibility to social influence (SSI) on perceived-accuracy effects.
- Effects varied little across post topics, and pooled analyses did not show reliable differences in treatment effects by veracity (true vs. false).
Discussion
The findings confirm that user-generated corrective comments effectively reduce both perceived accuracy and engagement with false news across diverse topics and three European countries, supporting the generalizability of social corrections beyond prior US/health-focused work. However, the same mechanism can be detrimental when applied to accurate information: miscorrections significantly lower perceived accuracy and engagement with true news, highlighting a double-edged sword. The absence of consistent amplification advantages suggests that simple corrective cues can be as effective as more elaborate ones (e.g., with links), lowering the barrier for participation but also increasing the risk of harm when users falsely challenge accurate posts. The lack of moderation by anti-expert sentiment, CRT, or SSI suggests that effects are broadly parallel across audiences and may be driven less by deep argument scrutiny or normative compliance and more by the salience of follow-up negations and recency effects in social media contexts. These results have practical implications for platform design and user education, underscoring the need to consider both the benefits and risks of encouraging peer corrections.
Conclusion
Across three pre-registered experiments in the UK, Italy, and Germany, social corrections reliably reduced perceived accuracy and engagement with false news. These effects did not depend systematically on the strength or format of the correction and were not moderated by anti-expert sentiments, cognitive reflection, or susceptibility to social influence. Conversely, miscorrections applied to true news also reduced perceived accuracy and engagement, indicating potential harm from erroneous corrective cues. The study contributes cross-national evidence on the effectiveness and risks of social corrections, suggesting that simple cues suffice to influence judgments. Future research should test these dynamics in field or large-scale observational settings, further unpack mechanisms and boundary conditions, and explore how to mobilize accurate corrective behavior while minimizing miscorrections.
Limitations
- Outcomes are self-reported in an experimental setting, which may limit ecological validity relative to in-platform behavior; ethical and practical constraints precluded running the experiments directly on social platforms.
- The design asked about both accuracy and sharing intentions, and prior work suggests this could interfere with truth discernment; the most externally valid measurement approach remains unresolved.
- While multiple topics were included, the stimuli and operationalizations, as well as the sequence constraints, may still limit generalizability to real-world feeds and interactions.
- Mechanisms were inferred from patterns (e.g., the lack of moderation and amplification effects); direct tests of processing routes were not conducted.
- Mobilization strategies for encouraging accurate social corrections without increasing miscorrections remain untested.