Liars know they are lying: differentiating disinformation from disagreement
S. Lewandowsky, U. K. H. Ecker, et al.
The paper addresses whether misinformation and willful disinformation can be reliably identified and distinguished from legitimate political disagreement, and why doing so matters for democracies. It situates the problem in the context of recent U.S. elections, highlighting how the 'big lie' about the 2020 presidential election illustrates deliberate attempts to mislead the public. The authors argue that disinformation undermines common knowledge necessary for democratic accountability, corrodes trust in institutions, and can inflame anti-democratic behaviors. They emphasize the audience’s role, including participatory propaganda and feedback loops between elites and the public, which amplify and entrench false beliefs. The purpose is to rebut political critiques that portray misinformation research as censorship and scholarly critiques that question the identifiability or prevalence of misinformation, while outlining evidence-based, rights-preserving countermeasures.
The authors synthesize research on the political and societal landscape of mis- and disinformation. They review evidence of ideological asymmetries in the U.S. showing that conservatives and the populist right are more likely than liberals to consume, share, and believe false information, driven partly by media-ecosystem structures (e.g., dense right-wing clusters) and elite behavior (Benkler et al., 2018; González-Bailón et al., 2023; Lasser et al., 2022; Greene, 2024). Health domains show similar asymmetries: conservatism predicts susceptibility to health misinformation (Nan et al., 2022); after vaccines became widely available, excess death rates among Republicans were up to 43% higher than among Democrats (Wallace et al., 2023); and Fox News consumption has been causally linked to lower vaccination rates (Pinna et al., 2022), with an additional Trump-specific effect (Jung & Lee, 2023). The review notes caveats: asymmetries are not absolute, vary by topic and country, and are less evident among mainstream politicians in the UK and Germany (except the German far right). Beyond the U.S., the Global South exhibits distinct dynamics: encrypted messaging platforms, cyber-armies, and power asymmetries vis-à-vis multinational tech firms; notable harms include WhatsApp rumors in India that led to mob lynchings. The paper also reviews the politicization of misinformation research in the U.S., including congressional attacks alleging a 'Censorship Industrial Complex', platform retrenchment of moderation resources, and legal challenges, juxtaposed with evidence refuting claims of anti-conservative bias on platforms and highlighting algorithmic amplification and platform curation. Finally, the authors situate critiques of truth claims within postmodernist and contemporary political rhetoric, arguing for the necessity of distinguishing falsehoods from good-faith contestation.
This is a conceptual and integrative analysis rather than a primary empirical study. The authors employ: (1) narrative synthesis of interdisciplinary literature (political science, psychology, communication, law, and computational social science) on misinformation prevalence, effects, and detection; (2) comparative case analyses to differentiate legitimate disagreement from misinformation (e.g., COMPAS algorithm fairness trade-offs; COVID-19 vaccine prioritization of essential workers versus the elderly), showing how value-laden policy debates can be resolved without resorting to falsehoods; (3) evidentiary review of empirical methods for identifying misinformation and deceptive intent, including crowdsourced fact-checking accuracy and its constraints, linguistic and emotional 'fingerprints' of misinformation, and machine-learning classifiers for deception detection and content credibility; and (4) legal and documentary evidence to infer intentionality (e.g., discrepancies between internal corporate documents and public statements in the tobacco and fossil-fuel industries; contradictions between legal pleadings and public claims in election litigation; media discovery in defamation suits). Collectively, these approaches illustrate operational pathways for identifying misinformation and willful disinformation.
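The crowdsourced fact-checking pathway in (3) can be illustrated with a minimal sketch: average each item's lay ratings into a crowd score, then check the correspondence of those scores with professional fact-checker ratings. The data, rating scale, and function names below are hypothetical illustrations, not material from the paper.

```python
# Minimal sketch (hypothetical data): aggregating lay accuracy ratings and
# checking their correspondence with professional fact-checker ratings.
from statistics import mean

def crowd_scores(ratings_per_item: list[list[float]]) -> list[float]:
    """Average each item's lay ratings into one crowd score."""
    return [mean(r) for r in ratings_per_item]

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation, computed from scratch for self-containment."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sdx = sum((a - mx) ** 2 for a in x) ** 0.5
    sdy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sdx * sdy)

# Hypothetical example: four headlines, three lay raters each,
# and professional ratings on the same 1-7 accuracy scale.
lay = [[2, 1, 2], [6, 7, 6], [3, 2, 2], [7, 6, 7]]
expert = [1.0, 7.0, 2.0, 6.5]
crowd = crowd_scores(lay)
```

In the crowdsourcing studies the paper reviews, political balance of the rater pool and item selection matter as much as raw correspondence, so a high correlation alone does not validate a crowd.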
- Misinformation is identifiable: linguistic/emotional cues differentiate false from reliable content; misinformation tends to be less cognitively complex and more negative/affective. Computational models combining such cues can classify deceptive news-like content with over 83% accuracy; across 81 ML deception-detection studies, many exceed 80–90% accuracy.
- Crowdsourced judgments can effectively identify misinformation at scale, with high correspondence to professional ratings and up to 97% accuracy in some community fact-checking of COVID-19 content, though political balance and selection issues matter.
- Willful disinformation (intent) can be inferred via: (a) NLP-based deception patterns (e.g., classifiers exceeding 90% accuracy on Trump tweets labeled by independent fact-checks); (b) discrepancies between internal documents and public messaging (the tobacco industry's denial of known harms; ExxonMobil's internal climate projections versus its public doubt campaigns); (c) legal/forensic contrasts (e.g., Trump's lawyers disclaimed fraud in court while alleging it publicly; attorneys were sanctioned; Sidney Powell pleaded guilty; Giuliani made concessions; Fox News paid a $787.5M settlement to Dominion after discovery showed executives and hosts knew the claims were false).
- Political asymmetry in the U.S.: conservatives/populist right consume/share more low-quality or false content; Facebook news exposure analyses show a conservative-only ecosystem segment harboring most misinformation; Republican politicians share more low-quality sources on social media than Democrats.
- Health consequences: After vaccines became widely available, excess death rates among Republicans were up to 43% higher than among Democrats in Ohio/Florida; Fox News consumption causally linked to lower vaccination; a distinct Trump effect further reduced vaccination.
- Public belief anchoring: By Aug 2023 nearly 70% of Republican voters questioned the 2020 election’s legitimacy; many believed there was solid evidence despite repeated debunking; belief levels remained high even under conditions reducing expressive responding.
- Platform dynamics: Claims of anti-conservative bias are not supported by engagement and amplification data; algorithms amplify captivating/negative content, shaping information diets and potentially incentivizing low-quality, outrage-evoking material.
- Legitimate contestation vs misinformation: Complex policy disputes (e.g., COMPAS fairness metrics; vaccine allocation equity vs mortality benefits) can be adjudicated through values and evidence without resorting to falsehoods, underscoring that disagreement need not entail misinformation.
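The linguistic/emotional-cue pathway in the first bullet can be sketched as a toy scorer that operationalizes two cues the review associates with misinformation: higher negative affect and lower cognitive complexity. The word list, the type-token-ratio proxy, and the cutoffs below are hypothetical illustrations, not the actual classifiers the paper cites.

```python
# Illustrative sketch (not the reviewed models): flag text that is high in
# negative-affect vocabulary and low in lexical complexity. The lexicon and
# thresholds are hypothetical; real systems use validated lexicons (e.g.,
# LIWC-style categories) and trained classifiers over many such features.

NEGATIVE_WORDS = {"fraud", "steal", "rigged", "corrupt", "lie", "fake", "disaster"}

def cue_features(text: str) -> dict:
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return {"negative_ratio": 0.0, "type_token_ratio": 0.0}
    negative_ratio = sum(w in NEGATIVE_WORDS for w in words) / len(words)
    # Type-token ratio as a crude proxy for lexical/cognitive complexity.
    type_token_ratio = len(set(words)) / len(words)
    return {"negative_ratio": negative_ratio, "type_token_ratio": type_token_ratio}

def flag_suspicious(text: str, neg_cut: float = 0.10, ttr_cut: float = 0.60) -> bool:
    f = cue_features(text)
    return f["negative_ratio"] >= neg_cut and f["type_token_ratio"] <= ttr_cut
```

The 83%+ accuracies the review reports come from models that combine many such cues with supervised training on fact-checked corpora; a two-feature threshold rule like this only conveys the shape of the approach.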
The analysis demonstrates that misinformation and willful disinformation are empirically distinguishable from good-faith disagreement, addressing critiques that deny the feasibility or utility of identification. By integrating evidence from computational linguistics, crowdsourcing, legal records, and internal corporate documents, the authors show multiple convergent pathways to infer both falsity and deceptive intent. This matters for democratic health: entrenched false beliefs about elections and public health corrode trust, fuel anti-democratic behavior, and cause tangible harm (e.g., excess mortality). The observed U.S. asymmetry in misinformation exposure and dissemination highlights the role of media ecosystems and elite rhetoric in shaping belief formation. Distinguishing legitimate normative trade-offs (e.g., fairness vs efficiency) from factually false narratives preserves the space for democratic deliberation while justifying targeted interventions against disinformation. The findings argue that protecting free expression need not preclude platform-level, rights-preserving measures—such as transparency, friction, credibility cues, and user education—that improve truth discernment and reduce the spread and impact of disinformation.
The paper contends that democracy depends on shared factual baselines and that willful disinformation undermines this foundation. It rebuts claims that misinformation research is censorship or that misinformation is too elusive to identify. Evidence shows that both misinformation and deceptive intent can often be detected by humans and machines, and that disinformation is distinct from good-faith debate. The authors recommend rights-preserving, scalable interventions: transparent moderation policies; credibility 'nutrition labels' and friction to slow sharing; critical ignoring (deliberately disengaging from low-quality sources); and psychological inoculation/prebunking via scalable educational content. They note European regulatory advances (e.g., the Digital Services Act and a strengthened Code of Practice on Disinformation) and platform-focused recommendations. Future research should examine the long-term cognitive and societal consequences of sustained exposure to low-quality information, generalizability across contexts (especially the Global South), and the limitations of fact-checking, including challenges with future-oriented claims and paltering. Enhancing discernment (believing accurate information more than misinformation) should be a central goal to counter cynicism and preserve democratic decision-making.
- The evidence of political asymmetry is contextual: not absolute, topic-dependent, and may not generalize outside the contemporary U.S. media/political environment; mainstream politicians in other democracies often show more balanced sharing of quality sources.
- The study is conceptual/synthetic rather than a primary empirical investigation; it relies on prior literature, case studies, and legal/documentary records.
- Fact-checking and detection face challenges: verifying future-oriented claims; paltering (truthful statements used to mislead); potential dataset biases; and varying effectiveness across cultural contexts.
- Attribution of broad societal trends to misinformation is difficult; while some causal links have been identified (e.g., media effects on health behaviors), generalizability remains uncertain.
- Platform and legal landscapes are evolving (e.g., moderation resource cuts, court rulings), which may affect the applicability and impact of proposed interventions, especially in the Global South where resources and language coverage are limited.