
Political Science
Disinformation on the COVID-19 pandemic and the Russia-Ukraine War: Two sides of the same coin?
R. S. D. Vas and J. T. Navarro
Explore how disinformation evolved during the COVID-19 pandemic and the Russia-Ukraine war in Europe, as revealed by researchers Rocío Sánchez del Vas and Jorge Tuñón Navarro. This study uncovers startling patterns in hoax frequency and dissemination methods that aim to stir emotions and divide audiences.
Introduction
The study addresses how disinformation surged around two major 21st-century European events—the COVID-19 pandemic and Russia’s invasion of Ukraine—and examines whether these waves of falsehoods share common patterns or differ across frequency, format, typology, platforms, and purpose. Against a backdrop of hybrid media systems, social media-centered news consumption, and rising polarization, the authors situate the research within the broader discussion of information disorders (disinformation, misinformation, propaganda, fake news, post-truth). They highlight how crises escalate information demand and create vacuums exploitable by disinformers. The research aims to compare disinformation related to COVID-19 (2020–2021) and the Russia–Ukraine war (2022–2023) across Spain, Germany, the UK, and Poland. It tests three hypotheses: (I) peaks in hoax dissemination at onset, with a faster decline for the war due to less direct impact on audiences; (II) textual fabrications dominating during the pandemic versus more out-of-context images during the war due to language/distance; (III) social networks as the main channels in both cases, with disinformation encouraging polarization and targeting a common enemy.
Literature Review
The paper synthesizes conceptual and empirical literature on information disorders and the hybrid media environment. It reviews post-truth as a context where emotional appeals outweigh facts, and distinguishes disinformation (deliberate falsehoods), misinformation (unintentional inaccuracies), and propaganda (manipulative persuasion, sometimes conflated with disinformation by institutions). Prior studies during COVID-19 documented the pandemic’s profound media-system effects, heightened uncertainty, and altered information behaviors, with social media serving as a viral arena for hoaxes. Research cited shows platforms like Facebook, Twitter (X), and TikTok facilitate rapid diffusion of falsehoods via algorithmic dynamics, echo chambers, and emotionally salient content. For the Russia–Ukraine war, literature frames the conflict as an example of hybrid warfare and a continuation of earlier Russian information operations, with increased reliance on social media for real-time, citizen-driven narratives. Studies on both events show disinformation spreads faster than truth, often leveraging visual content taken out of context, and that fact-checking networks have coordinated to counter hoaxes. The review also notes platform–fact-checker collaborations and cautions about potential biases toward public, monitorable networks versus private channels.
Methodology
- Design: Mixed methods with methodological triangulation, combining quantitative content analysis and qualitative semi-structured interviews within a comparative most similar systems design (MSSD).
- Units and sample: Fact-checks (debunking publications) from eight European fact-checkers across four countries: Spain (Newtral, Maldito Bulo), Germany (CORRECTIV Faktencheck, BR24 Faktenfuchs), United Kingdom (Full Fact, Reuters Fact Check), and Poland (Demagog, FakenewsPL).
- Time frame: March 2020 and March 2021 (pandemic); March 2022 and March 2023 (Russia–Ukraine war). Only verifications related to the specified topics were included. Total N = 812 verifications (pandemic 2020–2021: 515; war 2022–2023: 297).
- Variables: V1 frequency of fact-checks; V2 format (text, image, video, audio, or combined: text+image, text+video, text+audio); V3 hoax typology (Wardle, 2017): fabricated, manipulated, impostor, false context, misleading, false connection; V4 platform (social networks, blogs, media); V5 purpose (poor journalism, parody, provocation, economic gain, empowering a common enemy, political power/influence).
- Coding procedure: Two coders developed the categories after piloting on 10% of the sample, then coded in parallel and resolved ambiguities by consensus; inter-coder agreement reached 92%, and a final joint review ensured consistency.
- Interviews: Eight semi-structured online interviews (via Google Meet), 30–45 minutes, anonymized, covering three profiles: (1) experts in fact-checking, disinformation, and media; (2) experts in digital platforms, AI, and social network analysis; (3) experts in European disinformation policy and regulation. Tailored scripts (10–12 questions), aligned with the hypotheses, explored platform dynamics, algorithmic amplification, narrative similarities and differences, the regulatory context, and future trends.
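The coding procedure hinges on inter-coder reliability, reported here as 92% agreement. As a minimal sketch of how such a figure (and its chance-corrected counterpart, Cohen's kappa) is computed, using hypothetical typology labels rather than the authors' actual data:

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of items on which two coders assigned the same category."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for the level expected by chance."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)  # marginal counts per coder
    p_chance = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes from two coders using the Wardle (2017) typology
coder1 = ["fabricated", "false_context", "misleading", "fabricated", "manipulated"]
coder2 = ["fabricated", "false_context", "misleading", "impostor", "manipulated"]

print(percent_agreement(coder1, coder2))  # 0.8
```

In practice, percent agreement (as reported in the study) overstates reliability for skewed category distributions, which is why chance-corrected measures like kappa are often reported alongside it.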
Key Findings
- Frequency: Peaks coincided with crisis onsets: March 2020 accounted for 41% of all analyzed hoaxes; March 2022 for 32%. Country outputs (total N=812): Spain N=390 (48%), UK N=188 (23%), Poland N=136 (17%), Germany N=98 (12%). Country-month highlights: Spain—239 (Mar 2020), 54 (Mar 2021), 89 (Mar 2022), 8 (Mar 2023); Germany—26 (Mar 2020), 39 (Mar 2021), 25 (Mar 2022), 8 (Mar 2023); UK—33 (Mar 2020), 67 (Mar 2021), 76 (Mar 2022), 12 (Mar 2023); Poland—33 (Mar 2020), 24 (Mar 2021), 66 (Mar 2022), 13 (Mar 2023). Interest and verification volumes declined more sharply for the war than for the pandemic, reflecting indirect impact and audience fatigue.
- Format: Pandemic (2020–2021) hoaxes were predominantly text (45%). War (2022–2023) hoaxes were predominantly combined text+image (71%). Country specifics: Spain pandemic—text 38%, image 25%, combined 20%, video 9%, audio 7%; war—combined 67%, image 18%, video 8%, text 7%. Germany pandemic—text 48%, video 22%, combined 12%, image 9%, audio 9%; war—combined 55%, image 24%, video 15%, audio 6%. UK pandemic—text 52%, combined 17%, image 14%, video 14%, audio 2%, unspecified 1%; war—combined 84%, video 8%, text 6%, image 2%. Poland pandemic—text 63%, video 16%, combined 11%, image 9%, audio 2%; war—combined 67%, text 16%, video 9%, image 8%.
- Typology: Pandemic hoaxes were mainly fabricated content (47% overall), with manipulated (17%), misleading (16%), false context (13%) also present. War hoaxes were dominated by false context (44%), linked to visual miscontextualization. Country examples: Spain pandemic—fabricated 52%, manipulated 19%, false context 12%, impostor 8%, misleading 5%, false connection 5%; war—false context 54%, fabricated 24%, manipulated 11%, impostor 5%, misleading 5%, false connection 1%. Germany pandemic—fabricated 38%, misleading 35%, manipulated 11%, false context 11%, impostor 3%, false connection 2%; war—false context 36%, fabricated 30%, manipulated 21%, impostor 6%, misleading 6%. UK pandemic—fabricated 45%, misleading 27%, false context 13%, manipulated 12%, impostor 3%; war—false context 55%, fabricated 19%, manipulated 13%, false connection 11%, impostor 2%. Poland pandemic—fabricated 47%, manipulated 21%, false context 19%, misleading 12%; war—fabricated 59%, false context 24%, misleading 8%, manipulated 5%, impostor 3%, false connection 1%.
- Platforms: Social networks dominated: pandemic 76% vs war 89% of cases; blogs and traditional media were marginal. Country breakdowns: Spain pandemic—social 77% (WhatsApp 41% of social; X 18%; Facebook 14%), blogs 7%, media 5%, unspecified 11%; war—social 79% (X 42%; Facebook 35%), blogs 2%, media 1%, unspecified 18%. Germany pandemic—social 72% (Facebook 50%; WhatsApp 23%), blogs 20%, media 6%, unspecified 2%; war—social 86% (Facebook 45%; X 23%), blogs 14%. UK pandemic—social 83% (Facebook 67%; X 13%), media 9%, blogs 4%, unspecified 4%; war—social 100% (Facebook 44%; X 46%). Poland pandemic—social 65% (Facebook 65%), blogs 25%, media 8%, unspecified 3%; war—social 92% (Facebook 61%; X 25%), blogs 5%, media 3%.
- Purpose: Provocation and empowering a common enemy dominated. Pandemic: provocation 42%, common enemy 32%. War: provocation 45%, common enemy 44%. Country snapshots: Spain pandemic—provocation 53%, common enemy 25%, economic gain 16%, parody 4%, political influence 1%, poor journalism 1%; war—provocation 48%, common enemy 36%, political influence 5%, parody 5%, economic gain 3%, poor journalism 2%. Germany pandemic—common enemy 35%, provocation 25%, economic gain 22%, political influence 15%, parody 2%, poor journalism 2%; war—provocation 45%, common enemy 45%, political influence 6%, parody 3%. UK pandemic—common enemy 48%, provocation 31%, economic gain 9%, poor journalism 5%, political influence 4%, parody 3%; war—provocation 53%, common enemy 33%, political influence 8%, parody 2%, poor journalism 2%, economic gain 1%. Poland pandemic—common enemy 32%, economic gain 32%, provocation 30%, political influence 4%, poor journalism 4%; war—common enemy 65%, provocation 32%, political influence 3%, parody 1%.
- Additional: Despite concerns, the sample showed no substantial evidence of AI-driven deepfakes. Interview insights emphasized algorithmic amplification, platform affordances, and cross-topic migration of denialist communities from COVID-19 narratives to pro-Russian narratives after February 2022.
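The headline shares in the frequency finding follow directly from the country-month counts reported above, and can be cross-checked with a few lines of arithmetic (figures taken verbatim from the findings):

```python
# Fact-check counts per country for each analyzed March, as reported in the findings
counts = {
    "Spain":   {"2020": 239, "2021": 54, "2022": 89, "2023": 8},
    "Germany": {"2020": 26,  "2021": 39, "2022": 25, "2023": 8},
    "UK":      {"2020": 33,  "2021": 67, "2022": 76, "2023": 12},
    "Poland":  {"2020": 33,  "2021": 24, "2022": 66, "2023": 13},
}

total = sum(sum(by_year.values()) for by_year in counts.values())   # 812 verifications
march_2020 = sum(c["2020"] for c in counts.values())                # onset of the pandemic
march_2022 = sum(c["2022"] for c in counts.values())                # onset of the war

print(round(100 * march_2020 / total))  # 41 (% of all hoaxes, March 2020)
print(round(100 * march_2022 / total))  # 32 (% of all hoaxes, March 2022)
```

The same table also reproduces the country totals (Spain 390, UK 188, Poland 136, Germany 98) and the steep drop-off by March 2023 that the authors attribute to audience fatigue.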
Discussion
Findings show disinformation intensity tracks major news cycles, with surges at crisis onset and subsequent declines driven by reduced public attention and perceived relevance. The decline was steeper for Ukraine-related hoaxes than for pandemic content because the war’s everyday impact on broader European audiences (except Poland) was less direct, contributing to informational fatigue and reduced verification prioritization. Format and typology differences map onto context: during COVID-19, textual fabrications thrived due to urgent information demand, low production costs, and verification resource constraints, resulting in a higher share of fully fabricated items. During the war, language barriers and distance favored visual strategies (combined text+image), making false context the dominant typology through decontextualized photos and videos. Social platforms—especially Facebook and X—were the principal vectors in both crises, leveraging network effects, echo chambers, and algorithmic curation that heighten emotional engagement and polarization. The primary purposes—provocation and constructing a common enemy—align with social media dynamics that reward emotionally charged content, deepening divisions and undermining institutional trust. Interviewees and prior literature support these patterns, while also cautioning that fact-checker collaborations with platforms and the opacity of private messaging apps may skew observed distributions. Overall, the results substantiate the three hypotheses: synchronized peaks with faster decline for war, format/typology shifts across crises, and social networks as dominant channels facilitating polarizing aims.
Conclusion
The study provides a comparative analysis of European disinformation about COVID-19 (2020–2021) and the Russia–Ukraine war (2022–2023), confirming three hypotheses: (1) verified hoax frequency peaks with crisis onset and declines more quickly for the war due to indirect audience impact and waning interest; (2) pandemic disinformation was predominantly textual and fabricated, while war-related disinformation relied on images with false context; (3) social networks, notably Facebook and X, were the primary dissemination channels, with hoaxes designed to provoke emotions and reinforce a common enemy, thereby polarizing audiences. Contributions include a cross-country, multi-platform comparison anchored in standardized coding of formats, typologies, platforms, and purposes, integrating quantitative content analysis with expert interviews. Future research should expand geographic scope beyond four countries, include longer time horizons and additional months, incorporate private-messaging environments, and examine evolving AI-generated content and platform policy impacts to refine understanding of disinformation dynamics.
Limitations
- Sampling is constrained to fact-checks selected by eight verification outlets, introducing selection and platform-collaboration biases.
- Temporal scope limited to March in four consecutive years may not capture seasonal or event-driven variability across other months.
- Geographic scope restricted to four European countries; broader coverage could enhance generalizability.
- Private messaging ecosystems (e.g., WhatsApp, Telegram) are underrepresented due to monitoring challenges, potentially skewing platform distribution.
- Limited evidence of AI-generated deepfakes in the sample may reflect time frame or detection constraints rather than absence in the ecosystem.