Faking the war: fake posts on Turkish social media during the Russia-Ukraine war

Political Science


O. Uluşan and İ. Özejder

Explore the intriguing dynamics of fake posts on Turkish social media during the Russia-Ukraine war. This research, conducted by Oshan Uluşan and İbrahim Özejder, unveils how ideological polarization and misinformation have taken center stage, revealing common themes like hate speech, humor, and conspiracy theories. Discover how the war's narrative is shaped in unexpected ways.

Introduction
The paper examines the rising prominence and impact of fake news and misleading content in the digital era, especially around major global events such as the 2016 US election, COVID-19, and Russia’s invasion of Ukraine. It frames social media as a key vector for rapid dissemination of misinformation due to low verification barriers and algorithmic amplification. The study argues that the Russia-Ukraine war intensified polarization and ideological divergence on Turkish social media, where conflicting narratives and manipulated content proliferated. It positions an analysis of fake posts as a means to understand how polarization manifests online. The authors focus on Turkish-language social media content during the war, collected via Turkish fact-checking platforms, and pose the research question: In the context of the Russia-Ukraine war, which discursive themes are shaped around fake social media posts circulated on Turkish social media?
Literature Review
The literature review discusses competing definitions of fake news, ranging from content veracity to sender intent, and the conceptualization of fake news as a floating signifier used in power struggles. It highlights how crises (pandemics, wars) catalyze misinformation and the role of social media in blending true and false information, weakening traditional media’s gatekeeping. Prior work links social media affordances and algorithms to polarization, echo chambers, and increased virality of divisive content. In Turkey, a polarized media system aligned with political parties, online trolling, and reported orchestrated campaigns contribute to a contentious information environment. Studies on the Russia-Ukraine war address propaganda, diplomacy, meme use, TikTok dynamics, ideological language online, and fact-checking analyses; they underline how war-related misinformation spreads, draws on historical/ideological narratives (e.g., Nazism), and fuels polarized discourses. The review situates the present study as a qualitative, multimodal discourse analysis of Turkish social media’s fake posts about the war to identify themes shaping polarized narratives.
Methodology
Design: Multimodal Critical Discourse Analysis (MCDA) informed by Critical Discourse Analysis (CDA), examining how linguistic and visual choices reveal broader discourses and power relations. The analysis draws on the representational, interactive, and compositional metafunctions to interpret text, images, video, and their interplay.
Data sources and sampling: Fake posts were sourced from four Turkish fact-checking platforms that actively verified Russia-Ukraine war content: Teyit, Doğruluk Payı, Doğrula, and Malumatfuruş (three of them IFCN members). Timeframe: February 23, 2022 to June 10, 2023. Initial pool: 243 items; final analytic sample: 204 items cross-validated as false. Searches used Turkish keywords (e.g., “Rusya-Ukrayna Savaşı”, “Ukrayna”, “Rusya”).
Modalities: Facebook, Twitter (X), and TikTok content was categorized into video, text, image, and hashtag; modality coherence was examined jointly (e.g., image-text combinations within a single post).
Coding: Emergent, inductive coding identified themes; two coders independently reviewed visual (images, videos, graphics) and textual elements, integrating modalities to determine discourse and themes. Inter-coder reliability: Krippendorff’s α ≈ 0.76 (measured via Freelon’s tool); iterative reconciliation achieved consensus. Thematic counts: war reporting (n=71), ideological misrepresentation (n=55), humor (n=31), hate speech (n=23), conspiracy theories (n=24).
Rationale: Reliance on IFCN-affiliated fact-checkers mitigated subjective bias in identifying fake content and enhanced data credibility.
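The study reports inter-coder reliability as Krippendorff’s α ≈ 0.76, measured via Freelon’s tool. As a minimal illustration of what that statistic computes, here is a from-scratch sketch of nominal-level α for two or more coders; this is not the authors’ actual tooling, and the function name and data layout are assumptions:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: list of lists; each inner list holds the codes the coders
    assigned to one unit (units coded by fewer than two coders are
    skipped, since they yield no pairable values).
    """
    o = Counter()    # coincidence matrix: o[(c, k)] = coincidences of codes c, k
    n_c = Counter()  # marginal totals per code
    n = 0            # total number of pairable values
    for codes in units:
        m = len(codes)
        if m < 2:
            continue
        n += m
        for c in codes:
            n_c[c] += 1
        # Each ordered pair within a unit contributes 1/(m-1) coincidences.
        for c, k in permutations(codes, 2):
            o[(c, k)] += 1 / (m - 1)
    # Observed disagreement: coincidences of unequal codes.
    d_o = sum(v for (c, k), v in o.items() if c != k)
    # Expected disagreement under chance assignment of the pooled codes.
    d_e = sum(n_c[c] * n_c[k] for c, k in permutations(n_c, 2)) / (n - 1)
    return 1.0 - d_o / d_e

# Perfect agreement between two coders yields alpha = 1.0.
print(krippendorff_alpha_nominal([["a", "a"], ["b", "b"], ["a", "a"]]))
```

Values above roughly 0.67 are conventionally treated as acceptable for tentative conclusions, which is consistent with the authors resolving remaining disagreements through iterative reconciliation.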
Key Findings
- Five themes structure fake content: war reporting (35%, n=71), ideological misrepresentation (27%, n=55), humor (15%, n=31), hate speech (11%, n=23), and conspiracy theories (12%, n=24); N=204.
- War reporting: Posts repurpose decontextualized images/videos with news-like framing, clickbait, and historical/cultural references to (re)construct the war narrative rather than inform. A dominant sub-theme frames Ukraine as isolated/abandoned, urging NATO intervention or Turkish alignment.
- Ideological misrepresentation: Both pro-Ukrainian and pro-Russian posts deploy Nazi symbolism and distorted historical analogies. Examples include falsely labeling Ukrainian military insignia as Nazi symbols and manipulated covers likening Putin to Hitler, embedding polarized ideological binaries (“Nazi Ukrainians” vs. “Nazi Russians”).
- Humor: Decontextualized humorous/trolling content (e.g., ambiguous videos of explosions) blurs the seriousness and reality of war events, facilitating spread and engagement while embedding ideological cues.
- Hate speech: Zelensky is targeted in pro-Russian fake posts with xenophobic/LGBT-hostile narratives and portrayed as a Western puppet; conversely, pro-Ukrainian fake posts glorify him as a front-line soldier, using miscontextualized imagery to bolster leadership cues.
- Conspiracy theories: Narratives about “foreign powers,” the deep state, the Rothschilds, and staged war footage claim the war is orchestrated or unreal, repackaging familiar conspiracies to sow epistemic distrust. Hybrid conspiracy narratives merge personal opinions with conspiracy frames, enhancing virality.
- Overall, fake content concentrates around ideological polarization: misinformation and decontextualized humor blur the true context of the war, while hate speech and conspiracy narratives further distort and radicalize discourse.
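As a quick arithmetic check (reader-side, not from the paper), the reported percentages follow from the thematic counts and N=204:

```python
# Thematic counts as reported in the study (N=204).
counts = {
    "war reporting": 71,
    "ideological misrepresentation": 55,
    "humor": 31,
    "hate speech": 23,
    "conspiracy theories": 24,
}
total = sum(counts.values())  # 204
shares = {theme: round(100 * n / total) for theme, n in counts.items()}
print(total, shares)
```

Rounded to whole percentages this reproduces the 35/27/15/11/12 split given in the findings.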
Discussion
The findings address the research question by showing that fake posts on Turkish social media about the Russia-Ukraine war cluster into five discursive themes that intensify polarization. Decontextualized multimodal content enables reconstruction of war narratives aligned with partisan identities, reinforcing in-group/out-group divisions. Ideological misrepresentation via Nazi symbolism and historical analogies frames opponents as existential threats. Humor and trolling normalize misleading depictions, facilitating rapid diffusion within echo chambers. Hate speech personalizes attacks—especially on Zelensky—linking them to broader xenophobic and anti-LGBT rhetoric. Conspiracy theories recycle familiar tropes about “foreign powers” and staged conflicts, undermining trust in evidence and journalism, and enabling alternative, unsubstantiated narratives. These dynamics illustrate how algorithmic and social processes on Turkish social media promote polarized, antagonistic discourses, potentially mobilizing offline actions and legitimizing hostility while obscuring accurate information about the war.
Conclusion
This study provides a qualitative, multimodal discourse analysis of fake posts on Turkish social media concerning the Russia-Ukraine war, identifying five core themes—war reporting, ideological misrepresentation, humor, hate speech, and conspiracy theories—that collectively reconstruct and polarize the war’s meanings. The analysis demonstrates how manipulated multimodal content, historical/ideological symbols, and humor/conspiracy frames distort context and deepen polarization in Turkey’s already contentious media environment. The work contributes to literature by foregrounding the qualitative and discursive forms of deceptive content during wartime and by mapping how such content circulates and resonates in a polarized setting. Future research could expand to other platforms and languages, compare cross-national discursive patterns, integrate network/quantitative analyses of diffusion, and evaluate interventions (e.g., platform design, media literacy) targeting multimodal misinformation.