Introduction
Social networks have become primary communication channels between citizens and governments, accelerating interactions and creating avenues for mass information dissemination. This has significantly impacted diplomacy, allowing for closer contact between international actors and local populations. While social media offers benefits like building communities and optimizing budgets for diplomatic efforts, some governments utilize this channel to spread disinformation, influencing public opinion to serve their national interests. The 2016 US election, where Russia allegedly interfered through disinformation campaigns, and the COVID-19 pandemic, where Russian media spread misinformation about the virus's origins, serve as prominent examples. While case studies exist documenting disinformation in diplomacy, understanding its dynamics remains challenging due to information scarcity and difficulties in tracing origins. This study aims to simulate disinformation propagation, addressing how disinformation elements derived from social media diplomacy interact to affect susceptible populations and evaluating the impact of bots, trolls, and echo chambers.
Literature Review
The concept of disinformation, which originated in the early 20th century, has evolved to denote intentionally false information designed to deceive and manipulate a target audience; it differs from propaganda, which aims at long-term control. In diplomacy, disinformation uses social media to spread false information that destabilizes foreign states, benefiting the sender through societal discord or policy change. Although disinformation has historically relied on traditional media, social media amplifies its spread and impact through mechanisms such as echo chambers, bots, and trolls. Previous studies primarily document cases of state-sponsored disinformation, particularly from Russia and China, which have interfered in elections, polarized opinions, and eroded trust in traditional media. However, existing models often lack elements specific to diplomatic disinformation strategies, neglecting the strategic planning involved in maximizing a message's effect. This study aims to bridge that gap.
Methodology
To address the research questions, a computational simulation model was developed using system dynamics, an approach suited to the complexity and non-linearity of the disinformation system. The model incorporates elements identified in the literature: the target population (PO), susceptible population (PS), misinformed population (PD), informed population (PIn), unsubscribed population (PU), organic outreach (ao_n), paid outreach (ap), invitation-based outreach (ai), bots (B), and trolls (T). Seven levels (stocks) represent the five population groups plus bots and trolls. Additional variables include invitation fees, outreach effectiveness rates, cost-per-mille (CPM), campaign effectiveness, resubscription rates, bot and troll contact rates, delayed disinformation, engagement levels, and echo chamber effects.

The model was formalized as a system of differential equations, with parameters drawn from existing research and the US Senate Select Committee reports on Russian interference in the 2016 election. Validity was assessed through sensitivity analysis: 100 simulations were run with parameters varied by ±10%, and model behavior remained consistent across these runs. The Wilcoxon test was then used to check for statistically significant differences between the baseline model and five scenario simulations, each altering a specific parameter to answer the research questions.
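Because the paper's differential equations are not reproduced here, the following is only a minimal Python sketch of how the seven stocks described above could be wired together and integrated numerically; the flow equations, parameter values, and names such as `flows` and `params` are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of a stock-and-flow model over the seven levels named in the
# text (PO, PS, PD, PIn, PU, B, T). Flow equations and parameter values are
# illustrative assumptions, not the paper's actual formulation.

def flows(t, y, p):
    PO, PS, PD, PIn, PU, B, T = y
    reach = (p["ao"] + p["ap"] + p["ai"]) * PO         # organic + paid + invitation outreach
    misinform = (p["c_bot"] * B + p["c_troll"] * T + p["engagement"]) * PS
    inform = p["inform_rate"] * PS                      # susceptible users who become informed
    unsubscribe = p["unsub_rate"] * PIn                 # informed users who leave the channel
    return [
        -reach,                          # PO: target population shrinks as it is reached
        reach - misinform - inform,      # PS: susceptible population
        misinform,                       # PD: misinformed population
        inform - unsubscribe,            # PIn: informed population
        unsubscribe,                     # PU: unsubscribed population
        p["bot_growth"] * B,             # B: bots grow exponentially
        p["troll_growth"] * T,           # T: trolls grow exponentially
    ]

params = {  # illustrative values only
    "ao": 0.005, "ap": 0.01, "ai": 0.002,
    "c_bot": 1e-6, "c_troll": 1e-6, "engagement": 0.005,
    "inform_rate": 0.01, "unsub_rate": 0.002,
    "bot_growth": 0.05, "troll_growth": 0.04,
}
# Initial stocks: B = 1 and T = 10 match the reported findings; the rest are assumptions.
y0 = [1_700_000, 0, 0, 0, 0, 1, 10]
sol = solve_ivp(flows, (0, 180), y0, args=(params,), t_eval=np.arange(0, 181))
print(dict(zip(["PO", "PS", "PD", "PIn", "PU", "B", "T"],
               sol.y[:, -1].round().astype(int))))
```

A sensitivity analysis of the kind described above would rerun this integration (for example, 100 times) with each entry of `params` perturbed by ±10% and compare the resulting trajectories.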
Key Findings
The baseline simulation showed that, over 180 days, the disinformation agent reduced the target population (PO) by 84.3%, to 267,275 individuals. The misinformed population (PD) was 135,463 and the informed population (PIn) 148,117, while only a small number (11,779) unsubscribed. Further simulations explored specific parameters (see the sketch after this list for how such comparisons could be run):
* **Sim-1 (No Paid Outreach):** Eliminating paid campaigns reduced the misinformed population significantly (1,355,864 fewer people).
* **Sim-2 (No Bots):** Removing bots reduced the misinformed population by 514,360.
* **Sim-3 (No Trolls):** Removing trolls decreased the misinformed population by 497,343.
* **Sim-4 (Delayed Disinformation):** Delaying the start of disinformation resulted in an increase in the susceptible population, although the changes in informed and misinformed populations were minor.
* **Sim-5 (Varying Engagement):** Lower engagement reduced the misinformed population substantially, while high engagement drastically increased it.
The model also demonstrates the exponential growth of bots and trolls over time, from initial values of 1 and 10, respectively, to substantially larger counts.
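As referenced in the list above, a rough illustration of how such a scenario comparison could be carried out: the sketch below uses a deliberately simplified Euler version of the misinformed-population flow, runs it with and without bots (an analogue of Sim-2), and applies the Wilcoxon signed-rank test to the paired daily trajectories. Every rate, initial value, and function name is an assumption for illustration, not the paper's model.

```python
import numpy as np
from scipy.stats import wilcoxon

# Standalone sketch of a scenario comparison: a simplified Euler version of the
# misinformed-population flow is run with and without bots, and the paired daily
# trajectories are compared with the Wilcoxon signed-rank test. All rates and
# initial values are illustrative assumptions.

def misinformed_trajectory(initial_bots, days=180):
    PS, PD, B = 500_000.0, 0.0, initial_bots    # assumed susceptible stock, misinformed stock, bots
    trajectory = []
    for _ in range(days):
        new_pd = (1e-6 * B + 0.01) * PS         # bot contacts plus baseline engagement
        PS -= new_pd
        PD += new_pd
        B += 0.05 * B                           # bots grow exponentially
        trajectory.append(PD)
    return np.array(trajectory)

baseline = misinformed_trajectory(initial_bots=1.0)   # bots active, as in the baseline run
no_bots = misinformed_trajectory(initial_bots=0.0)    # analogue of Sim-2: bots removed

stat, p_value = wilcoxon(baseline, no_bots)           # paired, day-by-day comparison
print(f"{baseline[-1] - no_bots[-1]:,.0f} fewer misinformed people when bots are removed")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3g}")
```

The same pattern applies to the other scenarios: zeroing paid outreach or troll counts, delaying the onset of disinformation, or scaling the engagement rate.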
Discussion
The findings highlight the crucial role of paid outreach, bots, and trolls in effective disinformation campaigns. The model demonstrates that the systematic use of these elements allows disinformation agents to efficiently reach and influence a large portion of the target population. While some individuals become informed and unsubscribe, the overall impact remains significant. The findings support the need to examine the role of social media algorithms in amplifying disinformation. The results contribute to understanding disinformation's macro-level dynamics in international relations, moving beyond local-level analyses. The model’s adaptability across different social media platforms enhances its relevance to diverse contexts.
Conclusion
This study provides a novel simulation model for understanding the spread of disinformation as a diplomatic strategy, integrating key elements often analyzed separately. The results underscore the significance of paid campaigns, bots, and trolls in disinformation success. Future research should consider incorporating additional elements, such as the role of fact-checking efforts and media literacy, into the model. Further analysis with randomized parameters will improve the model’s robustness. The model serves as a valuable tool for analyzing disinformation’s spread and developing counter-strategies.
Limitations
The simulations were conducted under a *ceteris paribus* assumption, meaning only one parameter was changed at a time; further investigation varying several parameters simultaneously is needed. The model is deterministic and neglects stochastic factors that may influence disinformation spread. It also reflects current tactics; future advances in disinformation methods may require updates to the model. Finally, while the sensitivity analysis provides some validation, additional validation techniques could strengthen the model's robustness.