Systematic meta-analysis of research on AI tools to deal with misinformation on social media during natural and anthropogenic hazards and disasters

Interdisciplinary Studies


R. Vicari and N. Komendantova

This meta-analysis by Rosa Vicari and Nadejda Komendantova examines how AI tools can tackle misinformation on social media during crises. With most of the reviewed studies focused on COVID-19, the analysis highlights the limited involvement of the social sciences and emphasizes the need for a better balance between algorithmic recommendations and user agency.

Abstract
The spread of misinformation on social media has led to the development of artificial intelligence (AI) tools to deal with this phenomenon. These tools are particularly needed when misinformation relates to natural or anthropogenic disasters such as the COVID-19 pandemic. The major research question of our work was as follows: what kind of gatekeepers (i.e. news moderators) do we wish social media algorithms and users to be when dealing with misinformation on hazards and disasters? To address this question, we carried out a meta-analysis of studies published in Scopus and Web of Science. We extracted 668 papers that contained key terms related to the topic of “AI tools to deal with misinformation on social media during hazards and disasters.” The methodology included several steps. First, we selected 13 review papers to identify relevant variables and refine the scope of our meta-analysis. Then we screened the remaining papers and identified 266 publications as relevant to our research goals. For each eligible paper, we analyzed its objective, sponsor’s location, year of publication, research area, type of hazard, and related topics. As methods of analysis, we applied descriptive statistics, a network representation of keyword co-occurrences, and a flow representation of the research rationale. Our results show that few studies come from the social sciences (5.8%) and humanities (3.5%), and that most of those papers are dedicated to the COVID-19 risk (92%). Most of the studies deal with the question of detecting misinformation (68%). Only a few countries are major funders of research on this topic. These results allow some inferences. Social sciences and humanities seem under-represented for a topic that is strongly connected to human reasoning. A reflection on the optimum balance between algorithmic recommendations and user choices seems to be missing. Research results on the pandemic could be exploited to enhance research advances on other risks.
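For readers unfamiliar with the keyword co-occurrence analysis mentioned in the abstract, the following is a minimal illustrative sketch, not the authors' actual pipeline. It assumes each eligible paper's keywords are available as a list; the sample keyword data and the use of the networkx library are hypothetical.

```python
# Minimal sketch of a keyword co-occurrence network (hypothetical data).
from itertools import combinations
from collections import Counter

import networkx as nx

# Hypothetical keyword lists extracted from eligible papers' metadata.
papers = [
    ["misinformation", "COVID-19", "social media", "deep learning"],
    ["misinformation", "fake news detection", "social media"],
    ["COVID-19", "infodemic", "social media", "natural language processing"],
]

# Count how often each pair of keywords appears together in the same paper.
cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# Build a weighted, undirected graph: nodes are keywords,
# edge weights are co-occurrence counts across the corpus.
graph = nx.Graph()
for (a, b), weight in cooccurrence.items():
    graph.add_edge(a, b, weight=weight)

# Descriptive output: the most connected keywords by weighted degree.
for node, degree in sorted(graph.degree(weight="weight"),
                           key=lambda item: item[1], reverse=True)[:5]:
    print(f"{node}: weighted degree {degree}")
```

In such a network, densely connected clusters of keywords indicate recurring research themes, which is the kind of structure the paper's co-occurrence analysis is used to surface.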
Publisher
Humanities and Social Sciences Communications
Published On
Jun 17, 2023
Authors
Rosa Vicari, Nadejda Komendantova
Tags
AI tools
misinformation
COVID-19
social media
disasters
algorithmic recommendations
user choices