The rapid spread of misinformation on social media, particularly during hazards and disasters, poses significant challenges. The COVID-19 pandemic exemplified the severe consequences of such misinformation, including psychological distress, reduced trust in authorities, and hindered risk management. Similar impacts occur during natural disasters, as illustrated by Hurricane Irma, where false death-toll reports exacerbated social tensions. This study addresses a crucial question: what role should algorithms and users play in moderating information during such crises? The research surveys existing AI tools and methodologies used to detect and mitigate misinformation related to both natural and anthropogenic hazards and disasters. The goal is to map the current research landscape and identify gaps, in order to support the development of effective solutions that respect human rights and journalistic ethics.
Literature Review
A review of 13 relevant papers revealed a strong focus on COVID-19 to the neglect of other hazards. The variables examined in existing research centered on AI tools, misinformation content and impacts, and bot analysis. However, crucial variables, such as research objectives, research areas, types of hazards addressed, and sponsor locations, were largely absent. This meta-analysis addresses these gaps by incorporating those variables into its scope.
Methodology
The study employed a systematic meta-analysis of 266 research papers extracted from Scopus and Web of Science using keywords related to AI, misinformation, social media, and various hazard types. The selection process involved several steps: initial keyword-based screening of abstracts, followed by full-text analysis to ensure relevance. The PRISMA 2020 flow diagram guided the selection and inclusion process. Descriptive statistics were used to analyze publication year, research area, hazard type, and sponsor location. Keyword co-occurrence networks visualized relationships between topics, and a Sankey flow diagram represented the distribution of studies across research objectives. Data analysis was performed with the VOSviewer and SankeyMATIC software tools.
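The study built its keyword co-occurrence networks with VOSviewer. As a minimal illustrative sketch of the underlying idea (not the study's actual pipeline; the function name and toy corpus below are hypothetical), co-occurrence counting reduces to tallying how often each unordered pair of keywords appears in the same paper:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    """Count how often each pair of keywords appears together in one paper.

    keyword_lists: one list of author/index keywords per paper.
    Returns a Counter mapping sorted keyword pairs to co-occurrence counts,
    i.e. the weighted edges of a co-occurrence network.
    """
    pairs = Counter()
    for keywords in keyword_lists:
        # Deduplicate and sort so each unordered pair counts once per paper.
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical toy corpus: keyword lists from three papers.
papers = [
    ["covid-19", "misinformation", "social media"],
    ["covid-19", "deep learning", "misinformation"],
    ["social media", "misinformation", "bots"],
]
edges = cooccurrence_counts(papers)
# ("covid-19", "misinformation") co-occurs in two of the three papers.
```

Tools like VOSviewer then cluster and lay out this weighted graph, which is how the COVID-19 and AI-technique clusters reported below would emerge from the raw counts.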
Key Findings
The analysis revealed a surge in publications since 2020, largely driven by the COVID-19 pandemic. Computer science dominated the research areas (50.3%), with significant underrepresentation of the social sciences (5.8%) and humanities (3.5%). A striking 92% of studies focused on COVID-19, while other hazards received minimal attention. Network analysis confirmed the prominence of COVID-19 and social media keywords, revealing two main clusters: one focused on COVID-19 information and the other on AI techniques. The majority of studies aimed to detect misinformation (68%), while fewer pursued other objectives such as impact assessment and countermeasure development. The United States was the leading funder (25 papers), followed by China, Spain, and Italy (14-16 papers each). A correlation between high COVID-19 death rates and research output was observed in some, but not all, heavily affected countries.
Discussion
The findings highlight a considerable research focus on COVID-19 misinformation and AI-driven detection methods, neglecting other hazards and crucial social science perspectives. The underrepresentation of social sciences raises concerns about the ethical and societal implications of AI tools. The dominance of detection over other objectives points toward a need for research addressing the broader impact of misinformation and the development of mitigation strategies. The skewed funding landscape raises questions about global equity in research efforts. The lack of exploration into the balance between algorithmic recommendations and user choices is another crucial gap that needs attention.
Conclusion
This meta-analysis reveals significant gaps in the research on AI tools for combating social media misinformation during hazards and disasters. Future research should broaden the scope beyond COVID-19, incorporate diverse perspectives from the social sciences and humanities, examine the interplay between algorithms and users, and address geographical disparities in research funding. Developing effective solutions requires a comprehensive approach, prioritizing both technological advancements and a deep understanding of human behavior and societal context.
Limitations
The study's scope is limited to papers indexed in Scopus and Web of Science, potentially excluding relevant grey literature. The reliance on keywords for paper selection might introduce bias. The interpretation of funding data is limited by the availability of information in the databases. The focus on abstracts and keywords may not fully capture the nuances of individual research projects.