From online hate speech to offline hate crime: the role of inflammatory language in forecasting violence against migrant and LGBT communities

Sociology

C. A. Calderón, P. S. Holgado, et al.

This study explores how online hate speech can help forecast offline hate crimes against migrants and the LGBT community in Spain, finding that toxic language is a key leading indicator. Conducted by a team from the University of Salamanca and the National Office for Combating Hate Crimes, the research links inflammatory social media posts to subsequent offline violence.

Abstract
Social media messages often provide insights into offline behaviors. Although hate speech proliferates rapidly across social media platforms, it is rarely recognized as a cybercrime, even when it may be linked to offline hate crimes that typically involve physical violence. This paper aims to anticipate violent acts by analyzing online hate speech (hatred, toxicity, and sentiment) and comparing it to offline hate crime. The dataset for this preregistered study included social media posts from X (previously called Twitter) and Facebook and internal police records of hate crimes reported in Spain between 2016 and 2018. After preliminary data analysis confirming a moderate temporal correlation between the series, we used time series analysis to develop computational models (VAR, GLMNet, and XGBTree) to predict four time periods of these rare events on a daily and weekly basis. Forty-eight models were run to forecast two types of offline hate crimes, those against migrants and those against the LGBT community. The best model for migrant crime achieved an R² of 64%, while that for LGBT crime reached 53%. According to the best ML models, the weekly aggregations outperformed the daily aggregations, the national models outperformed those geolocated in Madrid, and those about migration were more effective than those about LGBT people. Moreover, toxic language outperformed hatred and sentiment analysis, Facebook posts were better predictors than tweets, and in most cases, speech temporally preceded crime. Although we do not make any claims about causation, we conclude that online inflammatory language could be a leading indicator for detecting potential hate crime acts and that these models can have practical applications for preventing these crimes.
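The abstract describes forecasting crime counts from lagged online-speech signals using time series models such as VAR, GLMNet, and XGBTree. The sketch below is a minimal illustration of that general idea, not the authors' code: it builds lagged weekly toxicity features and fits a scikit-learn gradient-boosted regressor as a stand-in for the XGBTree-style model. The column names, synthetic data, and four-week lag window are assumptions made for demonstration only.

```python
"""
Illustrative sketch (not the paper's implementation): forecast weekly
hate-crime counts from lagged online toxicity scores. Synthetic data,
column names, and lag length are assumptions, not details from the study.
"""
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
weeks = 156  # roughly three years of weekly aggregations (2016-2018)

# Synthetic weekly toxicity index and a crime series that lags it by one week.
toxicity = rng.gamma(shape=2.0, scale=1.0, size=weeks)
crimes = np.round(2 + 1.5 * np.roll(toxicity, 1) + rng.normal(0, 0.5, weeks))
df = pd.DataFrame({"toxicity": toxicity, "hate_crimes": crimes})

# Lagged speech features: past weeks of toxicity predict this week's crimes.
for k in range(1, 5):
    df[f"toxicity_lag{k}"] = df["toxicity"].shift(k)
df = df.dropna()

X = df[[c for c in df.columns if c.startswith("toxicity_lag")]]
y = df["hate_crimes"]

# Temporal split: train on the earliest 80% of weeks, test on the rest,
# so the model only ever predicts forward in time.
split = int(len(df) * 0.8)
model = GradientBoostingRegressor(random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
print("Out-of-sample R^2:",
      round(r2_score(y.iloc[split:], model.predict(X.iloc[split:])), 2))
```

The chronological train/test split mirrors the forecasting framing in the abstract: speech observed before a given week is used to anticipate crimes in that week, rather than mixing past and future observations as a random split would.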
Publisher
HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS
Published On
Oct 15, 2024
Authors
Carlos Arcila Calderón, Patricia Sánchez Holgado, Jesús Gómez, Marcos Barbosa, Haodong Qi, Alberto Matilla, Pilar Amado, Alejandro Guzmán, Daniel López-Matías, Tomás Fernández-Villazala
Tags
hate speech
hate crimes
migrants
LGBT community
social media
predictive modeling
Spain