Diverse misinformation: impacts of human biases on detection of deepfakes on networks

Computer Science


J. Lovato, J. St-Onge, et al.

This research by Juniper Lovato and colleagues explores human biases in identifying deepfakes, revealing that people perform better when videos align with their own demographics. The study's mathematical model suggests that members of diverse social groups may help shield each other from misinformation. Dive into the findings that could change how we understand video deception!

Abstract
Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call “diverse misinformation” the complex relationships between human biases and demographics represented in misinformation. To investigate how users’ biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: (1) their classification as misinformation is more objective; (2) we can control the demographics of the personas presented; (3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N = 2016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide “diverse correction” where friends can protect each other. Altogether, human biases and the attributes of misinformation greatly influence susceptibility, but having a diverse social group may help reduce susceptibility to misinformation.
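The paper's population-level model is not reproduced here, but the intuition behind "diverse correction" can be illustrated with a minimal toy simulation. Everything below (network construction, parameter values, update rule) is an illustrative assumption, not the authors' actual model: misinformed nodes are corrected by informed neighbors, with a higher correction probability when the two nodes share a demographic group.

```python
import random

def simulate_correction(n_nodes, n_groups, p_match, p_mismatch,
                        edges_per_node=3, init_informed=0.1,
                        rounds=50, seed=42):
    """Toy 'diverse correction' dynamic (illustrative sketch only):
    misinformed nodes can be corrected by informed neighbors, with
    probability p_match if they share a demographic group and
    p_mismatch otherwise. Returns the final misinformed fraction."""
    rng = random.Random(seed)
    # Assign each node a random demographic group and random contacts.
    groups = [rng.randrange(n_groups) for _ in range(n_nodes)]
    neighbors = [[rng.randrange(n_nodes) for _ in range(edges_per_node)]
                 for _ in range(n_nodes)]
    # Most nodes start misinformed; a small seed fraction is informed.
    misinformed = [rng.random() > init_informed for _ in range(n_nodes)]
    for _ in range(rounds):
        for i in range(n_nodes):
            if not misinformed[i]:
                continue
            for j in neighbors[i]:
                if misinformed[j]:
                    continue  # only informed contacts can correct
                p = p_match if groups[i] == groups[j] else p_mismatch
                if rng.random() < p:
                    misinformed[i] = False  # corrected by neighbor j
                    break
    return sum(misinformed) / n_nodes

# Compare correction that only works within one's own group
# against correction that also works across groups.
homophilous = simulate_correction(200, 4, p_match=0.3, p_mismatch=0.0)
diverse = simulate_correction(200, 4, p_match=0.3, p_mismatch=0.3)
```

In this sketch, enabling cross-group correction (`p_mismatch > 0`) leaves fewer nodes misinformed at the end of the run, echoing the abstract's suggestion that diverse contacts let friends protect each other.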
Publisher
npj Complexity
Published On
Jan 01, 2024
Authors
Juniper Lovato, Jonathan St-Onge, Randall Harp, Gabriela Salazar Lopez, Sean P. Rogers, Ijaz UI Haq, Laurent Hébert-Dufresne, Jeremiah Onaolapo
Tags
deepfakes
human biases
demographics
misinformation
social groups
detection
accuracy