Persistent interaction patterns across social media platforms and over time

Sociology


M. Avalle, N. D. Marco, et al.

This research by Michele Avalle and colleagues reveals persistent patterns of toxic content across social media platforms spanning more than three decades. The study shows how human behavior shapes online discourse: longer conversations tend to exhibit higher toxicity, yet toxicity does not consistently discourage participation.

Introduction
The pervasive influence of social media on public discourse and social dynamics has raised significant concerns, particularly regarding online toxicity. While previous research has explored polarization, misinformation, and antisocial behaviors online, a major challenge has been separating inherent human behaviors from the effects of platform design and algorithms. Data limitations often hinder this separation, as platform interactions intricately blend human behavior with algorithmic influences. This study addresses this challenge by focusing on toxicity—a prevalent concern in online conversations—and employing a comparative analysis across diverse social media platforms and time periods. The aim is to identify invariant human patterns in online interactions, transcending platform-specific effects. The lack of non-verbal cues in online communication often contributes to increased incivility, especially in comment sections and political discussions. Exposure to uncivil language can lead to hostile interpretations, influencing judgment and potentially fostering polarized perspectives. Online users tend to seek out information confirming pre-existing beliefs, potentially creating echo chambers. While the prevalence and intensity of echo chambers vary across platforms, suggesting the impact of platform design, isolating intrinsic human behaviors from platform-specific effects remains crucial. This study seeks to disentangle these effects by focusing on the consistent patterns of toxicity across diverse platforms and timeframes.
Literature Review
Existing research extensively addresses polarization, misinformation, and antisocial behaviors on social media. However, most studies focus on specific platforms and topics, limiting the understanding of overarching patterns. The fragmentation of research makes it difficult to determine whether perceptions of online toxicity are accurate or misinterpretations. This study addresses these gaps by examining a broad dataset spanning multiple platforms and time periods, aiming to identify consistent patterns in toxicity and user behavior.
Methodology
The study analyzed approximately 500 million comments from eight social media platforms: Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat, and YouTube. The data covered diverse topics and spanned over three decades. Toxicity was defined using Google's Perspective API, a state-of-the-art classifier that identifies toxic comments as "rude, disrespectful, or unreasonable comments that are likely to make people leave a discussion." The researchers validated this definition by comparing its consistency with other toxicity detection tools. The analysis focused on three dimensions: time, platform, and topic. The study characterized conversations based on engagement and participation metrics, including user activity (number of comments per user), thread length (number of comments in a thread), and a participation metric (ratio of unique users to comments in a given interval). The participation metric was tracked over segments of conversations to assess how participation changed as threads evolved. To investigate the relationship between conversation size and toxicity, conversations were grouped according to length using logarithmic binning. The average toxicity was then analyzed for each size interval to observe the trend. The researchers also examined how toxicity evolved during a conversation, using a similar interval-based approach. They investigated whether toxicity discouraged participation, examining the correlation between participation and toxicity trends. To explore the drivers of toxicity, the study used proxies for controversy. Political leaning was inferred from user endorsements of news outlets in some datasets (Facebook News, Twitter News, Twitter Vaccines, and Gab Feed) to quantify the controversy within conversations using the standard deviation of user political leanings. Sentiment analysis, using a pre-trained BERT model, was also employed to assess sentiment distribution within conversations. 
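The logarithmic-binning step described above can be illustrated with a small sketch. This is not the authors' code; the function name and the data are hypothetical, and a thread is reduced to a (length, toxicity) pair for brevity:

```python
import math
from collections import defaultdict

def mean_toxicity_by_length(threads, bins_per_decade=2):
    """Group threads into logarithmic length bins and average a
    per-thread toxicity score within each bin.

    `threads` is a list of (length, toxicity_fraction) pairs;
    threads of a similar order of magnitude share a bin."""
    binned = defaultdict(list)
    for length, toxicity in threads:
        bin_index = math.floor(math.log10(length) * bins_per_decade)
        binned[bin_index].append(toxicity)
    # mean toxicity per size interval, ordered by bin
    return {b: sum(v) / len(v) for b, v in sorted(binned.items())}
```

Averaging within logarithmic bins is what makes the reported trend (longer conversations being more toxic) visible despite the heavy-tailed length distribution, since linear bins would leave the long-thread tail almost empty.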
The relationships among toxicity, endorsement (likes/upvotes), and engagement (measured using a burst-detection algorithm) were then analyzed to identify potential correlations.
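A correlation check of this kind can be sketched as follows. The per-interval series below are invented for illustration (they are not the study's data), and a plain Pearson coefficient stands in for the paper's full burst-detection analysis:

```python
def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# hypothetical per-interval series for one conversation:
# comment volume (engagement) and mean toxicity in each interval
engagement = [10, 40, 90, 60, 20]
toxicity = [0.05, 0.12, 0.22, 0.15, 0.07]
```

Here `pearson(engagement, toxicity)` is strongly positive, mirroring the finding that toxicity spikes tend to coincide with activity peaks.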
Key Findings
The study revealed several key findings:

1) Across all platforms, user activity and thread length exhibited heavy-tailed distributions, consistent with previous research.
2) User participation decreased as conversations evolved: fewer unique users contributed as a conversation progressed, while those who remained became more active.
3) Longer conversations consistently displayed higher toxicity levels, irrespective of platform and topic.
4) Contrary to a common assumption, toxicity did not consistently discourage participation; the average toxicity level remained largely stable throughout a conversation.
5) Controversy and toxicity were strongly linked: conversations with greater diversity of opinions (measured by the standard deviation of user political leanings) tended to be more toxic.
6) The correlation between likes/upvotes and toxicity was not consistently positive, suggesting that endorsement does not always drive increased toxicity.
7) Toxicity levels tended to rise during periods of high user engagement, with toxicity spikes coinciding with peaks in user activity.

In summary, the findings show significant consistency in conversation dynamics across platforms and time, underscoring the role of human behavior in shaping online discourse. The study challenges assumptions about toxicity's effect on participation, highlighting the potential for increased toxicity during periods of high engagement and strong opinion diversity.
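The declining-participation finding rests on the unique-users-to-comments ratio tracked over successive segments of a thread. A minimal sketch, with an invented author sequence rather than real data:

```python
def participation_by_segment(comment_authors, n_segments=4):
    """Split a thread's ordered list of comment authors into equal
    segments and return the unique-author/comment ratio per segment.

    A ratio near 1 means many distinct users; a falling ratio means
    a shrinking set of increasingly active users dominates."""
    size = max(1, len(comment_authors) // n_segments)
    ratios = []
    for i in range(0, size * n_segments, size):
        segment = comment_authors[i:i + size]
        ratios.append(len(set(segment)) / len(segment))
    return ratios

# authors of successive comments in one hypothetical thread
authors = ["a", "b", "c", "d", "a", "b", "a", "b", "a", "a", "a", "a"]
ratios = participation_by_segment(authors)  # declines toward the end
```

For simplicity this sketch drops any comments left over after even division; a fuller version would distribute the remainder across segments.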
Discussion
The findings of this study challenge the assumption that toxicity directly leads to decreased participation in online conversations. The consistent patterns observed across various platforms and time periods suggest that human behavior, particularly the dynamic interplay between contrasting opinions and engagement levels, is a key driver of toxicity. The strong correlation between controversy (measured as the diversity of user opinions) and toxicity supports this assertion, suggesting that heightened disagreement can fuel more hostile interactions. However, this does not imply a simple causal relationship; the interplay is likely more complex. While the study doesn't establish direct causality, it strongly suggests that monitoring polarization might serve as an indicator for early intervention in potentially toxic discussions. The research highlights the need for sophisticated moderation techniques that go beyond simply removing toxic comments and instead consider the overall conversation dynamics and potential for escalation.
Conclusion
This study provides compelling evidence of consistent patterns of toxicity in online conversations across diverse platforms and timeframes, challenging prevailing assumptions about the relationship between toxicity and user engagement. The research suggests that focusing on mitigating controversy, specifically managing the dynamic interplay of diverse opinions, may be a more effective strategy for improving online discourse quality than solely targeting toxic comments. Future research should explore the role of other contributing factors, such as the influence of specific users and cultural or demographic aspects, in greater depth.
Limitations
The study acknowledges several limitations. First, the use of political leaning as a proxy for overall opinion diversity might not fully capture the nuances of online discourse. Second, political leaning could not be assessed for all platforms, restricting the analysis of controversy to a subset of the datasets. Third, the focus on large-scale quantitative analysis may oversimplify the intricate dynamics of individual conversation threads. Fourth, the study does not capture the behavior of passive users, so toxicity may still discourage participation by driving lurkers away even if it does not cause active commenters to leave. Despite these limitations, the study's large-scale analysis offers valuable insights into the consistent patterns of online toxicity.