Persistent interaction patterns across social media platforms and over time

M. Avalle, N. Di Marco, et al.

Dive into the dynamics of online discourse with this insightful study by Michele Avalle and colleagues. This research uncovers persistent patterns of toxic content across eight social media platforms over 34 years. Explore how human behavior shapes hostile interactions and discover the nuances of user participation amidst rising toxicity.

Introduction
The proliferation of social media has profoundly impacted public discourse and social dynamics, raising concerns about toxicity and its effects. Previous research has explored issues like polarization and misinformation, but separating inherent human behaviors from platform effects has proven challenging due to data limitations. This study addresses this challenge by focusing on toxicity across different platforms and time periods to identify invariant human patterns in online conversations. The lack of non-verbal cues online can contribute to increased incivility, especially in comment sections and political discussions. Users tend to seek information confirming pre-existing beliefs, leading to echo chambers that reinforce shared narratives. The design and algorithms of social media platforms, aiming to maximize user engagement, can significantly shape online social dynamics, making it hard to distinguish organic interaction from platform influence. This research uses a comparative analysis of conversations across eight platforms over 34 years to uncover consistent patterns in toxicity dynamics and shed light on inherently invariant human behaviors in online interactions.
Literature Review
Existing research extensively examines polarization, misinformation, and antisocial behavior on social media, but most studies focus on a single platform or topic, hindering a comprehensive understanding of online toxicity dynamics. This fragmentation makes it difficult to assess whether common perceptions of online toxicity are accurate or misconceptions, and key questions remain largely unanswered: whether online discussions are inherently toxic, and how toxic conversations differ from non-toxic ones. Inconsistent definitions of toxicity across studies further complicate comparative analysis, and the debated efficacy of automated toxicity detection systems highlights the need for robust, validated methodologies.
Methodology
This study analyzed approximately 500 million comments from eight platforms (Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat, and YouTube) across diverse topics and over three decades. The researchers used Google's Perspective API, a state-of-the-art toxicity classifier, to detect toxic language, defining a toxic comment as "a rude, disrespectful, or unreasonable comment likely to make someone leave a discussion"; this definition's consistency was validated against other detection tools. The analysis spanned three dimensions: time, platform, and topic. The study first characterized conversations macroscopically by engagement and participation, then analyzed how toxicity behaves as conversations unfold, and finally explored potential drivers of toxic speech. Conversation analysis considered user activity (number of comments posted) and thread length. The participation metric, calculated across equal-size conversation intervals, is the ratio of unique users to comments within each interval. Toxicity was measured with the Perspective API's toxicity score, using a threshold of 0.6 to label a comment as toxic. The study examined the prevalence of toxic speech, highly toxic users and conversations, and the correlation between conversation length and toxicity. Controversy was assessed through proxies, such as the spread of political leanings among users and sentiment discrepancies, to examine its relationship with toxicity and conversation size. The endorsement of toxic content (likes/upvotes) and the relationship between toxicity and engagement peaks (burst analysis) were also investigated. Data analysis included linear regression, the Mann-Kendall test, and Pearson's correlation to assess the significance of the observed trends and relationships.
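The interval-based participation metric and the 0.6 toxicity threshold described above can be sketched as follows. This is a minimal illustration, not the authors' code: the input format (a list of `(user_id, toxicity_score)` pairs in posting order) and the `n_intervals` default are assumptions for the example.

```python
TOXICITY_THRESHOLD = 0.6  # Perspective API score above which a comment counts as toxic

def participation_by_interval(comments, n_intervals=20):
    """Split an ordered conversation into equal-size intervals and, for each,
    compute (participation, toxic_rate), where participation is the ratio of
    unique users to comments in that interval.

    `comments` is a list of (user_id, toxicity_score) pairs in posting order;
    this input shape is assumed for illustration."""
    size = max(1, len(comments) // n_intervals)
    metrics = []
    for i in range(0, len(comments), size):
        chunk = comments[i:i + size]
        unique_users = {user for user, _ in chunk}
        participation = len(unique_users) / len(chunk)
        toxic_rate = sum(score > TOXICITY_THRESHOLD for _, score in chunk) / len(chunk)
        metrics.append((participation, toxic_rate))
    return metrics
```

A falling participation curve with a flat toxic-rate curve would reproduce, in miniature, the study's headline pattern: fewer unique users keep the conversation going while the share of toxic comments stays roughly constant.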
Key Findings
The study revealed consistent macroscopic patterns across all platforms, regardless of platform features or moderation policies. These included heavy-tailed distributions of user activity and thread length, consistent with previous studies. User participation decreased as conversations evolved, while the activity of the remaining participants increased. Across all platforms, longer conversations consistently exhibited higher toxicity, yet the rate of toxic comments remained mostly stable over a conversation's lifetime, challenging the assumption that toxicity discourages participation or invariably escalates. Statistical analysis showed that average toxicity remained mostly stable throughout conversations, independent of user participation, and the Pearson correlation between user-participation and toxicity trends showed no consistent pattern across datasets. A positive correlation emerged between controversy (measured by the spread of political leanings or by sentiment discrepancies) and conversation toxicity, particularly on moderated platforms. The number of likes/upvotes was not consistently correlated with increased toxicity. Finally, toxicity increased around peaks of user engagement identified by a burst detection algorithm.
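The "mostly stable" toxicity trend was assessed with the Mann-Kendall test mentioned in the methodology. A minimal sketch of its S statistic, written for illustration rather than taken from the paper, shows the idea: count concordant minus discordant pairs, so a near-zero S over per-interval toxicity rates suggests no monotonic trend.

```python
from itertools import combinations

def mann_kendall_s(series):
    """Mann-Kendall S statistic for a sequence of measurements.

    Sums sign(x_j - x_i) over all pairs with i < j: a large positive S
    indicates an increasing monotonic trend, a large negative S a decreasing
    one, and S near zero suggests no trend (the "stable toxicity" case)."""
    return sum((xj > xi) - (xj < xi) for xi, xj in combinations(series, 2))
```

In practice S is normalized into a z-score and tested for significance (e.g. via `scipy.stats`); the raw statistic above is enough to convey the pairwise-comparison logic.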
Discussion
The findings challenge common assumptions about online conversation dynamics. Toxicity does not act as a deterrent to user participation, suggesting that simply removing toxic comments may not be sufficient to mitigate its effects. Instead, the study points to controversy and opinion polarization as drivers of toxicity, so monitoring polarization could be a more effective strategy for early intervention in online discussions. The dynamics are nonetheless multifaceted: other factors, such as the specific subject matter, influential users, posting time, and cultural and demographic characteristics, may also influence toxicity and engagement. While extremely toxic users are rare, small groups of highly toxic and engaged users could still shape conversation dynamics. Future research could explore these aspects in more detail.
Conclusion
This study reveals persistent patterns of toxicity in online conversations across platforms, topics, and time, emphasizing the importance of human behavior in shaping online discourse. Toxicity, as traditionally defined, doesn't necessarily deter user engagement, highlighting the need for nuanced content moderation strategies that go beyond simple removal of toxic comments. The study's findings provide valuable insights for developing more effective and context-aware moderation tools.
Limitations
The study acknowledges several limitations. Political leaning serves as a proxy for broader viewpoints, potentially overlooking nuances in opinion, and the inability to assign political leanings to users on all platforms restricts the generalizability of the controversy analysis. Comparing heterogeneous datasets from different sources and periods risks reductionism, simplifying the unique characteristics of individual discussions; although a robust and widely used methodology was employed, some nuances of particular conversations may not be captured. The focus on active users excludes passive users, who might be discouraged from participating by toxicity. Finally, applying the Perspective API, trained primarily on modern text, to older Usenet data may have introduced biases, although validation experiments mitigate these concerns.