Interdisciplinary Studies
Angry by design: toxic communication and technical architectures
L. Munn
The paper asks how platform design architectures may facilitate toxic communication by promoting polarizing, impulsive, or antagonistic behavior. Rather than the prevailing approaches of automated detection and human moderation, with their shared assumption that hate speech stems from inherently hateful individuals, it adopts a design-centric lens: platforms are treated as deliberately constructed environments whose interfaces, algorithms, and affordances invite certain forms of participation and suppress others. Given the documented rise of online hate, the real-world harms associated with social media–enabled hate (e.g., violence linked to content on Facebook, Gab, and 8chan), and evidence that social media amplifies anger and misinformation, understanding the role of design is urgent. The study focuses on Facebook and YouTube because of their global scale, everyday influence, and repeated links to toxic speech and radicalization.
Prior work on automated moderation highlights substantial investment in toxicity detection but also fundamental limits of AI in grasping cultural nuance, context, and power dynamics. Human moderation has expanded but carries severe psychological tolls and productivity pressures. Designers and technologists have acknowledged harmful byproducts of platform design (addictiveness, privileging base impulses), and empirical studies show that falsehoods spread faster than truth and that anger spreads contagiously on social media. Journalistic and scholarly accounts link platform affordances and incentive structures to escalating anger and eventual hate speech. Documented cases connect online hate to offline violence (e.g., El Paso, Pittsburgh, Christchurch; anti-Rohingya content on Facebook), and research and UN reports warn that online hate can incite grave offline harms. Scholarship on radicalization suggests pipelines wherein users move from milder to more extreme content online. A nascent design ethics discourse proposes humane, well-being-oriented platforms. The paper situates itself within this literature by foregrounding how interface architectures and algorithmic curation shape discourse.
The study employs a qualitative, design-centric analysis of two large platforms (Facebook and YouTube). It identifies and interrogates key design elements (Facebook’s News Feed; YouTube’s recommendation system and comment features), asking how they operate, what logics guide them, and what communicative behaviors they afford or inhibit. The analysis synthesizes platform documentation, prior technical descriptions (e.g., YouTube’s recommendation architecture), reporting, and secondary literature from designers, engineers, and researchers. It is supplemented by two unstructured interviews, one with a young social media user and another with a former online community manager, providing vernacular perspectives on how design is experienced and navigated in practice. The approach emphasizes how engagement-driven metrics embedded in interfaces and algorithms create feedback loops that shape user behavior and discourse.
- Facebook: The News Feed is the central interface, algorithmically prioritizing content by engagement rather than chronology. High-engagement content is frequently incendiary and polarizing, and the platform’s own internal research found that such divisive content was being fed to users because it captures attention and time-on-platform. This creates a stimulus-response loop: outrage-inducing content is surfaced, prompting reactive sharing that triggers “outrage cascades” and normalizes antagonistic discourse. Design affordances make sharing effortless, lowering barriers to expressing outrage and reinforcing it. Facebook’s scale (2.41 billion monthly active users) and average use time (roughly 58 minutes/day) magnify these effects (see the News Feed sketch following this list).
- YouTube: Recommendations are the primary gateway to content; over 70% of watch time comes from recommended videos. The recommendation system operates in two stages (candidate generation, then ranking) to maximize engagement and predict the next watched video in real time (see the two-stage sketch following this list). Engagement-optimized recommendations disproportionately elevate borderline, incendiary, or divisive content, establishing feedback loops that incentivize creators to produce more of it. Dynamic next-video prediction favors sequences of self-similar but progressively more intense content, nudging users toward more extreme material over time. Empirical work finds consistent migration from milder to more extreme content (e.g., an audit of ~330,925 videos across 349 channels documenting shifts from Alt-Lite to Alt-Right), and users report that prolonged exposure shapes worldview and increases anger.
- YouTube comments: The comment system’s design (e.g., vote dynamics that reward any engagement; weak identity and reputation signals) surfaces provocative, polarizing comments and has contributed to a reputation for toxicity, including predatory behavior in comment threads on minors’ videos (see the comment-ranking sketch following this list).
- Values and incentives: Across both platforms, engagement-centric incentives embedded in design produce harmful outcomes: because divisive content is more engaging, design that optimizes for engagement systematically privileges it, to the detriment of discourse quality and vulnerable communities.
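The engagement-over-chronology logic of the News Feed can be made concrete with a minimal sketch. The `Post` fields and the weights below are hypothetical illustrations, not Facebook’s actual ranking model, which is proprietary and far more complex.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    reactions: int = 0
    comments: int = 0
    shares: int = 0

def engagement_score(post: Post) -> float:
    """Hypothetical weighting: comments and shares (effortful, contagious
    actions) count more than passive reactions."""
    return post.reactions + 2.0 * post.comments + 3.0 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Engagement-sorted feed: whatever provokes the most interaction
    # rises to the top, regardless of recency or accuracy.
    return sorted(posts, key=engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The neutral baseline the paper contrasts against: newest first.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

The point of the contrast is that nothing in `engagement_score` distinguishes compelling from incendiary; since outrage-inducing posts tend to accumulate more comments and shares, an engagement sort systematically promotes them.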
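The two-stage structure (candidate generation, then ranking) attributed to YouTube’s recommender can be sketched in the same spirit. The tag-overlap similarity and the watch-time proxy below are stand-ins for the real learned models, which are proprietary deep-learning pipelines; only the two-stage shape and the engagement objective come from the paper’s description.

```python
import heapq

def candidate_generation(user_history: list[str],
                         corpus: dict[str, set[str]],
                         n_candidates: int = 100) -> list[str]:
    """Stage 1: winnow a huge corpus down to a few hundred candidates,
    here by naive tag overlap with the user's watch history (a stand-in
    for learned embeddings). This is what makes recommendations
    self-similar to what was just watched."""
    watched_tags: set[str] = set()
    for video_id in user_history:
        watched_tags |= corpus.get(video_id, set())
    unseen = [v for v in corpus if v not in user_history]
    return heapq.nlargest(n_candidates, unseen,
                          key=lambda v: len(corpus[v] & watched_tags))

def ranking(candidates: list[str],
            predicted_watch_time: dict[str, float],
            k: int = 10) -> list[str]:
    """Stage 2: score each candidate by expected engagement (here a
    supplied watch-time prediction) and keep the top k. Optimizing this
    objective is what elevates 'borderline' content that holds attention."""
    return heapq.nlargest(k, candidates,
                          key=lambda v: predicted_watch_time.get(v, 0.0))
```

Because both stages optimize pure engagement proxies, each recommended video resembles the last but is selected to hold attention even harder, which is the escalation dynamic the paper describes.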
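The comment-system criticism can likewise be illustrated with a hypothetical “heat” sort in which every interaction counts as engagement; the formula and examples are assumptions, not YouTube’s actual comment-ranking logic.

```python
def comment_heat(upvotes: int, downvotes: int, replies: int) -> float:
    """Hypothetical score where any engagement is rewarded: a comment
    that splits the room (many upvotes AND downvotes, long reply chains)
    outranks one that is merely approved of."""
    return upvotes + downvotes + 2 * replies

comments = [
    ("measured, on-topic remark", 40, 2, 1),      # heat = 44
    ("provocative, polarizing remark", 35, 30, 25),  # heat = 115
]
for text, up, down, replies in sorted(
        comments, key=lambda c: comment_heat(*c[1:]), reverse=True):
    print(f"{comment_heat(up, down, replies):>6.1f}  {text}")
```

Under such a scheme the polarizing comment wins despite being less approved, which is the surfacing dynamic the paper links to the platform’s reputation for toxic threads.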
Findings support the central hypothesis that platform design architectures are not neutral and actively shape discourse toward toxic communication when optimized for engagement. Facebook’s engagement-sorted Feed amplifies outrage content and facilitates rapid, low-effort propagation, normalizing antagonistic interactions. YouTube’s engagement-driven, next-video recommendations steer users along trajectories that often intensify political and cultural extremity, while its comment system rewards provocative contributions. Together, these architectures create feedback loops that influence users gradually but powerfully over extended periods of media consumption. The paper also addresses counter-arguments (e.g., supply–demand explanations) by emphasizing sociotechnical dynamics and the cumulative cognitive effects of inhabiting algorithmically curated environments. Design implications include reorienting values away from raw engagement toward well-being, civility, and diversity of exposure, and reconfiguring affordances to slow impulsive reactions, add friction, foster empathy, and strengthen accountability mechanisms. This reframing positions design as a lever to mitigate toxic communication without relying solely on detection/removal or scaling human moderation.
The study demonstrates that engagement-optimized design choices on Facebook and YouTube contribute to toxic communication by privileging incendiary content, easing outrage expression, and nudging users toward more extreme material. It proposes concrete design avenues: for Facebook, experimenting with alternative Feed priorities (e.g., hyperlocal content or close ties) and adding temporal or empathetic prompts that slow reactive posting and encourage reflection; for YouTube, broadening or diversifying recommendations, incorporating long-term well-being signals, elevating curated, human-driven playlists, and redesigning comments around reputation and accountability systems (a sketch of one such friction prompt follows below). While economic incentives tied to engagement and advertising may impede such redesigns, advertiser backlash and user harms point to the need for alternative value structures. The paper’s primary contribution is to center design as a causal and actionable factor in toxic communication and to outline practical redesign directions. Future research should quantify design influence across demographics and extend the design-focused analysis to additional platforms (e.g., Reddit, TikTok, 4chan).
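One proposed intervention, adding temporal friction before reactive posts, could look like the minimal sketch below. The keyword heuristic, message wording, and cooldown length are placeholder assumptions; a deployed system would presumably use a trained classifier and tested UX copy.

```python
import time

# Placeholder heuristic; a real system would use a trained toxicity model.
INFLAMMATORY_MARKERS = {"disgusting", "traitor", "destroy", "idiots"}

def compose_post(text: str, cooldown_seconds: int = 30) -> bool:
    """Insert a reflective pause before publishing posts that look
    reactive, instead of making outrage one frictionless click."""
    if set(text.lower().split()) & INFLAMMATORY_MARKERS:
        print("This post may read as hostile. Take a moment to reread it.")
        time.sleep(cooldown_seconds)  # temporal friction
        answer = input("Still want to publish? [y/N] ")
        if answer.strip().lower() != "y":
            print("Draft saved, not published.")
            return False
    print("Published.")
    return True
```

The design choice is deliberate asymmetry: calm posts keep the existing low-friction path, while likely-reactive posts pay a small time cost, directly inverting the effortless-outrage affordance criticized in the findings.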
The analysis is qualitative and design-focused, relying on secondary sources, public technical descriptions of proprietary systems, and two unstructured interviews; it cannot fully resolve black-box algorithmic behaviors. The degree and mechanisms of design influence on individuals—and how effects vary by age, gender, class, culture, and context—are not precisely measured. The study covers only two platforms and selected features (Feed, recommendations, comments), limiting generalizability. Causal claims about radicalization pathways are supported by correlational and audit studies but not randomized trials. The proposed interventions are speculative and not empirically validated within this work.