Blocking of counter-partisan accounts drives political assortment on Twitter

Political Science

C. Martel, M. Mosleh, et al.

Field experiments on Twitter reveal that political assortment is not just due to homophily: people also actively prevent cross-partisan ties by blocking. Using human-like bot accounts, users were 12 times more likely to block counter-partisan accounts than copartisan accounts in the first study, and 4 times more likely than neutral or copartisan accounts in the second. A survey replication showed that blocking is often aimed at avoiding unwanted content, with Democrats blocking counter-partisans more, due in part to low-quality or slanted posts. Research conducted by Cameron Martel, Mohsen Mosleh, Qi Yang, Tauhid Zaman, and David G. Rand.
Introduction

The study investigates how political assortment on social media forms not only through preferential tie formation (homophily) but also through active prevention of social ties via blocking. While prior work shows Americans preferentially connect with copartisans offline and online, causal mechanisms beyond homophily are understudied. The authors propose that blocking—a proactive measure that prevents future social interactions and content exposure—may be a key driver of partisan network segregation. They ask: (i) Do Twitter users block counter-partisan accounts more than copartisan or neutral accounts? (ii) Are there partisan asymmetries in blocking between Democrats and Republicans, and why? (iii) What reasons do individuals give for blocking? The work’s importance lies in expanding the causal foundations of political assortment to include tie prevention behaviors that shape information flow and echo chambers online.

Literature Review

Prior research documents strong partisan assortment in offline interactions, geographic residence, and family ties, as well as on social media platforms such as Facebook and Twitter. Experimental work has established causal effects of homophily in domains like dating, economic interactions, residential preferences, and social tie reciprocation on Twitter, where copartisan follow-back rates are substantially higher than counter-partisan. Additional mechanisms proposed include acrophily (preference for more extreme views) and tie dissolution among dissimilar individuals. Blocking as active tie prevention has been relatively overlooked; limited prior work (a survey experiment in the U.S. and a field experiment in Brazil) suggests greater blocking of counter-partisans. Mixed evidence exists about partisan asymmetries in online assortment. The present study addresses these gaps by experimentally examining counter-partisan blocking, its magnitude relative to tie formation, partisan asymmetries, and underlying motivations.

Methodology

The authors conducted two large-scale Twitter field experiments and a survey experiment, supplemented by additional surveys on content-based blocking.

Field Experiment 1 (Dec 2–11, 2020): Ten human-looking bot accounts were created (5 Democrat-identified, 5 Republican-identified) with GAN-generated profile photos and White, male-appearing profiles. Bots indicated partisanship in bios and retweeted aligned mainstream news every ~3 days (Democratic: MSNBC, Washington Post, NBC News, The Atlantic; Republican: Fox News, The Dispatch, National Review, American Conservative). Each bot had ~500 neutral followers to appear authentic. The team identified politically active users via partisan hashtags, estimated user partisanship based on shared content, and excluded high-status/inactive or unclassifiable accounts. Using stratified randomization (by follower count [log], recent activity, baseline reciprocity, user partisanship, and extremity), n=2,010 users were randomly assigned to be followed by a copartisan or counter-partisan bot. Outcomes were whether users followed back or blocked the bot. Analysis used linear regression with counter-partisanship, user partisanship, and interaction terms.
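The FE1 analysis is a linear probability model regressing blocking on counter-partisanship, user partisanship, and their interaction. A minimal sketch on simulated data (the rates, coding, and variable names here are illustrative assumptions, not the authors' code or data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2010

# counter = 1 if the user was followed by a counter-partisan bot, 0 if copartisan
counter = rng.integers(0, 2, n).astype(float)
# user_party: -1 = Democrat, +1 = Republican (the paper uses a z-scored continuous score)
user_party = rng.choice([-1.0, 1.0], n)

# Simulated blocking probabilities loosely matching FE1's reported pattern:
# near-zero copartisan blocking, elevated counter-partisan blocking,
# larger among Democrats (hence a negative interaction).
p_block = 0.005 + 0.062 * counter * (user_party == -1) + 0.031 * counter * (user_party == 1)
blocked = (rng.random(n) < p_block).astype(float)

# Linear probability model: blocked ~ counter + user_party + counter:user_party
X = np.column_stack([np.ones(n), counter, user_party, counter * user_party])
beta, *_ = np.linalg.lstsq(X, blocked, rcond=None)
intercept, b_counter, b_party, b_interaction = beta
print(f"counter effect: {b_counter:.3f}, interaction: {b_interaction:.3f}")
```

With this setup the counter-partisanship coefficient recovers a positive blocking effect and the interaction is negative, mirroring the direction (not the exact magnitudes) of the reported estimates.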

Field Experiment 2 (Mar 9–May 10, 2022): Design mirrored FE1 but added a politically neutral bot condition (retweeting Reuters, NPR, BBC, AP). Candidate users retweeted MSNBC or Fox News; the same stratification variables and day-level assignment balance were used. Bots had ~250 initial neutral followers. n=2,003 users were followed by copartisan, neutral, or counter-partisan bots. Outcomes and analytic strategy (linear models with shared partisanship indicators, user partisanship, and interactions) matched FE1.

Survey Experiment (preregistered): Recruited n=606 U.S. Twitter users via Lucid (quotas approximating Twitter demographics). Participants saw a simulated Twitter notification that an account followed them. The account randomly appeared Democrat-favoring (#DefendingOurDemocracy #BidenHarris2024), Republican-favoring (#MAGA #Trump2024), or neutral (Product Manager | Amateur chef | #Photographer). A prior-engagement manipulation indicated whether the account had liked three of the participant’s recent tweets. Profiles showed no posting history to isolate identity cues. Participants chose to follow back, ignore, or block, and then reported their primary reason if blocking (from nine pre-specified options). Linear models predicted blocking with copartisan and counter-partisan indicators, participant partisanship (z-scored), prior-engagement, and interactions.

Supplementary surveys: A larger survey (n=3,057, Lucid) asked about likelihood to mute/block/unfollow users sharing various content types (e.g., inaccurate/false, mean/nasty, racist/sexist, “woke/cancel culture,” religious doubt), and reactions to users praising out-party vs criticizing in-party (for elites and partisans). Another supplementary experiment tested reactions to an explicitly toxic trolling profile.

Content characterization: To probe mechanisms for partisan asymmetries, the authors quantified bot-retweeted content quality using domain-level quality ratings, partisan slant using domain lean measures, and toxicity using Google Perspective API. Comparisons were made between Democratic and Republican bots in both field experiments.
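The toxicity step uses Google's Perspective API. A minimal sketch of the request that API expects (the endpoint and JSON shape follow the public `comments:analyze` interface; the key handling and example text are placeholders, and the authors' exact pipeline is not described here):

```python
import json

# Hypothetical sketch: scoring a piece of retweeted text for toxicity with
# Google's Perspective API. The request is built but not sent here;
# PERSPECTIVE_API_KEY below is a placeholder.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_toxicity_request(text: str) -> dict:
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_toxicity_request("Example retweeted headline text")
print(json.dumps(payload, indent=2))

# Sending would look like:
#   requests.post(f"{PERSPECTIVE_URL}?key={PERSPECTIVE_API_KEY}", json=payload)
# with the score read from:
#   response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```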

Ethics: Field experiments received COUHES waiver (MIT #1907910465). Survey experiments received exempt evaluations (MIT COUHES E-4690). Data and materials are available at https://osf.io/46aqr/.

Key Findings
  • Preferential tie formation replicated: In FE1, users were significantly less likely to follow back a counter-partisan bot than a copartisan bot (b=-0.048, SE=0.010, t(2006)=-4.905, P<0.001), with no partisan asymmetry in follow-back.
  • Blocking is a powerful driver of assortment: In FE1, users were roughly 12 times more likely to block counter-partisan than copartisan accounts (b=0.057, SE=0.008, t(2006)=7.247, P<0.001). A marked asymmetry emerged: Democratic users were about 26 times more likely to block Republican bots than Democratic bots, whereas Republican users were about 3 times more likely to block Democratic bots than Republican bots (interaction b=-0.031, SE=0.008, t(2006)=-3.893, P<0.001).
  • FE2 with neutral control: Users were about 4.32 times more likely to block a counter-partisan account than a politically neutral account (b=0.068, SE=0.008, t(3994)=8.993, P<0.001), with no difference between copartisan and neutral (b=-0.005, SE=0.008, P=0.487). Follow-back was 1.47 times higher for copartisan vs neutral (b=0.055, SE=0.012, t(3994)=4.582, P<0.001) and significantly lower for counter-partisan vs neutral (b=-0.072, SE=0.012, t(3994)=-6.013, P<0.001). The asymmetry replicated: Democratic users were ~4.4 times more likely to block counter-partisan vs neutral accounts, Republican users ~3.6 times (interaction b=-0.042, SE=0.008, t(3994)=-5.486, P<0.001).
  • Survey replication: Participants were about 3 times more likely to block a counter-partisan profile vs neutral (b=0.089, SE=0.027, t(605)=3.31, P=0.001); no difference between copartisan and neutral (b=0.006, SE=0.025, P=0.803). No significant partisan asymmetry in the survey (interaction b=0.023, SE=0.027, P=0.396), and the asymmetry was significantly smaller than in FE2 (z=2.379, P=0.017).
  • Reasons for blocking: The most common reason was to avoid seeing the blocked user’s content (27%), significantly more common than blocking to prevent that user from seeing one’s own content (5.2%; χ²=88.67, P<0.001). Across two survey experiments, 96 of 1,240 participants chose to block, and their stated reasons emphasized feed curation over self-protection or identity-based motives.
  • Content quality and asymmetry: Republican bots retweeted lower-quality, more politically slanted, and more toxic content than Democratic bots (FE1: Quality b=-0.473, P<0.001; Slant b=1.787, P<0.001; Toxicity b=0.118, P<0.001. FE2: Quality b=-0.639, P<0.001; Slant b=1.492, P<0.001; Toxicity b=0.196, P<0.001). A separate survey (n=3,057) showed Democrats were more likely to block users sharing inaccurate/false, mean/nasty, or racist/sexist/hate speech content (P<0.001), whereas Republicans were more likely to block users posting “woke/cancel culture” content or doubting God’s existence (P<0.001). Participants were more likely to block praise of the out-party than criticism of the in-party (P<0.001), especially among Democrats. Democrats were also more likely to block an explicitly toxic trolling account in a supplementary experiment.
  • Comparative magnitude: Effects on blocking were as large or larger than on follow-back. For example, in FE2, copartisan follow-back was 1.47x vs neutral, whereas counter-partisan blocking was ~4x vs neutral, underscoring the importance of tie prevention in shaping network assortment.
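As a back-of-envelope consistency check, the multiplicative ratios and linear-model coefficients above jointly imply baseline rates: FE1's 12x blocking ratio together with the +0.057 effect implies a copartisan blocking rate of roughly 0.5% (these implied rates are inferred from the reported numbers, not stated in the summary):

```python
# Back-of-envelope: infer implied blocking rates from a reported ratio and
# linear-model effect. If counter/copartisan = 12 and counter - copartisan = 0.057:
#   12 * p = p + 0.057  ->  p = 0.057 / 11
ratio, effect = 12.0, 0.057
p_copartisan = effect / (ratio - 1)
p_counter = p_copartisan * ratio
print(f"implied copartisan blocking rate: {p_copartisan:.4f}")       # ~0.0052
print(f"implied counter-partisan blocking rate: {p_counter:.4f}")    # ~0.0622
```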

Discussion

The findings demonstrate that proactive prevention of social ties via blocking is a major causal contributor to political assortment on Twitter, complementing and in some cases exceeding the impact of preferential tie formation. Users systematically block counter-partisan accounts at much higher rates than copartisan or neutral accounts, directly addressing the research question about selective tie prevention. The presence of a robust partisan asymmetry in the field—Democrats blocking Republicans more than vice versa—appears to be driven by differences in the content shared by the bot accounts (lower quality, greater slant, and higher toxicity from right-leaning sources) rather than identity alone. Survey evidence indicates that users primarily block to curate the content they see, suggesting informational rather than purely affective or defensive motives. This reframes blocking as an information-filtering behavior that shapes exposure and network structure, implying that models of online political dynamics must incorporate tie prevention alongside initiation and dissolution. Practically, the heightened likelihood of counter-partisan blocking suggests that interventions or information campaigns may achieve greater reach when delivered via congenial or neutral messengers, as cross-partisan messengers are more likely to be blocked before engagement.

Conclusion

This paper shows that selective blocking of counter-partisan accounts is a key, previously underappreciated mechanism driving political assortment on social media. Across two Twitter field experiments and a survey replication, counter-partisan accounts were blocked far more often than copartisan or neutral accounts, with effects comparable to or larger than those for follow-back. The observed partisan asymmetry in the field—greater counter-partisan blocking by Democrats—likely reflects content differences rather than identity-based preferences alone. Users primarily block to avoid seeing others’ content, highlighting informational curation as a dominant motivation. Future research should: (i) incorporate tie prevention into formal models and simulations of network evolution and information flow; (ii) test generalizability across platforms and contexts, including varied demographic presentations of accounts; (iii) examine downstream effects of selective blocking on knowledge, toxicity, and polarization; and (iv) evaluate platform design choices that alter the costs and forms of tie prevention (e.g., blocking vs muting) and their societal implications.

Limitations
  • Bot demographics: Experimental accounts presented as White, male profiles; results may differ by perceived race, gender, or other demographics.
  • Platform and temporal scope: Studies conducted on Twitter (now X) with politically active users in 2020 and 2022; results may not generalize to other platforms or to current platform norms.
  • Sample representativeness: Field users and survey participants (Twitter users recruited via Lucid) are not representative of all Twitter users or the U.S. population.
  • Context dependence: Blocking magnitudes may vary with offline political events (elections) or platform policy changes.
  • Downstream outcomes: The work does not directly assess the consequences of selective blocking on information quality, toxicity, or affective polarization.
  • Motivation heterogeneity: Although content curation dominates stated reasons, blocking may serve other functions (e.g., harassment reduction) for different account types or contexts.