Political Science
Neutral bots probe political bias on social media
W. Chen, D. Pacheco, et al.
The study investigates how social media platform mechanisms and user interactions shape exposure to political information, polarization, and echo chambers on Twitter. Prior work shows social feeds can affect online expression and real-world actions, and that users are sensitive to early social influence. Polarization and echo chambers have been observed around contentious topics such as elections, vaccines, and climate change. Biases can arise from socio-cognitive factors (e.g., confirmation bias and homophily) and platform algorithms (e.g., engagement-driven ranking), with concerns that recommendation systems may steer users toward extreme or misleading content. However, it remains unclear how collective interactions mediated by platforms contribute to ecosystem-level bias, especially given manipulation by bots and trolls. To isolate ecosystem effects from individual-level confounders (e.g., demographics, ideology), the authors deploy neutral, algorithm-controlled Twitter accounts (“drifters”) that differ only in their initial friend’s political alignment. Research questions: (i) How are influence and exposure to inauthentic accounts, political echo chambers, and misinformation impacted by early actions on Twitter? (ii) Can differences be attributed to political bias in the platform’s news feed?
The paper situates its work within literature on polarization and echo chambers in online discourse, documenting ideological segregation and its links to radicalization and misinformation. It reviews socio-cognitive biases (confirmation bias, homophily) and algorithmic biases (engagement-based amplification) that shape online exposure, and cites concerns that recommendation systems (e.g., YouTube) may drive users toward extreme content. It also references evidence of manipulation by bots and trolls in political contexts, and prior measures of online bubbles and exposure biases. Collectively, these studies motivate probing the net bias of the social media information ecosystem while separating user-level confounds from platform and interaction effects.
Design: The authors created 15 neutral “drifter” Twitter bots divided into five groups (3 per group). Each group was initialized by following one popular U.S. news source aligned with a distinct position on the political spectrum: Left (The Nation), Center-Left (The Washington Post), Center (USA Today), Center-Right (The Wall Street Journal), and Right (Breitbart News). Each drifter then added 10 more accounts by randomly following five English-speaking friends and five English-speaking followers of the initial source (11 initial accounts total). With the exception of the first friend’s alignment, all drifters shared the same stochastic behavior model.

Behavior model: Drifters executed random actions (tweet, retweet, like, reply, follow, unfollow) drawn according to predefined probabilities and conditional source choices (e.g., home timeline, trends, tweets liked by friends). Inter-action intervals Δt were sampled from a power-law P(Δt) ∝ Δt^-0.9 with a maximum of 7 hours, scaled to 20–30 actions/day. Drifters were inactive between midnight and 7 a.m. local time. Constraints on follow/unfollow ratios prevented spam-like behavior. Non-English sources were excluded when identifiable. Accounts were clearly labeled as bots to avoid impersonation. The protocol was approved by the Indiana University ethics board with a waiver of informed consent; only 15 drifters were deployed to minimize potential harm.

Data collection: The deployment ran from July 10, 2019 to December 1, 2019, with daily data collection. Measured outcomes included: (1) follower growth; (2) echo-chamber indicators (ego-network density and transitivity); (3) bot-likeness of friends and followers via Botometer; (4) exposure to low-credibility content based on curated domain lists; (5) political alignment of content consumed and produced, and news feed bias.

Political alignment metrics: Alignment scores ranged from −1 (liberal) to +1 (conservative), computed using two independent approaches:
- Hashtag-based: word2vec embeddings trained on 2018 U.S. midterm political tweets; the political axis defined between #voteblue and #votered; projection scaled to [−1,1]. Hashtags appearing fewer than five times were removed (54,533 vectors retained).
- Link-based: domains extracted from URLs were mapped to partisan scores derived from sharing by Twitter accounts linked to registered U.S. voters (∼19k news sources scored in [−1,1]). Tweet- and user-level scores were computed by averaging entity scores.

Daily account-level metrics: exposure alignment s_h from the 50 most recent home-timeline tweets; expressed alignment s_e from the drifter’s 20 most recent tweets; friends’ expressed alignment s_f from 500 collective recent tweets by friends. News feed bias was measured as s_f − s_h.

Low-credibility sources: A domain was labeled low-credibility if it appeared on any of the following lists: (1) the Shao et al. list; (2) Grinberg et al. “Black,” “Red,” or “Satire”; (3) Pennycook et al. “fake news” or “hyperpartisan”; (4) Bovet et al. “extreme Left,” “extreme Right,” or “fake news”; this yielded 570 sources. Links in drifter home timelines were expanded and matched to this list. Breitbart was not labeled low-credibility for this analysis.

Echo chamber measurement: For each drifter, an ego network was approximated by sampling 100 random neighbors from the latest 200 friends and 200 followers. Undirected ties between sampled neighbors were added if a follow existed in either direction. Density and transitivity were computed; transitivity was also normalized against configuration-model shuffles preserving degree sequences.

Statistical analysis: Two-sided t-tests compared follower growth rates across groups, bot score distributions, and ego-network measures. Paired t-tests compared daily s_h vs s_f to assess news feed bias. Effect sizes (Cohen’s d) and p-values are reported for key comparisons. Robustness checks included alternative denominators for low-credibility proportions and correlations between initial friend characteristics and drifter outcomes (details in the Supplementary Information).
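The behavior model’s inter-action timing lends itself to inverse-transform sampling. The sketch below is an illustration, not the authors’ code: the paper states only the exponent (0.9) and the 7-hour cap, so the normalization over (0, 7 h] and the absence of a lower cutoff are assumptions. Under those assumptions the CDF is F(t) = (t/T)^0.1, so a uniform draw u maps to Δt = T·u^10.

```python
import random

MAX_DELAY = 7 * 3600  # 7-hour cap on inter-action intervals, per the paper


def sample_interval(rng: random.Random) -> float:
    """Draw an interval from P(dt) ~ dt^-0.9 truncated at MAX_DELAY.

    With support (0, MAX_DELAY] the CDF is F(t) = (t / MAX_DELAY)**0.1,
    so inverse-transform sampling gives t = MAX_DELAY * u**10.
    (Lower cutoff/normalization are assumptions; see lead-in.)
    """
    u = rng.random()
    return MAX_DELAY * u ** 10


rng = random.Random(42)
samples = [sample_interval(rng) for _ in range(100_000)]
mean_dt = sum(samples) / len(samples)
# Drifters slept midnight-7 a.m., leaving a ~17-hour active day.
actions_per_day = 17 * 3600 / mean_dt
```

With these assumptions the expected interval is MAX_DELAY/11 ≈ 38 minutes, which is consistent with the stated 20–30 actions per day over a 17-hour active window.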
Influence (followers): Partisan-initialized drifters gained more followers than Center drifters. Left vs Center: d.f.=774, t=5.13, p<0.001. Right vs Center: d.f.=773, t=8.00, p<0.001. Right-leaning drifters grew faster than Left-leaning drifters (d.f.=771, t=3.84, p<0.001). Drifter influence correlated with the initial friend’s popularity among ideologically aligned accounts, but not with the initial friend’s overall influence.

Echo chambers: Right drifter ego networks had higher density than Center (d.f.=4, t=8.28, p=0.001); Left vs Center density was marginal (d.f.=4, t=2.68, p=0.055). Transitivity was higher for Right vs Center (d.f.=4, t=9.31, p<0.001) and for Left vs Center (d.f.=4, t=−3.53, p=0.024). After normalizing for density, Right remained more clustered than Center (d.f.=4, t=8.96, p<0.001); Left vs Center was not significant (d.f.=4, t=2.73, p=0.053). Right vs Left: Right had stronger echo chambers (density: d.f.=4, t=3.84, p=0.019; transitivity: d.f.=4, t=−3.02, p=0.039), while the normalized transitivity difference was not significant (d.f.=4, t=0.60, p=0.579). Neighbors’ shared content generally matched the initial friend’s alignment; Left drifters and their neighbors tended toward the center.

Exposure to automated accounts: Followers were more bot-like than friends across groups. Among friends, partisan drifters followed more bot-like accounts than centrists: Right vs Center (d.f.=618, t=6.14, p<0.001); Left vs Center (d.f.=486, t=3.67, p<0.001); Right vs C.Right (d.f.=735, t=3.01, p=0.003); Left vs C.Left (d.f.=541, t=2.56, p=0.011). Right partisans followed slightly more bot-like accounts than Left partisans (d.f.=694, t=2.33, p=0.020).

Exposure to low-credibility content: Right drifters’ timelines contained the highest proportion of low-credibility links, nearly 15%. Differences were significant vs all other groups (Right vs C.Right: t=5.06, p=0.007; vs Center: t=27.47, p<0.001; vs C.Left: t=15.06, p<0.001; vs Left: t=13.14, p<0.001; d.f.=4 in all).
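The group comparisons above report two-sided t statistics with Cohen’s d. As a minimal, stdlib-only sketch (the study does not specify the test variant beyond two-sided t-tests, so the pooled-variance Student form and the data below are illustrative assumptions):

```python
from math import sqrt
from statistics import mean, variance


def ttest_ind(a, b):
    """Two-sided Student t statistic, degrees of freedom, and Cohen's d
    for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    df = na + nb - 2
    # Pooled standard deviation across both samples
    sp = sqrt(((na - 1) * variance(a) + (nb - 1) * variance(b)) / df)
    diff = mean(a) - mean(b)
    t = diff / (sp * sqrt(1 / na + 1 / nb))
    return t, df, diff / sp  # t statistic, d.f., Cohen's d


# Hypothetical daily follower-growth counts for two drifter groups
right = [2, 4, 6, 8]
center = [1, 3, 5, 7]
t, df, d = ttest_ind(right, center)
```

The same routine applies to bot-score distributions and ego-network measures; the reported p-values then follow from the t distribution with the given degrees of freedom.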
Results were robust to alternative denominators. Breitbart was excluded from the low-credibility list to avoid biasing results.

Political alignment trajectories: Right-initialized drifters remained conservative in both exposure (s_h) and expression (s_e). Left-initialized drifters drifted toward the center, becoming exposed to and sharing more moderate/conservative content. These patterns were consistent under both hashtag- and link-based alignment.

News feed bias: Comparing friends’ output (s_f) to home timelines (s_h) revealed little evidence of systematic political bias in the news feed. Hashtag-based alignment showed a small centrist shift for the Right group (paired t-test p<0.001, Cohen’s d=0.56); link-based alignment showed a Left bias for the Center group (p<0.001, Cohen’s d=0.76). Effects for the other groups were small.
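The paired comparison of s_f against s_h can be sketched as follows. This is illustrative stdlib code with made-up daily alignment scores, and it takes Cohen’s d for paired data as the mean difference divided by the standard deviation of the differences (a common convention; the paper does not spell out its exact effect-size formula):

```python
from math import sqrt
from statistics import mean, stdev


def news_feed_bias(s_f, s_h):
    """Paired t-test of daily friends' alignment (s_f) vs. home-timeline
    alignment (s_h). Returns the mean bias s_f - s_h, the paired t
    statistic, and Cohen's d for paired samples."""
    diffs = [f - h for f, h in zip(s_f, s_h)]
    m, sd = mean(diffs), stdev(diffs)
    t = m / (sd / sqrt(len(diffs)))
    return m, t, m / sd


# Hypothetical daily alignment scores in [-1, 1] (one pair per day)
s_f = [0.10, 0.30, 0.20, 0.40]
s_h = [0.00, 0.10, 0.10, 0.20]
bias, t, d = news_feed_bias(s_f, s_h)
```

A positive mean bias would indicate that friends’ output leans more conservative than what the home timeline surfaces; values near zero, as the study mostly found, indicate a feed that mirrors friends’ output.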
The findings indicate that early following choices strongly shape subsequent experiences on Twitter. Despite neutral behavior, drifters developed partisan-dependent differences due to interactions mediated by the platform. Right-initialized drifters became embedded in dense, homogeneous communities consistently exposing them to and prompting them to share Right-leaning content, a feedback loop that may contribute to radicalization when combined with cognitive biases. Conservative echo chambers were denser and included more politically active accounts than centrist or liberal ones. Exposure to low-credibility content was markedly higher for Right-initialized drifters, aligning with prior observations of asymmetric engagement with misinformation. Bots likely contributed: Right-leaning drifters followed more automated accounts, which are known to amplify low-credibility content. Analyses found little consistent evidence of partisan bias in Twitter’s news feed curation; home timeline content largely mirrored friends’ outputs with only small group-specific deviations. Differences in drifter influence and embedding could not be explained by the initial friend’s overall influence but were associated with popularity within ideologically aligned communities, suggesting that echo-chamber structure and partisanship (especially on the Right) drive influence gains. Overall, ecosystem dynamics—user behavior, bot activity, and community structure—can lead neutral agents into partisan echo chambers and exposure to misleading information, even absent platform-level partisan bias.
The study introduces a neutral-bot methodology to audit social media information ecosystems while controlling for user-level confounds. It shows that early connections strongly steer exposure and behavior: neutral agents drift into echo chambers, with conservative-initialized accounts gaining more followers, embedding in denser communities, following more automated accounts, and encountering more low-credibility content. The analysis finds no strong or consistent partisan bias in Twitter’s news feed curation; ecosystem dynamics and possibly unintended policy effects explain observed asymmetries. Neutral algorithms do not necessarily yield neutral outcomes when embedded in partisan user networks. Future research directions include deploying larger numbers of bots, varying initial sources by popularity, activity, and political slant; assessing impacts of major policy/enforcement changes (e.g., takedown of superspreaders), migration to other platforms, and generalization to platforms with different demographics or partisanship; and extending the approach beyond U.S. politics to other countries and to other biases (gender, race, hate speech, algorithmic bias). Designing mechanisms to mitigate emergent biases in online ecosystems remains an open challenge.
- Data access limits: News feed analysis used limited sets of recent tweets from home timelines via the Twitter API; the platform’s personalized ranking and recommendations, friend suggestions, suspensions, and ads could not be evaluated.
- Sample size and scope: Only 15 drifters were deployed due to ethical constraints, limiting statistical power and control over confounders related to initial source selection (influence, popularity among aligned users, activity).
- Potential confounding by initial sources: Differences in initial accounts (e.g., @FoxNews inactivity; Breitbart’s partisanship) may affect outcomes; although Breitbart was not labeled low-credibility, conservative partisanship and misinformation are correlated.
- External validity: The study is U.S.-centric and focused on Twitter in 2019; findings may not generalize to other platforms or political contexts.
- Behavioral realism: Drifter behaviors were neutral and stochastic, not fully realistic human behavior; they did not comprehend content.
- Ethical considerations: Even neutral bots can inadvertently share misinformation or reinforce echo chambers; deployment was minimized and constrained to reduce potential harm.
- Real-world effects: The study measures online behaviors and cannot infer impacts on offline attitudes or actions.