Political Science
How do social media feed algorithms affect attitudes and behavior in an election campaign?
A. M. Guess, N. Malhotra, et al.
Moving consenting users from algorithmic Facebook and Instagram feeds to reverse-chronological timelines during the 2020 US election reduced time spent on the platforms and user activity, increased exposure to political and untrustworthy content, and decreased exposure to uncivil content on Facebook, yet left polarization, political knowledge, and attitudes largely unchanged over three months.
~3 min • Beginner • English
Introduction
The study examines how machine-learning feed-ranking algorithms used by social media platforms affect political attitudes and behaviors during an election campaign. Public and scholarly debates have posited that such algorithms may create filter bubbles, promote polarization, exacerbate inequalities, and facilitate disinformation, but the systems are opaque and their effects contested. Feed-ranking systems personalize the order of content based on user behavior and predictions, potentially producing complex and heterogeneous impacts on exposure. To assess the total impact of these algorithms, the authors use reverse-chronological ordering as a well-understood counterfactual, allowing isolation of the role of algorithmic ranking. The primary research question is how a Chronological Feed affects the content people see, with hypotheses focused on polarization (issue and affective), political knowledge, and political participation.
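To make the counterfactual concrete, here is a minimal sketch of the two ordering rules being compared. The Post fields and the single predicted-engagement score are hypothetical simplifications for illustration; real feed-ranking systems combine many signals, and this is not Meta's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # hypothetical model score, stand-in for many signals

def algorithmic_feed(posts: list[Post]) -> list[Post]:
    # Personalized ranking: order by a predicted-engagement score.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The study's counterfactual: strictly most recent first.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

The same set of posts flows through both functions; only the ordering differs, which is what lets the experiment attribute differences in exposure and behavior to ranking alone.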
Literature Review
Prior quantitative research often treats social media as a bundle of features (algorithms, social interactions, reshares) or manipulates exposure to specific sources rather than isolating algorithmic ranking effects. Theoretical concerns include algorithmic reinforcement of like-minded content (selective exposure), echo chambers/filter bubbles, and impacts on affective polarization (outgroup attitudes). Research also highlights incidental exposure to news on social media and the potential for engagement-optimized systems to shape information consumption. Studies show social media can influence knowledge and participation, lower mobilization costs, and that cross-cutting exposure may demobilize by generating ambivalence. The paper situates its experiment within these debates, aiming to attribute observed political effects specifically to personalized feed-ranking algorithms.
Methodology
The authors conducted randomized controlled experiments on Facebook and Instagram during the 2020 US presidential election. Recruitment occurred via survey invitations placed atop users' feeds in August 2020. Eligible participants resided in the United States, were at least 18 years old, and consented to data collection; they were invited to complete five surveys (late August, mid-September, mid-October, post–Election Day November, and mid-December 2020), share platform activity, and allow passive tracking of off-platform internet activity. Participants could withdraw at any time until data were delinked from identifiers.

Participants were randomized to either (1) the Algorithmic Feed (status quo control) or (2) the Chronological Feed treatment, active 24 September to 23 December 2020, which ranked posts strictly by most recent publication time. (Engagement, user satisfaction, and news originality are key signals used by Facebook's feed-ranking algorithm.) The intervention affected the selection and placement of posts from connected accounts (friends, Pages, Groups) but not advertisements; approximately 80% of material presented to respondents was subject to the manipulation.

Sample sizes were Facebook n = 23,391 and Instagram n = 21,373, with power sufficient to detect small effects (e.g., Cohen's d ≈ 0.032 for affective polarization). Attrition (19.5% completed no posttreatment survey) did not differ significantly by condition (Facebook p = 0.83; Instagram p = 0.35). The design, measures, and analyses were preregistered at OSF. The main estimand was the population average treatment effect (PATE), estimated with survey weights based on predicted ideology, friend count, number of political Pages followed, activity days, and other variables; unweighted sample average treatment effects (SATE) were also reported, and effect heterogeneity was generally limited. Analyses covered on-platform behavioral metrics (time spent, likes/comments, exposure composition, network coverage), off-platform substitution (mobile and browser usage), content classification (political content, ideological source categories, untrustworthy sources, incivility, slur words), and survey-based outcomes (polarization, knowledge, participation, and attitudes).
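As a concrete illustration of the two estimands, the sketch below computes an unweighted difference in means (SATE) and a survey-weighted difference in means (PATE-style). All data are simulated placeholders; the outcome, weights, and 50/50 assignment are assumptions for illustration, not the authors' preregistered estimation pipeline.

```python
import numpy as np

def ate(outcome, treated, weights=None):
    """(Optionally weighted) difference in mean outcomes, treatment minus control."""
    outcome = np.asarray(outcome, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    w = np.ones_like(outcome) if weights is None else np.asarray(weights, dtype=float)

    def group_mean(mask):
        return np.average(outcome[mask], weights=w[mask])

    return group_mean(treated) - group_mean(~treated)

rng = np.random.default_rng(0)
n = 23_391                            # Facebook sample size reported above
treated = rng.random(n) < 0.5         # hypothetical 50/50 random assignment
y = rng.normal(0.0, 1.0, n)           # placeholder outcome in SD units
w = rng.uniform(0.2, 5.0, n)          # hypothetical survey weights

print(f"SATE (unweighted):     {ate(y, treated):+.4f}")
print(f"PATE-style (weighted): {ate(y, treated, w):+.4f}")
```

Weighting toward the platform population is what distinguishes the PATE from the SATE here; with simulated noise and a true effect of zero, both estimates land near zero, as expected.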
Key Findings
- Time spent and engagement:
- Facebook: Algorithmic group spent 73% more time per day than US monthly active users; Chronological group 37% more (p < 0.005). Like rate: Algorithmic 6.7% of exposures vs Chronological 3.1% (p < 0.005).
- Instagram: Algorithmic group spent 107% more time vs US monthly active users; Chronological group 84% more (p < 0.005). Likes and comments were significantly lower in Chronological (p < 0.005).
- Substitution: Instagram users in Chronological increased mobile time on TikTok by 36% (2.19 hours) and YouTube by 20% (5.63 hours) over the study period (p < 0.05). Facebook users in Chronological increased time on Instagram by 17% (1.24 hours) (p < 0.05). Browser: For Facebook, reddit.com visits +52% (16.2 visits, p < 0.005) and youtube.com +21% (50.1 visits, p < 0.05).
- Exposure and network composition:
- Facebook: Share of content from friends decreased by 24 percentage points; coverage of friends/Pages/Groups seen decreased (p < 0.005).
- Instagram: Share of content from mutual follows decreased by 5 percentage points; coverage of mutual follows decreased (p < 0.005).
- Content mix (average unweighted proportions; the reported percentage changes follow from these proportions, as in the sketch after this list):
- Facebook:
- Political content: Algorithmic 0.135 vs Chronological 0.155 (+15.2%).
- Cross-cutting sources: 0.207 vs 0.187 (−9.3%).
- Like-minded sources: 0.537 vs 0.481 (−10.4%).
- Moderate/mixed sources: 0.226 vs 0.309 (+36.7%).
- Political news: 0.062 vs 0.087 (+39.5%).
- Untrustworthy sources: 0.026 vs 0.044 (+68.8%).
- Uncivil content: 0.032 vs 0.018 (−43.0%).
- Slur words: 0.00034 vs 0.00019 (−44.1%).
- Instagram:
- Political content: 0.053 vs 0.056 (+4.8%).
- Untrustworthy sources: 0.013 vs 0.016 (+22.1%); erratum indicates control likely 0.014, not 0.013.
- Uncivil content: 0.016 vs 0.016 (no meaningful change).
- Slur words: 0.00024 vs 0.00025 (+4.2%).
- Primary outcomes (PATE, SD units, adjusted for multiple comparisons):
- No significant differences for affective polarization, issue polarization, election knowledge, or news knowledge on either platform (p ≈ 1 or p > 0.63).
- Self-reported political participation and turnout: No significant effects on Facebook (p = 1.0 for participation; p = 1.0 for turnout) or Instagram (p = 1.0 for participation; p = 0.64 for turnout).
- On-platform political engagement decreased: Facebook −0.117 SD (p < 0.005); Instagram −0.090 SD (p < 0.005).
- Secondary outcomes:
- No significant effects on factual discernment, trust in traditional/social media, confidence in institutions, perceived polarization, epistemic political efficacy, belief in election legitimacy, or support for political violence.
- Exception (Facebook): Increased clicks on political news from likely partisan sources (+0.107 SD, p < 0.01), driven by increased exposure to partisan news links.
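The relative changes in the content-mix items above are straightforward arithmetic on the reported proportions, as this minimal check shows; small discrepancies against the reported percentages reflect rounding of the displayed proportions.

```python
# Relative change of the Chronological (treatment) mean versus the
# Algorithmic (control) mean, using the rounded proportions reported above.
def rel_change(control: float, treatment: float) -> float:
    return (treatment - control) / control

# Facebook untrustworthy sources: 0.026 -> 0.044 (reported +68.8%)
print(f"{rel_change(0.026, 0.044):+.1%}")  # +69.2% from the rounded values
# Facebook uncivil content: 0.032 -> 0.018 (reported -43.0%)
print(f"{rel_change(0.032, 0.018):+.1%}")  # -43.8% from the rounded values
```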
Discussion
Algorithmic feed-ranking systems strongly shape users' on-platform experiences: the Chronological Feed reduced time spent and engagement, shifted exposure toward Pages/Groups (on Facebook), increased political and untrustworthy content, and decreased uncivil and slur-containing content. Despite these substantial changes, there were no detectable effects on individual-level political attitudes (affective or issue polarization), knowledge, or offline participation. Potential explanations include the need for longer interventions, the unique dynamics of an election campaign period in a polarized context, or differences that might arise with alternative ranking systems or in other political environments. The intervention affected multiple aspects of experience (time on platform, network exposure, content types, and cross-platform substitution), which may have offsetting downstream effects. The design estimates direct effects on treated individuals and cannot address general equilibrium dynamics whereby ranking algorithms alter content producer behavior and broader network feedback. Findings temper expectations that feed-ranking algorithms directly cause polarization or major shifts in knowledge/participation, while highlighting that algorithms do influence content exposure and online political engagement.
Conclusion
Replacing Facebook’s and Instagram’s machine-learning feed-ranking algorithms with reverse-chronological ordering substantially altered user experience and reduced on-platform political engagement but did not produce detectable changes in polarization, political knowledge, or offline participation during the 2020 US election period. These results suggest that feed algorithms may not be the primary drivers of observed trends like rising polarization at the individual level, underscoring the need to investigate other online and offline factors (e.g., advertising incentives, partisan media, demographic and geographic changes). Future research should examine potential effects on agenda setting, users’ curation habits, mobilization and coordination mechanisms, and system-wide (general equilibrium) impacts, as well as longer-term interventions and different political contexts.
Limitations
The intervention lasted approximately three months; longer exposure might be necessary to detect downstream effects. The study occurred during a highly salient US election campaign, limiting generalizability to other periods or countries. The design assesses direct effects on individuals and cannot capture general equilibrium dynamics (e.g., changes in content producer behavior or network feedback). Proposed mechanisms (e.g., knowledge change, cross-cutting exposure effects, mobilization cost changes) could not be tested as mediators because they occur post-treatment. Multiple aspects of user experience changed simultaneously (time on platform, network exposure, content types, cross-platform substitution), potentially yielding offsetting effects on attitudes and behavior. Sample participants were more active than average users; while weighting addresses representativeness, estimates may reflect highly engaged subsets. An erratum (posted 12/05/2024) notes a processing error affecting Instagram metrics for exposure to and engagement with untrustworthy sources, caused by deleted accounts being retroactively excluded when the datasets were created in 2022; the control-group proportion is likely 0.014 (not 0.013), with limited impact on treatment effects and no effect on Facebook metrics.