
Political Science

Measuring exposure to misinformation from political elites on Twitter

M. Mosleh and D. G. Rand

Discover how Mohsen Mosleh and David G. Rand measured Twitter users' exposure to misinformation from political elites. Their study reveals intriguing correlations between misinformation exposure and user behavior, especially among conservatives, offering actionable insights into the digital media landscape.

Introduction
The proliferation of misinformation on social media is a growing concern. Research has largely focused on the sharing of articles from unreliable news sources, with more recent attention to coordinated misinformation campaigns by public figures and organizations (elites). This study addresses a gap in that literature: exposure to misinformation, a concept distinct from belief in or sharing of misinformation. Exposure, determined by whom a user follows, profoundly shapes that user's information environment, and repeated exposure can even influence perceived truthfulness. While related, exposure and sharing are distinct; most users share only a tiny fraction of what they are exposed to. This research introduces a method for assessing exposure to misinformation specifically from elites, supplementing prior work that examined only what users believe and share.
Literature Review
Existing research on social media misinformation predominantly assesses belief in and sharing of content from unreliable domains, often using blacklists of misinformation sites or continuous domain-quality ratings from fact-checkers or crowds. While insightful, these approaches have limitations: they primarily measure post-exposure behaviors (sharing, clicking); they ignore how the accounts a user follows shape their information environment; they suffer from the rapid turnover of fake-news sites; and they miss the substantial portion of potentially misleading content that contains no links. This paper addresses these limitations by treating exposure as a crucial antecedent of belief in and sharing of misinformation.
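To make the domain-rating approach concrete, here is a minimal sketch of how a user's shared-news quality could be scored under it. The `DOMAIN_QUALITY` dict, the input format, and the helper name are assumptions for illustration, not the ratings or code used in any of the cited studies.

```python
from urllib.parse import urlparse

# Hypothetical domain-level quality ratings (0 = untrustworthy,
# 1 = trustworthy), standing in for fact-checker or crowd ratings.
DOMAIN_QUALITY = {
    "example-broadsheet.com": 0.9,
    "example-fakenews.com": 0.1,
}

def shared_news_quality(shared_urls):
    """Average the quality ratings of the rated domains in a user's
    shared links. Unrated domains are skipped, which illustrates a
    limitation noted above: linkless or unrated content is invisible
    to this kind of measure."""
    scores = []
    for url in shared_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        if domain in DOMAIN_QUALITY:
            scores.append(DOMAIN_QUALITY[domain])
    return sum(scores) / len(scores) if scores else None
```

Under this sketch, a user who mostly shares links to low-rated domains scores near 0, but a misleading tweet without a link never enters the average.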
Methodology
This study uses a dataset of PolitiFact fact-checks to generate falsity scores for 816 elites (public figures and organizations). Each elite's falsity score (0-1) is the proportion of their fact-checked statements rated as false. A Twitter user's misinformation-exposure score is then the average falsity score of the elites they follow, weighted by each elite's tweeting frequency to account for exposure intensity (a minimal sketch of this computation appears below).

The authors collected data for 5,000 randomly sampled Twitter users who followed at least three of the 816 elites, excluding protected accounts and accounts without sufficient tweet history. Each user's political ideology was estimated from the ideological leanings of the accounts they follow, using the methodology of Barberá et al. (2015). The quality of news shared by users was assessed using domain-level trustworthiness ratings from two sources: professional fact-checkers and a politically balanced crowd of laypeople.

A co-share network was constructed to identify the domains preferentially shared by users with high misinformation-exposure scores, and community-detection algorithms were used to find clusters of domains. To investigate the relationship between ideological extremity and misinformation exposure, the authors fit a regression model interacting estimated conservative ideology with ideological extremity (the absolute value of estimated ideology). Finally, co-follower and co-retweet networks were constructed for comparative analysis.
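As referenced above, here is a minimal sketch of the two scoring steps. The input structures (`fact_checks` as (elite, rated_false) pairs; per-elite tweet counts as exposure weights) are assumptions chosen to illustrate the described computation, not the authors' released code.

```python
from collections import defaultdict

def falsity_scores(fact_checks):
    """Each elite's falsity score: the proportion of their fact-checked
    statements rated false (0 = never rated false, 1 = always).

    fact_checks: iterable of (elite_id, rated_false) pairs (assumed format).
    """
    totals = defaultdict(int)
    false_counts = defaultdict(int)
    for elite, rated_false in fact_checks:
        totals[elite] += 1
        false_counts[elite] += int(rated_false)
    return {elite: false_counts[elite] / totals[elite] for elite in totals}

def exposure_score(followed, falsity, tweet_counts):
    """A user's misinformation-exposure score: the falsity scores of the
    rated elites they follow, weighted by each elite's tweeting frequency.

    followed: ids of the rated elites the user follows.
    falsity: elite_id -> falsity score.
    tweet_counts: elite_id -> number of tweets (exposure-intensity weight).
    """
    total_weight = sum(tweet_counts[e] for e in followed)
    if total_weight == 0:
        return None  # no tweet data for this user's followed elites
    weighted_sum = sum(falsity[e] * tweet_counts[e] for e in followed)
    return weighted_sum / total_weight
```

Under this sketch, a user following only elites whose checked statements were always rated true scores 0, while heavy-tweeting, high-falsity elites pull the score toward 1.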
Key Findings
The study's key findings are as follows:

1. **Negative Correlation with News Quality:** Users with higher misinformation-exposure scores (i.e., those following more elites with high falsity scores) shared news from lower-quality outlets, as rated by both professional fact-checkers (b = -0.728, p < 0.001) and politically balanced laypeople (b = -0.540, p < 0.001). This relationship remained robust when controlling for estimated ideology.

2. **Positive Correlation with Conservative Ideology:** Misinformation-exposure scores correlated positively with estimated conservative ideology (b = 0.747, p < 0.001). The effect of ideology on shared news quality was significantly reduced when controlling for misinformation-exposure score, indicating that misinformation exposure accounts for part of the association between ideology and shared news quality and predicts news quality independently of ideology.

3. **Association with Toxic Language and Moral Outrage:** Higher misinformation-exposure scores were significantly associated with more toxic language (b = 0.129, p < 0.001) and expressions of moral outrage (b = 0.107, p < 0.001), even after controlling for estimated ideology.

4. **Co-Share Network Analysis:** Community detection on the co-share network identified three clusters of domains: liberal, center-left, and conservative. The conservative cluster showed significantly higher average misinformation-exposure scores than the other two clusters (p < 0.001), even when controlling for ideology, suggesting the existence of "falsehood echo chambers."

5. **Ideological Asymmetry:** More ideologically extreme users were exposed to more misinformation, but this effect was significantly larger for estimated conservatives than for liberals (b = 0.756, p < 0.001 when estimating ideology from accounts followed; b = 0.415, p < 0.001 when using media sharing). The asymmetry also held when language toxicity and moral outrage were the outcomes (a minimal sketch of the interaction model follows this list).

These findings are consistent across different estimates of political ideology and robustness checks.
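As referenced in finding 5, this is a minimal sketch of the interaction model on synthetic data. The column names, the synthetic data, and the OLS call are assumptions meant only to show the model form (conservative ideology interacted with its absolute value), not to reproduce the paper's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the user-level data: estimated ideology
# (negative = liberal, positive = conservative) and exposure score.
rng = np.random.default_rng(0)
ideology = rng.normal(0.0, 1.0, 5000)
exposure = (0.30 + 0.10 * ideology
            + 0.05 * np.abs(ideology) * (ideology > 0)
            + rng.normal(0.0, 0.10, 5000))
users = pd.DataFrame({"ideology": ideology, "exposure": exposure})
users["extremity"] = users["ideology"].abs()

# Interact conservatism with extremity; a positive interaction term means
# ideological extremity predicts misinformation exposure more strongly
# on the conservative side than on the liberal side.
model = smf.ols("exposure ~ ideology * extremity", data=users).fit()
print(model.params)
```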
Discussion
The study's findings demonstrate the crucial role of exposure to elite misinformation in shaping online information environments. The strong negative correlation between misinformation exposure and the quality of news shared, even after accounting for ideology, highlights the direct influence of elite misinformation on individual information consumption, supporting the notion that leaders' rhetoric drives followers' beliefs and policy positions. Although exposure might seem passive, the study shows it is actively shaped by individual choices about whom to follow. The misinformation-exposure score introduced here offers a way to study what drives those following choices, and potentially to distinguish the roles of algorithms and individual preferences. The co-share network analysis reveals "falsehood echo chambers": users cluster by misinformation exposure as well as by ideology. The observed asymmetry between conservatives and liberals reinforces existing literature linking extreme right-wing ideology to misinformation. The tool developed in this study contributes a quantitative measure of users' exposure to misinformation from elites, facilitating future investigations.
Conclusion
This study offers a novel and readily applicable method for quantifying exposure to misinformation from political elites on Twitter. The findings underscore the impact of elite communication on users' information consumption and sharing patterns, highlighting the importance of user choices in shaping online information environments. The open-source tools provided empower further research into the predictors of misinformation exposure, the dynamics of falsehood echo chambers, and the interplay between algorithmic recommendations and user preferences. Future research could explore the causal relationship between exposure and belief in misinformation, investigate the role of other platforms, and broaden the scope beyond political elites.
Limitations
The study's reliance on PolitiFact fact-checks introduces potential bias, which could be addressed by incorporating ratings from alternative sources. The methodology requires users to follow a sufficient number of rated accounts, which may create selection bias; future work should examine the characteristics of users excluded for following too few rated elites. The sample is limited to Twitter users, who are not fully representative of the broader population. Finally, the measure of shared news quality relied on the subset of tweets containing links to rated domains, so future research should incorporate additional metrics of misinformation sharing.