Psychology
Leveraging artificial intelligence to identify the psychological factors associated with conspiracy theory beliefs online
J. R. Kunst, A. B. Gundersen, et al.
This study examines the psychological factors behind the proliferation of conspiracy theories on social media. Analyzing survey data from over 2,500 Twitter (X) users together with 7.7 million of their interactions during the COVID-19 pandemic, Jonas R. Kunst and colleagues identify key risk factors, such as older age and political extremity, that shed light on how misinformation spreads online.
Introduction
The study investigates which psychological and demographic factors are associated with real-world online support for COVID-19 conspiracy theories. Motivated by the societal harms of conspiracy theories and a literature dominated by self-reports that may not map onto actual behavior, the authors integrate survey measures with large-scale behavioral data from Twitter (X). They aim to determine whether individual differences (e.g., narcissism, denialism, need for chaos), political alignment and extremity, and constructs derived from the Theory of Reasoned Action (trust in social media information, perceived ability to recognize misinformation, importance of verifying information), as well as general conspiracy mentality and misinformation susceptibility, predict behavioral endorsement and spread of six COVID-19 conspiracy theories. The work addresses a key gap by linking validated psychological measures to millions of observed social media engagements to better identify risk and protective factors and inform interventions.
Literature Review
Prior research links various individual differences to conspiracy beliefs. Narcissism shows consistent positive associations, potentially reflecting needs for attention and uniqueness. Denialism—the rejection of expert narratives—has repeatedly been tied to COVID-19 conspiracy beliefs, and a need for chaos is associated with spreading hostile political rumors. Political ideology plays a central role: people at both ideological extremes tend to endorse conspiracy theories, with right-leaning individuals particularly drawn to theories targeting marginalized groups, though neither side monopolizes conspiracy thinking. Applications of the Theory of Reasoned Action to misinformation suggest that trust in social media content and confidence in identifying false information correlate positively with self-reported misinformation sharing, whereas the importance placed on verification correlates negatively; overconfidence in judgment has been linked to susceptibility, though inoculation interventions can increase both skill and confidence. Psychometric tools include the Conspiracy Mentality Questionnaire (measuring a general conspiracist predisposition) and the Misinformation Susceptibility Test (MIST), which captures both acceptance of false and rejection of true politically balanced items; higher misinformation susceptibility has been related to COVID-19 conspiracy beliefs. Conclusions remain mixed on the relative importance of personality versus distrust and perceptions of government. A key limitation is reliance on self-reports without behavioral validation, motivating studies that combine psychological assessment with large-scale social media behavior.
Methodology
Design and ethics: Approved by the Bioethics Committee of Jagiellonian University (No. 1072.6120.12.2022). Informed consent was obtained.
Participants: N = 2506 U.S.-based Twitter (X) users recruited via CloudResearch (survey Aug 2022–Feb 2023). Sample broadly comparable to U.S. Twitter users in age, employment, civil status, education, and race; women and Democrats slightly overrepresented. Inclusion criteria for Twitter accounts: public, ≥10 engagements in the past year, account age ≥2 years.
Data sources and period: Twitter academic API used to retrieve historical activity from Dec 1, 2019 to Dec 31, 2021. Total engagements analyzed: 7,713,506 (likes 3,607,354; posts 1,012,565; replies 1,084,863; reposts 2,008,724).
Psychometric instruments (Likert unless noted): Political orientation (1 = extremely left to 11 = extremely right); political affiliation (Democrat, Republican, Independent, Something else); Need for Chaos (8 items, α=0.89); Narcissism (4 items, α=0.88); Denialism (4 items, α=0.81); Conspiracy mentality (5 items, α=0.84); Misinformation susceptibility (MIST-8), assessed via binary real/fake judgments: belief in false information (α=0.61) and disbelief in true information (α=0.47); Theory of Reasoned Action variables: belief in social media information reliability (3 items, α=0.82), importance of verifying information (5 items, α=0.86), perceived self-efficacy in recognizing misinformation (5 items, α=0.78).
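Each α above is Cronbach's alpha, the internal-consistency coefficient for a multi-item scale. As a quick reference, here is a minimal sketch of the standard formula (illustrative, not the authors' code; the example data are hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 4-item scale answered by five respondents (1-5 Likert).
responses = np.array([[5, 4, 5, 4],
                      [2, 2, 3, 2],
                      [4, 4, 4, 5],
                      [1, 2, 1, 1],
                      [3, 3, 4, 3]])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```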
Outcome definition: Behavioral support for six overarching COVID-19 conspiracy theories, measured at the engagement level (likes, posts, replies, reposts) using NLP classification. Theories included: (1) deliberate strategy to create economic instability/benefit large corporations; (2) public intentionally misled about virus nature/prevention; (3) virus human-made/bioweapon; (4) politicians/government intentionally spreading false information/other motives; (5) China intentionally created/spread the virus to harm other countries; (6) vaccines unsafe or for control/population reduction. Supporting engagements amplify spread via platform algorithms.
NLP pipeline: Step 1—Similarity filtering using sentence-transformers/all-mpnet-base-v2 (768-dimensional embeddings), with similarity computed from squared Euclidean distance and candidates retrieved via a FAISS index; the similarity threshold was set at 0.25 to maximize inclusivity and minimize Type II errors (false negatives). Human validation: three raters annotated 1,019 engagements sampled across similarity bins; inter-rater reliability (Cronbach's alpha) was high across theories; mean human topicality ratings correlated moderately to strongly with model similarity; the 0.25 threshold captured all human-annotated ground-truth positives. Engagements above threshold, passed on to support classification: Theory 1: 462,452; Theory 2: 326,380; Theory 3: 369,568; Theory 4: 565,274; Theory 5: 291,514; Theory 6: 381,835.
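A minimal sketch of this filtering step, using the FAISS flat L2 index and the all-mpnet-base-v2 encoder named above. The mapping from squared Euclidean distance to a similarity score is an assumption (cosine similarity recovered from unit-normalized vectors), and the theory phrasing and variable names are illustrative:

```python
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical theory statement and engagement texts.
theory = "COVID-19 vaccines are unsafe and serve population control."
engagements = ["tweet text one ...", "tweet text two ..."]

# L2-normalized embeddings so squared Euclidean distance maps to cosine:
# ||a - b||^2 = 2 - 2*cos(a, b) for unit vectors.
emb = model.encode(engagements, normalize_embeddings=True)
query = model.encode([theory], normalize_embeddings=True)

index = faiss.IndexFlatL2(emb.shape[1])  # 768-d flat index, squared L2 distances
index.add(emb)
d2, idx = index.search(query, len(engagements))
similarity = 1.0 - d2[0] / 2.0  # assumed transform to a cosine-style score

THRESHOLD = 0.25  # inclusive threshold from the paper
candidates = [engagements[i] for i, s in zip(idx[0], similarity) if s >= THRESHOLD]
```

Engagements passing the threshold move on to Step 2; everything below it is labeled non-supporting without further processing, which keeps the expensive classification stage tractable.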
Step 2—Support classification using GPT-3.5 (gpt-3.5-turbo-0125, temperature=0) with prompts tailored to each theory, labeling support as YES/NO. Validation ground truth: 1,136 English engagements (targeting 200 per theory, oversampling higher-similarity items) annotated by three human raters; inter-rater agreement was substantial to excellent. Agreement between GPT labels and the human majority vote was likewise substantial to excellent; per-theory performance: precision ~0.59–0.83, recall ~0.61–0.85, F1 ~0.68–0.76 (higher for most theories; see Table 8 of the original paper). All engagements with similarity <0.25 were set to NO, consistent with the validation results.
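A hedged sketch of the Step 2 classifier using the OpenAI chat completions API; the prompt wording below is hypothetical, standing in for the authors' tailored per-theory prompts:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_support(engagement_text: str, theory: str) -> bool:
    """Label whether an engagement supports a given conspiracy theory (YES/NO)."""
    prompt = (
        f"Conspiracy theory: {theory}\n"
        f"Tweet: {engagement_text}\n"
        "Does this tweet express support for the theory? Answer YES or NO."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        temperature=0,  # deterministic labeling, as in the paper
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```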
Statistical analysis: Multilevel Bernoulli generalized linear mixed models (glmmTMB in R), estimated separately per theory, with engagements (level 1) nested in participants (level 2). Level-2 predictors: demographics, political orientation (linear and quadratic), party affiliation, personality and TRA variables, and the MIST subscales; follower and following counts served as controls. Continuous predictors were standardized; one-tailed tests were used for directional hypotheses, with Holm correction for multiple comparisons. ICCs and R² are reported, and model diagnostics used DHARMa. For models 3 and 5, underdispersion indicated conservative tests.
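In generic notation (ours, not the paper's), each engagement i by participant j is a Bernoulli outcome with a participant-level random intercept:

```latex
\Pr(y_{ij} = 1) = \operatorname{logit}^{-1}\!\Big(\beta_0 + u_j + \textstyle\sum_k \beta_k x_{kj}\Big),
\qquad u_j \sim \mathcal{N}(0, \sigma_u^2)
```

where y_ij indicates a theory-supporting engagement, the x_kj are the standardized level-2 predictors (including both political orientation and its square for the extremity test), and u_j absorbs the within-participant clustering of the millions of engagements.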
Policy context: Most engagements occurred while Twitter actively combated/banned COVID-19 misinformation, likely reducing detectable positives.
Key Findings
Support frequency across theories was low relative to the 7.7 million total engagements but differed significantly by content: governments/politicians spreading misinformation (n=20,705), public intentionally misled (n=13,354), deliberate strategy to create economic instability/benefit corporations (n=2,054), vaccines unsafe/population control (n=1,167), human-made/bioweapon (n=732), China intentionally spread virus (n=153); χ²(5)=58,280, P<0.001, Cramér's V=0.035.
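One table construction consistent with the reported statistics is to classify every one of the 7,713,506 engagements against each of the six theories, yielding a 6 × 2 table of supportive versus non-supportive engagements. The sketch below reproduces χ²(5) ≈ 58,280 and V ≈ 0.035 under that assumption (an inference from the reported numbers, not the authors' code):

```python
import numpy as np
from scipy.stats import chi2_contingency

supportive = np.array([2054, 13354, 732, 20705, 153, 1167])  # theories 1-6
totals = np.full(6, 7_713_506)  # assumed: all engagements evaluated per theory
table = np.column_stack([supportive, totals - supportive])

chi2, p, dof, _ = chi2_contingency(table)  # p underflows to 0 at this scale (P < 0.001)
v = np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))  # Cramér's V
print(f"chi2({dof}) = {chi2:,.0f}, V = {v:.3f}")
# -> chi2(5) ≈ 58,280, V ≈ 0.035
```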
Associations from the mixed models (odds ratios reflect standardized predictors; a brief worked interpretation follows the list):
- Age: Older participants’ engagements more likely to support multiple theories—economic instability/corporate benefit OR=1.86 (+86%); public intentionally misled OR=2.03 (+103%); human-made/bioweapon OR=2.14 (+114%); governments/politicians spread false info OR=2.36 (+136%); vaccines unsafe/population control OR=2.02 (+102%).
- Political orientation: Quadratic (extremity) effects for theories 1, 2, and 4—greater support among those at both ends of the spectrum; particularly far left for economic instability/corporate benefit, and particularly far right for public intentionally misled; both ends elevated for governments/politicians spreading false info. Linear right-leaning effect for human-made/bioweapon (OR=1.62, 95% CI approx. 1.18–2.23). Party affiliation generally not significant.
- Misinformation susceptibility (MIST): Belief in false information positively predicted support for three theories—public intentionally misled OR=1.32; governments/politicians spreading false info OR=1.25; vaccines unsafe/population control OR=1.48. Disbelief in true information showed no significant associations.
- Denialism: Positively associated with support for the theory that the public is intentionally misled (OR=1.26).
- Perceived ability to recognize misinformation: Positively associated with support for governments/politicians spreading false information (OR=1.17). No robust interactions with actual misinformation identification skill and no curvilinear (overconfidence) effect detected.
- Conspiracy mentality, narcissism, and need for chaos did not emerge as significant predictors across models.
- Theory 5 (China intentionally spread virus) had very few positives; no predictors reached significance.
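Because predictors were standardized, each odds ratio above translates directly into a percent change in the odds of a supportive engagement per one standard deviation of the predictor:

```latex
\mathrm{OR} = e^{\beta}, \qquad \%\Delta\,\text{odds} = (\mathrm{OR} - 1) \times 100
```

For example, the age effect of OR=2.03 for "public intentionally misled" means the odds of a supportive engagement roughly double (+103%) for each standard deviation of age.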
Explained variance was modest (marginal R² roughly 7%–22%); ICCs suggested about half the variance attributable to participant level for most theories; platform-level and structural factors likely contribute substantially.
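For context, the participant-level ICC in a logistic mixed model is conventionally computed with the latent-variable formulation (assuming the paper follows this standard approach):

```latex
\mathrm{ICC} = \frac{\sigma_u^2}{\sigma_u^2 + \pi^2/3}
```

where π²/3 ≈ 3.29 is the fixed level-1 variance of the logistic distribution; an ICC near 0.5 therefore implies a participant-intercept variance of roughly the same magnitude.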
Discussion
The findings address the central question of which psychological and demographic factors translate into behavioral support for conspiracy theories online. Integrating AI-based text classification with survey data, the study validates some self-report correlates at scale: older age, political extremity (both far left and far right), and a higher tendency to believe false information are robust risk factors. Political ideology effects were nuanced: economic exploitation narratives resonated more on the far left; deception about risks/prevention resonated more on the far right; human-made/bioweapon support increased linearly with right-leaning orientation; and claims that government spreads false information appealed to both extremes. Denialism predicted endorsement of claims that the public is being misled, supporting ecological validity of prior self-report findings. Notably, greater perceived ability to recognize misinformation was associated with endorsing narratives about government spreading false information, suggesting a potential divergence between broad self-efficacy beliefs and task-specific confidence fostered by inoculation interventions.
Contrary to expectations, conspiracy mentality, narcissism, and need for chaos did not reliably predict behavioral support in this dataset, underscoring the need to test proposed predictors against real-world behaviors. The modest explained variance and ICCs indicate that user traits only partially account for behavior, with platform architecture, content sources, and information ecosystems likely playing large roles. The study’s period overlapped with platform policies limiting COVID-19 misinformation, likely reducing detectable support and, particularly for the China-origin theory, contributing to null results. Overall, the work demonstrates the feasibility and value of combining psychological assessment with large-scale behavioral analysis to refine theories and target interventions for misinformation and conspiracy belief spread.
Conclusion
This study demonstrates that linking validated psychological measures with millions of social media engagements via AI classification can identify risk factors for the online endorsement and spread of COVID-19 conspiracy theories. Consistent predictors included older age, ideological extremity on both ends of the political spectrum, and a greater tendency to believe false information, with additional theory-specific effects for denialism, perceived misinformation-detection ability, and right-leaning orientation. Not all widely cited predictors (e.g., conspiracy mentality, narcissism, need for chaos) generalized to behavioral outcomes. These findings urge moving beyond self-report-only approaches and suggest tailoring interventions to older users and ideologically extreme groups while addressing susceptibility to false information.
Future research should: (1) examine causal pathways and feedback loops between online engagement and belief/polarization; (2) incorporate additional predictors (e.g., populism, Manicheanism, alignment with specific political figures) and platform/architecture-level variables; (3) test how inoculation-driven confidence relates to general self-efficacy without fostering overconfidence; and (4) evaluate downstream consequences of online conspiracy support across domains.
Limitations
- Causality cannot be inferred; bidirectional influences between beliefs, political identity, and online behavior are plausible.
- Platform moderation policies during much of the study period likely reduced observable conspiracy-supporting content, contributing to low base rates and especially few positives for the China-origin theory, limiting power for that model.
- Modest explained variance indicates unmeasured factors (e.g., platform architecture, network exposure, source credibility) likely play substantial roles.
- Some psychometric subscales had limited reliability (e.g., MIST disbelief in true information α=0.47), potentially attenuating associations.
- Underdispersion in some models suggests conservative inference; overall reliance on one-tailed, Holm-corrected tests may still miss small effects.
- Potential selection bias (participants consenting to data access), though average conspiracy mentality scores were typical; any such bias would likely also affect other measures.
- U.S.-based sample and COVID-19 timeframe may limit generalizability to other cultures, platforms, or topics.