
Psychology
Democrats are better than Republicans at discerning true and false news but do not have better metacognitive awareness
M. Dobbs, J. DeGutis, et al.
This study examines how age and political partisanship relate to the ability to distinguish true from false news headlines, and how much insight people have into that ability. Older adults and Democrats discerned true from false headlines better than younger adults and Republicans, yet metacognitive awareness was broadly similar across groups. Led by Mitch Dobbs and Joseph DeGutis, the research uses signal detection theory to separate discernment, response bias, and metacognitive efficiency.
~3 min • Beginner • English
Introduction
The study addresses how well people discern true from false information and how aware they are of their own discernment ability (metacognition). In the current information ecosystem, both accurate discernment and awareness of one’s accuracy are critical, as poor discernment and low metacognitive insight may contribute to false beliefs and the spread of misinformation. The research examines whether discernment ability, metacognitive efficiency (metacognitive ability controlling for task performance), and response bias differ across demographics—particularly political partisanship, age, education, and gender. The authors preregistered three hypotheses: (i) stronger political partisans would show lower metacognitive ability than weaker partisans; (ii) younger adults would show higher metacognitive ability than older adults; and (iii) higher education would predict higher discernment ability. They also explored whether performance varies with the political favorability of headlines and whether poorer discerners have poorer metacognitive insight (a Dunning–Kruger-type effect).
Literature Review
Prior work on metacognition shows people generally have moderate insight into their abilities, but evidence regarding insight into misinformation detection is mixed. Lyons et al. found widespread overestimation of the ability to detect made-up news, with very low correlations between perceived and actual ability, especially among low performers; overestimation was linked to greater engagement with low-quality content. Salovich and Rapp similarly found overestimation in detecting inaccuracies. Conversely, Fischer et al. reported effective discernment of COVID-19 information and good metacognitive insight (mean m-ratio near 1), potentially due to trial-by-trial confidence assessment. Arin et al. suggested people have reasonable assessments of whether they share false news, though their focus was sharing rather than discernment per se.
Demographically, prior studies often find Democrats outperform Republicans in discernment regardless of content, with strong Republicans sometimes performing particularly poorly; both ideological extremes may overestimate the precision of their political knowledge, and radical beliefs have been linked to lower metacognitive ability on perceptual tasks. Older adults sometimes outperform younger adults in news discernment, but metacognitive abilities generally decline with age in other domains. Education often predicts better discernment but can also be associated with overconfidence. Gender differences are limited; some work suggests men may be more accurate at detecting fake news, while metacognitive ability differences by gender are minimal.
Methodologically, earlier misinformation metacognition studies often used few items and global self-assessments relative to others, which can be unreliable. The meta-d′ signal detection theory (SDT) framework offers advantages by separating discernment, metacognitive ability, and response bias, but prior applications had low trial counts; larger trial numbers can improve precision and reliability.
Methodology
Design: Participants judged the truth of news headlines and rated confidence on each trial, enabling estimation of discernment ability (d′), response bias (c), and metacognitive efficiency (m-ratio = meta-d′/d′) within a signal detection theory (SDT) framework. Pre-registration was hosted on OSF (https://osf.io/ay9fc/). Ethical approval: Northeastern University IRB (#19-04-09).
Participants: 533 were recruited via Prolific with quotas for gender (men, women) and partisanship (Democrat, Republican) within four age bins (18–32, 33–47, 48–62, 63+). Exclusions: lack of effort (N=5), incomplete responses (N=15), extreme outliers (N=9) per an outlier labeling rule, and negative m-ratio values (N=4), yielding N=500 (247 men, 252 women, 1 undisclosed; age 18–84, M=47.2, SD=16.5). For equated-stimuli metacognition analyses, further exclusions were made for negative or extreme m-ratio (n=9; N=491); for political-favorability metacognition analyses, additional exclusions were made (n=40; N=460). Compensation: $13.70/hour.
Stimuli: Headlines reflected topics from within one year of data collection (Aug 17, 2022). False items were sourced or adapted from fact-checkers (Snopes, PolitiFact); true items were adapted from mainstream outlets (CNN, NPR, Fox News). The final main set comprised 140 items (70 true, 70 false). Political favorability in the main set was imbalanced: more false items favorable to Republicans (n=54) than to Democrats (n=16), and more true items favorable to Democrats (n=39) than to Republicans (n=31). An equated subset (64 items) balanced favorability: 16 true and 16 false pro-Democrat items (MTrue=2.62, SDTrue=0.13; MFalse=2.59, SDFalse=0.33) and 16 true and 16 false pro-Republican items (MTrue=3.38, SDTrue=0.36; MFalse=3.39, SDFalse=0.48). Five additional items assessed variance explained by the Misinformation Susceptibility Test (MIST).
Procedure: For each headline, participants indicated truth (Yes/No) and rated confidence on a 4-point scale (1 Not Confident, 2 Barely Confident, 3 Somewhat Confident, 4 Very Confident). Demographics (age, political affiliation/strength, education, gender) were collected afterward.
Analysis: SDT was used to compute type-1 hit and false alarm rates to estimate d′ (discernment) and c (response bias). Confidence ratings were binarized (3–4 high, 1–2 low) to compute type-2 hit and false alarm rates and estimate meta-d′, yielding m-ratio = meta-d′/d′ as metacognitive efficiency (values near 1 indicate optimal efficiency). Meta-d′ estimation used a non-hierarchical Bayesian approach implemented in JAGS with three MCMC chains (10,000 samples each; first 2,000 warmup), with priors per Fleming: d′ ~ Normal(0, 0.5), c ~ Normal(0, 2), meta-d′ ~ Normal(d′, 0.5). Convergence was assessed via the Gelman–Rubin statistic (all R̂ < 1.01). For robustness in partisanship analyses, meta-d′ was also estimated hierarchically. Frequentist analyses and modeling were conducted in R; Bayes factors were computed in JASP.
Power: The planned N=500 with 140 trials was chosen to robustly estimate metacognition; a sensitivity analysis indicated 95% power to detect effects of f≈0.19 for the key ANOVAs. Planned analyses tested the preregistered hypotheses; exploratory analyses examined political favorability effects and the relationship between accuracy quartiles and metacognitive efficiency.
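To make the SDT quantities concrete, the sketch below computes the type-1 measures (d′ and c) and the binarized type-2 hit and false alarm rates from trial-level data. It is an illustrative Python approximation under assumed inputs (arrays of truth labels, yes/no responses, and 4-point confidence ratings), not the authors' R/JAGS pipeline; in particular it does not fit meta-d′, which the study estimated with Bayesian MCMC.

```python
import numpy as np
from scipy.stats import norm

def type1_sdt(is_true, said_true):
    """Type-1 SDT with 'true headline' as signal: hit = calling a true headline true,
    false alarm = calling a false headline true. A log-linear correction avoids infinite
    z-scores at rates of 0 or 1 (an assumption; the paper's exact correction may differ).
    d' = z(HR) - z(FAR); c = -0.5 * (z(HR) + z(FAR)). Under this coding, c > 0 reflects
    a tendency to respond 'false' and c < 0 a tendency to respond 'true'."""
    is_true = np.asarray(is_true, dtype=bool)
    said_true = np.asarray(said_true, dtype=bool)
    n_signal, n_noise = is_true.sum(), (~is_true).sum()
    hit_rate = (np.sum(is_true & said_true) + 0.5) / (n_signal + 1)
    fa_rate = (np.sum(~is_true & said_true) + 0.5) / (n_noise + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)      # discernment
    c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))   # response bias
    return d_prime, c

def type2_rates(is_true, said_true, confidence):
    """Type-2 rates after binarizing confidence (3-4 = high, 1-2 = low):
    type-2 hit = high confidence on a correct judgment,
    type-2 false alarm = high confidence on an incorrect judgment.
    These feed the meta-d' model; m-ratio = meta-d'/d' (estimated in JAGS in the study,
    not reimplemented here)."""
    is_true = np.asarray(is_true, dtype=bool)
    said_true = np.asarray(said_true, dtype=bool)
    correct = is_true == said_true
    high_conf = np.asarray(confidence) >= 3
    return high_conf[correct].mean(), high_conf[~correct].mean()

# Hypothetical usage with simulated trial-level data (140 trials, roughly 80% accuracy).
rng = np.random.default_rng(0)
is_true = rng.integers(0, 2, 140).astype(bool)
said_true = is_true ^ (rng.random(140) < 0.2)
confidence = rng.integers(1, 5, 140)
print(type1_sdt(is_true, said_true))
print(type2_rates(is_true, said_true, confidence))
```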
Key Findings
Overall performance: Accuracy was 78.9% for true items and 81.3% for false items; mean d′=1.82 (SD=0.62). Metacognitive efficiency was high: mean m-ratio=0.86 (SD=0.29).
Discernment by partisanship: A 2×2 ANOVA (party × strength) on d′ showed a strong main effect of partisanship, F(1,494)=172.87, p<0.001, ηp²=0.26, with Democrats more accurate than Republicans. There was an interaction with partisanship strength, F(1,494)=31.99, p<0.001, ηp²=0.06: weak Republicans outperformed strong Republicans, t(235)=2.75, p=0.007, d=0.35, while strong Democrats outperformed weak Democrats, t(129)=4.71, p<0.001, d=0.69. This pattern held using politically equated items.
Discernment by age: A one-way ANOVA across the ordinal age groups showed a main effect, F(3,496)=2.72, p=0.044, ηp²=0.02; older adults (63+) outperformed younger adults (18–32), t(247)=2.35, p=0.019, d=0.29. The effect was marginal on equated items (F(3,487)=2.60, p=0.052; BF1=3.77). Age correlated positively but modestly with d′ (r≈0.11, p=0.016).
Education and gender: d′ correlated with education (r≈0.25, p<0.001). Men had higher d′ than women, t(496)=4.20, p<0.001, d=0.38. A multiple regression predicting d′ showed unique contributions of partisanship (β=0.52, SE=0.05, p<0.001), education (β=0.19, SE=0.02, p<0.001), gender (β=0.18, SE=0.05, p<0.001), and age (β=0.07, SE=0.001, p=0.039), together explaining 36% of the variance; with equated items, age did not reach significance (p=0.057; BF1=2.61).
Metacognitive efficiency (m-ratio): In the full item set, there were no significant differences across partisanship (BF01=5.84 for the main effect; BF01=4.57 for the interaction), age (BF01=8.07), gender (BF01=6.05), or education (BF01=17.84; all ps>0.218). In the equated set, there was a main effect of partisanship, F(1,485)=4.26, p=0.039, ηp²=0.01 (Republicans showed slightly higher m-ratio than Democrats), and a main effect of partisanship strength, F(1,485)=5.31, p=0.022, ηp²=0.01, with no interaction, F(1,485)=0.09, p=0.762, BF01=6.28. Group-level (hierarchical) meta-d′ estimation found no statistically significant differences between parties.
Response bias (c), equated set: There was a main effect of partisanship, F(1,485)=5.17, p=0.023, ηp²=0.01; Democrats showed a slightly greater false bias than Republicans. There were no significant differences by age (F(3,487)=0.61, p=0.610; BF01=55.52; continuous r≈0.03, p=0.459; BF01=13.93), education (p=0.266; BF01=6.12), or gender (t(488)=0.36, p=0.720; BF01=9.36).
Political favorability (equated set): Discernment (d′) showed a main effect of partisanship, F(1,458)=68.99, p<0.001, ηp²=0.13 (Democrats > Republicans), and a partisanship × item-type interaction, F(1,458)=5.12, p=0.024, with Democrats especially outperforming Republicans on pro-Republican items. Metacognitive efficiency showed a main effect of partisanship, F(1,458)=5.36, p=0.021 (Republicans higher), but no significant interaction, F(1,458)=3.28, p=0.071; BF01=1.95; hierarchical estimation again found no robust differences. Response bias showed a strong main effect of item type, F(1,458)=359.04, p<0.001, ηp²=0.44; participants exhibited more false bias on pro-Democrat items than on pro-Republican items (t(459)=17.7, p<0.001, d=0.77). The partisanship × item-type interaction was significant, F(1,458)=52.68, p<0.001, ηp²=0.10: for pro-Democrat items, both parties showed a false bias with no significant difference (t(451)=1.76, p=0.079; BF01=2.16); for pro-Republican items, Republicans showed a true bias while Democrats showed little-to-no bias (t(424)=4.57, p<0.001, d=0.43).
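As an illustration of the structure of the multiple regression reported above (not the authors' R code), the sketch below fits an ordinary least squares model predicting d′ from partisanship, education, gender, and age; the data frame and column names are hypothetical assumptions, and the paper reports standardized coefficients whereas this sketch fits an unstandardized model.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_discernment_model(df: pd.DataFrame):
    """Predict discernment (d') from partisanship, education, gender, and age.
    Assumes (hypothetically) columns: d_prime (float), party ('Democrat'/'Republican'),
    education (numeric/ordinal), gender ('man'/'woman'), age (years)."""
    model = smf.ols("d_prime ~ C(party) + education + C(gender) + age", data=df)
    result = model.fit()
    return result  # inspect result.params, result.pvalues, result.rsquared (~0.36 reported)
```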
Accuracy quartiles vs metacognition: Quartiles by d′ showed a main effect on m-ratio, F(1,471)=24.53, p<0.001, ηp²=0.05; the least accurate quartile had the highest metacognitive efficiency relative to the most accurate quartile, t(203)=4.60, p<0.001, d=0.60. Even–odd split robustness analyses did not replicate this as a stable effect (BF01=7.53 and 4.82 for respective tests).
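The quartile comparison and the even-odd split robustness check can be sketched as follows; the column names and data layout are illustrative assumptions rather than the study's R implementation.

```python
import pandas as pd

def mratio_by_accuracy_quartile(participants: pd.DataFrame) -> pd.Series:
    """One row per participant; assumes (hypothetically) columns 'd_prime' and 'm_ratio'.
    Bins participants into d' quartiles (Q1 = least accurate) and returns the mean
    m-ratio per quartile, mirroring the quartile comparison described above."""
    df = participants.copy()
    df["d_quartile"] = pd.qcut(df["d_prime"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
    return df.groupby("d_quartile", observed=True)["m_ratio"].mean()

def even_odd_split(trials: pd.DataFrame):
    """Trial-level data; assumes a 0-indexed 'trial' column. Returns the even- and
    odd-numbered trial halves used to re-estimate measures in the robustness check."""
    even = trials[trials["trial"] % 2 == 0]
    odd = trials[trials["trial"] % 2 == 1]
    return even, odd
```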
Discussion
The findings show that while Democrats, older adults, men, and more educated individuals generally demonstrate better discernment of true versus false headlines, metacognitive efficiency is high and largely comparable across demographic groups. This indicates that demographic disparities in misinformation endorsement are unlikely to be driven by broad differences in metacognitive insight; rather, they more plausibly reflect differences in discernment ability and, in some cases, response bias. Republicans’ lower discernment did not coincide with overconfidence; indeed, when using politically balanced items, Republicans slightly outperformed Democrats on metacognitive efficiency, and hierarchical analyses indicated minimal robust partisan differences. The political-favorability analysis suggests partisan response tendencies contribute to performance: Republicans exhibited a true bias on pro-Republican items and a false bias on pro-Democrat items, whereas Democrats showed a false bias on pro-Democrat items and little-to-no bias on pro-Republican items. Age-related differences in discernment were small and may be paradigm-specific; notably, older adults’ metacognitive efficiency was not impaired, casting doubt on the notion that older adults’ greater online misinformation sharing arises from a lack of metacognitive insight. The quartile analysis does not support a classic Dunning–Kruger pattern for this task; poorer performers did not show the poorest metacognitive efficiency, especially when considering robustness checks. Overall, applying SDT and meta-d′ methodologies provides clearer separation of discernment, metacognitive efficiency, and response bias, yielding nuanced insights into how and why different groups evaluate misinformation as true or false.
Conclusion
This study demonstrates that people, across demographics, generally possess good metacognitive insight into their ability to discern true from false news headlines. Despite Democrats’ overall superior discernment relative to Republicans, metacognitive efficiency was similar across groups and even slightly higher for Republicans when using politically equated items. Differences in misinformation endorsement appear to be driven more by discernment ability and response bias than by metacognitive deficits. Contributions include large-trial SDT-based measurement of discernment, response bias, and metacognition; balanced stimulus checks for political favorability; and preregistered tests of demographic differences. Future research should: (i) replicate small demographic effects (e.g., age differences) and explore mechanisms (e.g., emotion, information diets); (ii) compare local (trial-level) vs global (task-level) metacognitive judgments; (iii) use stimulus proportions that more closely mirror real-world information environments; (iv) include independents/non-partisans; and (v) test whether metacognitive-awareness interventions can improve sharing decisions or reduce engagement with misinformation.
Limitations
Key limitations include: (1) the main 140-item set was imbalanced in political favorability (more false items favorable to Republicans and more true items favorable to Democrats), which, while ecologically reflective, may influence response bias; analyses were repeated with a politically equated subset to address this. (2) The task presented equal numbers of true and false headlines, unlike real-world environments where true content dominates, potentially limiting ecological validity. (3) Older adults recruited online may not fully represent the general older population, though differences between online and offline samples may be minimal. (4) Political independents and non-partisans were not recruited, limiting generalizability to the broader U.S. population. (5) Some effects (e.g., age differences, quartile effects on metacognition) were small and not consistently robust across sensitivity analyses, warranting replication.