The impact of interventions against science disinformation in high school students


C. Martini, M. Floris, et al.

This classroom-based study tested three interventions—Civic Online Reasoning (COR), Cognitive Biases (CB), and Inoculation (INOC)—to help high school students spot science disinformation. Conducted with 2,288 students using Instagram-style posts, the research found limited improvement and trade-offs: COR promoted lateral reading, while INOC increased skepticism. Research conducted by Carlo Martini, Mara Floris, Piero Ronzani, Luca Ausili, Giulio Pennacchioni, Giorgia Adorno, and Folco Panizza.
Introduction

The study addresses how to improve high school students’ ability to discern scientifically valid information from science disinformation, a pressing issue given teenagers’ extensive social media use and documented difficulties in evaluating online information. Building on evidence-based approaches—Civic Online Reasoning (COR) for source evaluation, Cognitive Biases (CB) education to counter faulty heuristics, and Inoculation (INOC) against manipulative rhetorical tactics—the research examines whether these interventions can be scaled from controlled online experiments to ecological classroom settings. The authors hypothesize: H1, that any of the three interventions (COR, CB, INOC) will improve accuracy compared to a control group; and H1bis, that any gains will persist at one- or four-week follow-up. The work focuses specifically on science information and disinformation (e.g., health, medicine, climate), where pseudoscience often mimics the trappings of science to promote anti-scientific claims, potentially undermining public trust and policy. The purpose is to test scalable educational strategies in realistic conditions (mobile platform, real-world stimuli, classroom delivery) and to explore predictors such as trust in science and conspiracy beliefs.

Literature Review

The paper situates its work within several strands: (1) Civic Online Reasoning (COR) literature showing that lateral reading and click restraint improve evaluation of online sources and increase digital literacy across high school and college settings. (2) Sourcing interventions reviewed by Brante and Strømsø, which show mixed outcomes in primary school but better results in lower secondary levels for identifying claim validity and author credentials. (3) Cognitive biases research highlighting how heuristics like confirmation bias and the illusion of truth contribute to susceptibility to misinformation, with debiasing and explicit instruction sometimes improving critical thinking. (4) Inoculation theory research demonstrating that preemptive exposure to weakened misinformation techniques with refutation can build lasting resistance across domains (political, health, extremist content). The authors note few studies target scientific disinformation specifically among youth and emphasize the challenge of defining and operationalizing scientific validity versus pseudoscience.

Methodology

Design: A classroom-based, between-subjects field experiment with follow-up, pre-registered on OSF (osf.io/qkpb5). Classes were randomly assigned to the COR, CB, or INOC treatments, or to Control. All groups received two lectures; treatment groups watched a pre-recorded intervention video before the first test, while Control took the test first and then viewed one of the videos (to comply with school requests). Follow-up occurred one or four weeks later with a second test using different posts, followed by a second video or a customized lesson and a debrief.

Recruitment and Setting: Emails to schools in Milan and Turin (and some other provinces) recruited 19 institutions and 100 classes (grades 1–5). Most students consented; parental consent was sought. Data collection occurred in the latter half of the 2022/2023 school year.

Interventions:
• COR (≈20:45) demonstrated live lateral reading and click restraint, using a browser to evaluate a real news item (polar bears) and emphasizing fact-checkers’ strategies for sourcing and cross-verification.
• INOC (≈17:53) taught recognition of five misleading techniques (emotive content, conspiracy, impersonation, discrediting, trolling), framed as inoculation against deceptive tactics via pre-exposure and refutation.
• CB (≈18:34) explained key cognitive biases with examples and interactive prompts, aiming to raise awareness and thereby mitigate the impact of bias on online evaluation.

Procedure: Each classroom session began with a standardized 10-minute introduction to misinformation concepts, followed by the assigned video and then an online accuracy test delivered via QR code (Qualtrics; approximately 10–15 minutes). The second lecture started with the follow-up test, then a second video or customized content, plus a debrief of the posts.

Stimuli: Sixteen Instagram-like posts (snapshots) curated from the web: 8 scientifically valid (backed by published evidence) and 8 invalid (pseudoscientific or false/misleading according to professional fact-checks). Posts were evaluated independently by two researchers using stringent criteria (authorship, pertinence, consensus, publisher) per Martini & Andreoletti; valid posts satisfied all criteria, while invalid posts failed at least three. Students rated three randomly selected posts per test; the follow-up presented different posts. Sources were intentionally unfamiliar (mean familiarity 11%, SD 17%).

Measures:
• Accuracy: Students rated scientific validity from 1 (“completely invalid”) to 6 (“completely valid”). Ratings were scored against ground truth: for valid posts, higher ratings scored higher accuracy; for invalid posts, lower ratings scored higher accuracy. Each test yielded three post-level scores, six in total per student across the two lectures.
• Exploratory variables: confidence (1–10), average weekly sharing, source familiarity (yes/no), source trustworthiness (5-point Likert), external search (left the test page yes/no; where), trust in scientists (6-point Likert), conspiracy belief (composite 5-point Likert), and phone usage time (self-reported and actual screen time, 0–24 hour sliders).
• Attention checks for treatment groups assessed recall of the video content.

Participants: First lecture N=2,288 (Control=663; COR=569; INOC=521; CB=535); matched follow-up N=1,525 (Control=413; COR=396; INOC=352; CB=364). Demographics for the main analysis: 1,111 female, 886 male, 291 other/unspecified; mean, median, and mode age 16 (range 13–25). Reported average phone use was 5–6 hours/day; top platforms: Instagram 83%, WhatsApp 79%, TikTok 68%, BeReal 27%, Snapchat 13%, X 8%, Facebook 3%. Baseline Control ratings: invalid posts were correctly rated invalid 77% of the time (range 60–98%), but valid posts were correctly rated valid only 32% of the time (range 15–58%), indicating strong skepticism.

Analyses: The primary outcome was modeled with a cumulative link mixed-effects logistic regression (R ‘ordinal’ package, clmm): accuracy score as the ordinal dependent variable, treatment as a fixed effect, and random intercepts for post and for students nested within classes within schools. Contrasts used ‘emmeans’ trt.vs.ctrl. If significant, follow-up models would compare treatment follow-up scores to Control baseline, adding a lecture-delay covariate. Robustness checks included a random intercept for experimenter and exclusion of attention-check failures. Multiple comparisons were controlled via Benjamini–Hochberg.
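The reverse-coded accuracy scoring described under Measures can be sketched as follows. This is a minimal illustration of the scoring rule, not the authors' code; the function name and the 7-minus-rating reverse coding are our assumptions, inferred from the 1–6 scale.

```python
def accuracy_score(rating: int, post_is_valid: bool) -> int:
    """Map a 1-6 scientific-validity rating to a 1-6 accuracy score.

    Valid posts keep the rating as-is (a higher rating is more accurate);
    invalid posts are reverse-coded (a lower rating is more accurate).
    """
    if not 1 <= rating <= 6:
        raise ValueError("rating must be between 1 and 6")
    return rating if post_is_valid else 7 - rating


# Rating an invalid post 2 reverse-codes to an accuracy score of 5.
```

Each test then yields three such post-level scores per student, one for each rated post.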

Key Findings

• No significant average treatment effect on discernment: None of the interventions (COR, CB, INOC) significantly increased accuracy versus Control after lecture 1 (Table 1: CB β=−0.0196, z=−0.322, p=0.747; COR β=0.0279, z=0.465, p=0.747; INOC β=0.0283, z=0.460, p=0.747). Results were robust to a binary accuracy metric and to excluding attention-check failures.
• Baseline skepticism: Students tended to rate scientifically valid posts as invalid. In Control, invalid posts were correctly flagged as invalid 77% of the time, but valid posts were rated valid only 32% of the time.
• COR strategy adoption and indirect effects: Self-reported use of lateral reading increased mean accuracy (β=0.215, t(1217)=3.502, p<.001), as did click restraint (β=0.177, t(1217)=2.396, p=.017). Mediation analysis showed COR indirectly improved accuracy via increased adoption of these strategies (Average Causal Mediation Effect: lateral reading 5.2% [1.7%, 10%], p<.001; click restraint 4% [0.8%, 8%], p=.018). Total effects remained non-significant (p>.054), suggesting limited uptake.
• Inoculation-induced skepticism: The interaction between INOC and post validity was significant. For invalid posts, accuracy increased versus Control (β=0.250 [0.050, 0.449], z=3.000, Pcorrected=0.008); for valid posts, accuracy decreased versus Control (β=−0.239 [−0.458, −0.019], z=2.602, Pcorrected=0.028). Overall scientific validity ratings were lower in INOC than in Control (β=−0.279 [−0.485, −0.074], z=3.257, Pcorrected=0.003), indicating heightened generalized skepticism.
• Group size effect for COR: Smaller classes enhanced COR effectiveness, with accuracy increasing as group size decreased (β=−0.015, z=−2.971, Pcorrected=0.003). Average accuracy was 4.07 [3.93, 4.22] in groups of fewer than 25 students versus 3.80 [3.71, 3.90] in groups of 25 or more, a 5.4% difference. The effect was not associated with attention checks or strategy adoption.
• Follow-up: No lasting effects at the 1–4 week follow-up; exploratory effects were no longer significant (all p>.734).
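The Pcorrected values reported above reflect Benjamini–Hochberg adjustment for multiple comparisons. A minimal standalone sketch of the step-up procedure follows; it is illustrative only (the paper's analyses were run in R, where `p.adjust(p, method = "BH")` performs the same adjustment).

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (step-up procedure).

    Each raw p-value is scaled by m/rank (rank in ascending order of p),
    then a cumulative minimum is enforced from the largest rank downward
    so that adjusted p-values are monotone in the raw p-values.
    """
    m = len(pvals)
    # Indices of p-values sorted ascending: order[0] is the smallest p.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

For example, `benjamini_hochberg([0.01, 0.04, 0.03, 0.005])` yields adjusted values of approximately `[0.02, 0.04, 0.04, 0.02]`.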

Discussion

The interventions did not produce significant average improvements in recognizing scientifically valid versus invalid content, challenging the assumption that proven online approaches scale directly to classrooms. The ecological classroom setting—mobile platforms, real-world stimuli, distractions, varying engagement—likely diluted effects compared to controlled experiments. Nonetheless, the pathway analyses suggest that COR can improve discernment indirectly when students adopt lateral reading and click restraint, supporting prior findings on fact-checkers’ strategies. Conversely, inoculation increased general skepticism, improving rejection of invalid content but also undermining trust in valid science posts, highlighting a trade-off between resistance to manipulation and over-criticism of legitimate information. These findings indicate that effective digital literacy interventions must be adapted to the classroom context, potentially favoring smaller groups and interactive, practice-based formats (“learn by doing”) over passive videos to enhance engagement and technique uptake. The strong baseline skepticism and unfamiliar sources suggest students may rely on source familiarity cues; when absent, they default to invalidating content, which has implications for balancing critical scrutiny with trust calibration. Overall, the results underscore scalability challenges and the need to tailor interventions to real-world educational environments while carefully managing unintended consequences like over-skepticism.

Conclusion

The study tested three interventions—Civic Online Reasoning, Cognitive Biases education, and Inoculation—within an ecological classroom setting using mobile delivery and real-world Instagram posts. None yielded significant average improvements in discerning scientific information from disinformation. COR showed small, indirect benefits via increased adoption of lateral reading and click restraint, while INOC increased generalized skepticism that reduced trust in valid content. The work highlights that interventions effective in controlled environments may not scale straightforwardly to classrooms. Future research should develop more interactive, engaging, and contextually adapted approaches, consider class size, incorporate students’ actual information feeds, and measure scientific literacy baselines to better tailor content. These lessons inform the design of scalable educational strategies to enhance digital critical thinking and literacy among youth.

Limitations

• Ecological classroom constraints: Potential spillovers (students copying), limited monitoring of individual behavior, low performance incentives, and distractions; 23.3% failed at least one attention check, 10% failed the initial recall question, and 15.8% failed at least one additional attention check.
• Passive video format: Standardized, brief videos likely reduced engagement relative to interactive practice, although passive online interventions have worked elsewhere.
• Source unfamiliarity and skepticism: The intentional selection of unfamiliar sources may have amplified skepticism; valid posts were often rated as invalid (66–72% across treatments), indicating trust-calibration issues.
• Scientific domain difficulty: Low scientific literacy may have limited intervention effectiveness compared with domains such as political or social misinformation; baseline scientific reasoning was not measured.
• Mobile platform constraints: Completing tasks on phones may hinder lateral reading and complex sourcing compared with computer-based setups.
• Stimulus representativeness: Curated posts may not fully match students’ natural feeds; future work should sample directly from students’ news diets.
• Geographical scope: Concentrated in Northern Italy (Milan, Turin); broader, multi-site replications are needed.
• Attrition and demographic imbalances: Differences in gender proportions, age, completion, and attrition rates across treatments were noted but did not alter the main results after covariate checks.
