Social-Media-Based Mental Health Interventions: Meta-Analysis of Randomized Controlled Trials

Psychology

Q. Zhang, Z. Huang, et al.

A preregistered meta-analysis of 17 randomized controlled trials (N=5,624) finds that social-media-based mental health programs modestly reduce anxiety, depression, and stress, with stronger effects when interventions are human-guided, social-oriented, and delivered to female-majority samples. Learn how accessible, low-training social apps could play a role in treating symptoms while weighing their limits and promise. This research was conducted by Qiyang Zhang, Zixuan Huang, Yuan Sui, Fu-Hung Lin, Hongjie Guan, Li Li, Ke Wang, and Amanda Neitzel.
Introduction

Mental disorders affect more than 1 in 8 people worldwide, with anxiety and depression the most common, yet over 70% of affected individuals receive no treatment because of stigma, limited providers, and workforce strain. Social-media-based mental health interventions may address access and cost barriers by delivering scalable, low-cost programs that provide psychoeducation, peer support, and therapeutic interactions within familiar platforms. Despite many reviews of digital and online interventions, few have examined social-media-based interventions for mental health, and none have focused on rigorously designed RCTs in general populations. This meta-analysis addresses that gap by synthesizing rigorous social-media-based RCTs to assess overall effectiveness on negative mental health outcomes and to examine moderators.

Research questions: (RQ1) What are the overall impacts of social-media-based interventions, as evaluated in RCTs, on reducing negative mental health outcomes versus care as usual (CAU) or waitlist controls? H1: Social-media-based interventions effectively alleviate negative mental health outcomes. (RQ2) To what extent do outcomes differ by recruitment type (clinical/nonclinical), age, control type (waitlist/CAU), delivery (self-guided/human-guided), duration, program orientation (social/task), and sex composition? H2: Larger effects are expected for clinical populations, younger groups, passive controls, human-guided delivery, social-oriented programs, samples with more women, and longer durations.

Literature Review

Prior meta-analyses extensively evaluated online, digital, eHealth, computer-based, and internet mental health interventions, but social-media-focused meta-analyses are scarce and largely limited to specific populations (eg, patients with cancer) or youth-focused scoping/systematic reviews. No previous meta-analysis targeted rigorously designed social-media-based mental health interventions for general populations. Given the growth in social media use and the need for scalable, cost-effective mental health solutions, a focused review of high-quality RCTs is warranted to provide robust causal evidence on effectiveness and moderators.

Methodology

Registration: The review was preregistered on OSF. Deviations from protocol included expanding population to all ages (to analyze age as a moderator), narrowing to social-media-based interventions only (to reduce heterogeneity from broader digital modalities), and focusing solely on negative mental health outcomes (to avoid heterogeneity between enhancement of positive outcomes vs reduction of negative symptoms).

Search strategy: Comprehensive searches (completed by April 2025) in seven databases (ERIC, PsycINFO, Scopus, PsycArticles, Communication and Mass Media Complete, PubMed, ProQuest), targeted hand-searching via Paperfetcher across field-relevant journals, and forward/backward citation chasing using CitationChaser. A total of 11,658 records were identified and managed in Covidence.

Eligibility criteria (PICO-guided) included: (1) RCTs only; (2) ≥30 participants per condition at baseline; (3) interventions delivered largely via social media platforms (eg, Facebook, Instagram, WhatsApp, WeChat; excluding abstinence interventions); (4) baseline equivalence between conditions (<0.25 SD difference on mental health measures per WWC standards); (5) differential attrition <15% between treatment and control (WWC standards); (6) delivered by nonresearchers (for real-world feasibility); (7) quantitative measures of negative mental health outcomes (eg, depression, anxiety, stress, psychological distress), with data enabling Hedges g computation; (8) full-text available in English; (9) published ≥2005; (10) primary studies (no secondary analyses); (11) exclude one-item outcome measures; (12) exclude single-session interventions.
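The two quantitative screening thresholds in criteria (4) and (5) can be sketched as a simple check. This is an illustrative Python snippet, not the review team's actual tooling; the function name and inputs are hypothetical:

```python
def passes_wwc_thresholds(ctrl_mean, treat_mean, pooled_sd,
                          treat_attrition, ctrl_attrition):
    """WWC-style eligibility check used in criteria (4) and (5):
    baseline gap < 0.25 SD and differential attrition < 15%."""
    baseline_gap_sd = abs(treat_mean - ctrl_mean) / pooled_sd
    attrition_gap = abs(treat_attrition - ctrl_attrition)
    return baseline_gap_sd < 0.25 and attrition_gap < 0.15

# A 0.1-SD baseline gap with 18% vs 12% attrition passes both thresholds;
# a 25-point differential attrition gap fails the second.
print(passes_wwc_thresholds(10.2, 9.7, 5.0, 0.18, 0.12))  # True
print(passes_wwc_thresholds(10.2, 9.7, 5.0, 0.35, 0.10))  # False
```

A study failing either threshold would be excluded at full-text screening regardless of its other qualities, which is what makes the included sample "rigorously designed" in the paper's sense.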

Screening and coding: Title/abstract and full-text screening were conducted in duplicate by at least two reviewers, blinded to each other's decisions; conflicts were resolved by consensus. Eligible studies were double coded in Google Spreadsheets; procedures, coding spreadsheets, and R code are shared on GitHub. Risk of bias was assessed using the JBI Critical Appraisal Checklist for RCTs, focusing on criteria 2, 4, 5, 6, and 8 (the others are satisfied by the inclusion standards), with two independent coders and a third reviewer resolving discrepancies.

Analytical plan: Effect sizes computed as Hedges g (standardized mean differences) using R's metafor (escalc), weighting by inverse variance with Hedges adjustments. Random-effects meta-regression was used to estimate overall and moderator effects, given expected heterogeneity. Moderators (grand mean-centered) included recruitment type (clinical vs nonclinical), age group (adolescents <20; early adulthood 20–<40; middle adulthood 40–<60; late adulthood ≥60), control type (waitlist vs CAU/active), delivery (self-guided vs human-guided), duration (weeks), sex composition (≥70% female vs <70%), and program orientation (social-oriented vs task-oriented). Publication bias assessed via weight-function selection models (weightr) rather than funnel plots/Egger’s. Sensitivity analyses included varying the majority-female threshold (70% vs 50%). All materials and data available on GitHub.
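As a rough sketch of the effect-size pipeline: the review used R's metafor, but the standard Hedges g formula and DerSimonian-Laird random-effects pooling it builds on can be re-implemented in a few lines. The Python below is for illustration only (a common large-sample variance approximation is used; all trial numbers are hypothetical):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction.
    Returns (g, variance). Pass the control arm first so that lower
    symptom scores in the treatment arm yield a positive g."""
    # Pooled standard deviation across the two arms
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    g = j * (m1 - m2) / sp
    # Common large-sample approximation to the sampling variance
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def pool_random_effects(estimates):
    """Inverse-variance random-effects pooling with DerSimonian-Laird tau^2.
    `estimates` is a list of (g, variance) pairs."""
    ys, vs = zip(*estimates)
    w = [1 / v for v in vs]
    fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)   # between-study variance
    w_star = [1 / (v + tau2) for v in vs]
    pooled = sum(wi * yi for wi, yi in zip(w_star, ys)) / sum(w_star)
    return pooled, math.sqrt(1 / sum(w_star)), tau2

# Two hypothetical trials: control mean/SD/n first, then treatment arm
g1 = hedges_g(14.5, 5.5, 58, 12.0, 5.0, 60)
g2 = hedges_g(10.9, 4.0, 47, 9.8, 4.2, 45)
pooled, se, tau2 = pool_random_effects([g1, g2])
```

The actual analysis is a cluster-aware random-effects meta-regression with moderators, which this two-study sketch does not attempt to reproduce; it only shows where the Hedges-adjusted weights come from.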

Key Findings
  • Included studies: 17 RCTs, 22 distinct programs, total N=5,624; 73 effect sizes (ESs): depression (n=31; 42.5%), anxiety (n=27; 37.0%), stress (n=12; 16.4%), negative affect (n=2; 2.7%), psychological distress (n=1; 1.4%).
  • Overall effect: Random-effects model ES=0.32 (95% CI 0.24–0.45), P<.001; prediction interval approx. −0.38 to 1.08; substantial heterogeneity I²=88.1%, τ²=0.13; partial I²: 28.87% between studies, 59.23% within-cluster.
  • Outcome subgroups: Depression ES=0.31 (P<.001, n=31); Anxiety ES=0.33 (P=.04, n=27); Stress ES=0.69 (P=.02, n=12). All indicate beneficial effects.
  • Moderator effects:
    • Sex: Samples with ≥70% female had larger effects (β=1.40, P=.01; marginal means: ≥70% female ES=1.81, P<.01 vs <70% ES=0.41, P<.05); robust to a 50% threshold in sensitivity analysis.
    • Delivery: Human-guided > self-guided (β=−0.72 for self-guided vs human-guided, P=.02; marginal means: human-guided ES=1.35, P<.01; self-guided ES=0.63, P<.05).
    • Program focus: Social-oriented > task-oriented (β=−0.76 for task vs social, P=.03; marginal means: social ES=1.20, P<.01; task ES=0.44, P<.05).
    • Control type: CAU/active > waitlist (β=−0.49 for waitlist vs CAU, P=.02; marginal means: CAU ES=1.37, P<.01; waitlist ES=0.88, P<.01).
    • Age: No statistically significant moderation; adolescents vs middle adulthood approached significance (P=.06); late adulthood had the largest marginal mean ES but was based on limited data.
    • Clinical vs nonclinical: Not significant (β=0.34, P=.17).
    • Duration: Not significant (P=.26).
  • Platforms and high performers: WhatsApp most common and among highest ES programs (e.g., Hemdi & Daley; Hatamleh et al; Abedishargh et al). Other platforms included WeChat, YouTube, Facebook, Google Hangouts, Horyzons.
  • Publication bias: Weight-function selection modeling suggested upward adjustment of mean effects due to selective reporting (nonsignificant results less likely reported), but main conclusions remained.
  • Risk of bias: Overall low; mean JBI appraisal score 9.29/13; blinding and allocation concealment often unclear; follow-up completeness variably reported.
  • Temporal trend: Publications meeting rigorous criteria increased 2020–2024 (pandemic period) and slightly declined in 2023–2024, remaining above prepandemic levels.
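As a rough plausibility check on the prediction interval reported above, the standard formula PI = ES ± t(k−2) × sqrt(τ² + SE²) can be applied to the summary numbers. The SE below is back-calculated from the reported CI width, which is an assumption on my part; the paper's CI is asymmetric (likely cluster-robust), so exact agreement with the reported −0.38 to 1.08 is not expected:

```python
import math

k = 17                             # included studies
es, tau2 = 0.32, 0.13              # reported pooled effect and tau^2
se = (0.45 - 0.24) / (2 * 1.96)    # SE approximated from the 95% CI width
t_crit = 2.131                     # t quantile at 0.975, df = k - 2 = 15
half = t_crit * math.sqrt(tau2 + se**2)
print(round(es - half, 2), round(es + half, 2))  # -0.46 1.1
```

The interval comfortably spans zero even though the average effect is positive, consistent with the paper's caution that individual programs can plausibly range from null (or slightly harmful) to large beneficial effects.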

Discussion

Findings support that rigorously designed social-media-based interventions reduce negative mental health symptoms (depression, anxiety, stress), addressing RQ1 and confirming H1. Moderator analyses (RQ2) indicate stronger effects for programs that are human-guided, social-oriented, and conducted in samples with ≥70% female participants, and when controls are CAU/active rather than waitlist.

Greater benefits among majority-female samples may reflect higher engagement with social support and help-seeking behaviors aligned with tend-and-befriend responses. Social-oriented designs may enhance therapeutic alliance and peer support, which are central to mental health improvement, potentially interacting with sex composition. Human guidance likely boosts adherence, motivation, and timely support, enhancing outcomes despite scalability trade-offs. The unexpected pattern of CAU/active controls outperforming waitlist may reflect waitlisted participants independently seeking alternative treatments in accessible postpandemic care environments, diluting the contrast.

Age and duration did not significantly moderate effects, suggesting broad applicability across age groups and that program quality and design features may matter more than length. The overall heterogeneity underscores the diversity of interventions, populations, and measures; nonetheless, risk of bias appears low and sensitivity analyses support robustness. Publication bias analyses indicate some upward bias in reported effects, warranting cautious interpretation of magnitudes.

Conclusion

This meta-analysis provides high-quality evidence that social-media-based interventions can effectively reduce negative mental health outcomes across diverse populations. Programs that incorporate human guidance and social interaction, and those evaluated against CAU/active controls, tend to yield stronger effects, with particularly larger benefits in samples with a higher proportion of women. Given their scalability, accessibility, and cost-effectiveness, social-media-delivered interventions should be considered for integration into routine mental health services, especially where access to traditional care is limited. Future research should: (1) conduct more rigorous, adequately powered RCTs, including three-arm designs (eg, intervention vs CAU vs waitlist) to clarify control effects; (2) examine interactions among moderators (eg, sex, program orientation, personality, comfort with technology, and disorder type); (3) report and analyze race/ethnicity and equity-related factors; (4) evaluate mechanisms (eg, social support, engagement, therapeutic alliance) and implementation fidelity; and (5) explore optimal blends of human guidance with scalable delivery to balance effectiveness and reach.

Limitations
  • Limited number of included studies (17 RCTs; 22 programs) restricts statistical power and precision of moderator estimates (reflected in low degrees of freedom).
  • Small subsamples within certain moderator/outcome categories (eg, only 1 ES for psychological distress, 2 ESs for negative affect, and 1 late-adulthood study), requiring cautious interpretation of subgroup findings.
  • Substantial heterogeneity (I²=88.1%), with notable within-study variability, indicating diverse designs and measures.
  • Potential publication/selection bias suggested by weight-function modeling; average effects may be upwardly biased.
  • Incomplete reporting on some methodological aspects (eg, blinding, allocation concealment, follow-up completeness) and insufficient data on race/ethnicity across primary studies, limiting equity analyses.
  • Inclusion criteria (eg, n≥30 per arm, nonresearcher delivery, post-2005, exclusion of one-item and single-session measures) enhance rigor but may limit generalizability to other intervention formats.