Medicine and Health
Social Media and Youth Mental Health: Scoping Review of Platform and Policy Recommendations
J. Chhabra, V. Pilkington, et al.
Public concern over soaring social media use and youth mental ill‑health has outpaced evidence — and solutions. This scoping review synthesizes recommendations for governments and platforms to protect 12–25‑year‑olds, grouping guidance into five themes: legislating accountability; transparency; collaboration; safety by design; and restricting access. Research conducted by Jasleen Chhabra, Vita Pilkington, Ruben Benakovic, Michael James Wilson, Louise La Sala, and Zac Seidler.
Introduction
The impact of social media on young people’s mental health is a pressing concern across Western nations where the vast majority of adolescents and young adults (12-25 years) use social media daily. Young people report social and personal benefits, including maintaining relationships, expanding networks, identity exploration, and access to information—especially for marginalized groups such as LGBTQ+ youth. At the same time, exposure to harmful content (eg, self-harm, suicidality, disordered eating, cyberbullying), predatory behavior, misinformation/disinformation, and extremism/hate speech has been associated with adverse mental health outcomes. Platform design features—such as recommender systems, endless scrolling, and algorithmically curated feeds—can amplify exposure to harmful content and encourage compulsive use linked to sleep disturbance, depression, and anxiety. Regulators (eg, eSafety Commissioner in Australia, Ofcom in the UK, FTC/FCC in the US, CRTC in Canada) face an “arms race” with rapidly evolving technologies and opaque platform practices. Amid growing calls for stricter regulation, some jurisdictions are pursuing access restrictions for minors. In this context, the review aims to synthesize recommendations directed to governments, regulators, and social media companies on how platforms can be designed and regulated to safeguard young people’s mental health and minimize harm.
Literature Review
The paper’s background synthesizes existing literature on both the benefits and harms of social media for young people. Benefits include social connection, belonging, identity exploration, and supportive communities, particularly for marginalized youth. Harms documented in prior research include exposure to self-harm and suicide-related content, which has been associated with increased depression, anxiety, and suicide attempts; disordered eating and body image concerns (especially among adolescent girls); cyberbullying, which is associated with depression, anxiety, and suicidal ideation; predatory interactions and child sexual abuse material, with severe psychological impacts; and the spread of misinformation/disinformation and extremist content affecting mental health and social attitudes. The literature also highlights design-driven risks (recommender systems, endless scrolling, algorithmically personalized feeds) that can amplify harmful content and compulsive use. Prior policy contexts and regulatory efforts are summarized, noting gaps due to rapid technological change and inconsistent legislation.
Methodology
The review followed PRISMA-ScR guidelines. The search strategy, developed with a university librarian, prioritized gray literature for practical, policy-oriented recommendations. Databases: Overton and Google (primary); searches were also run in PubMed, Scopus, and PsycINFO to ensure comprehensive coverage, although peer-reviewed literature, while eligible, yielded few relevant documents. Search terms: “Social Media” AND “Young People” AND “Mental Health” AND (“Recommendations” OR “Guidelines”). Inclusion criteria: (1) focus on young people aged 12-25 years; (2) recommendations for social media companies, governments, or regulators to promote safe social media products/services; (3) peer-reviewed articles, case studies, or gray literature (policy briefs, reports); (4) published in Australia, Canada, the UK, the US, or by global organizations; (5) published between January 2020 and September 2024; (6) English language. Screening: results were imported into Covidence; two authors independently screened titles/abstracts (disagreement <2%; Gwet AC1=0.97; Cohen κ=0.54); 25% of full texts were dual-screened with 0% disagreement, after which single-author screening was checked by coauthors. Data extraction: title, authors, organization, year, country, document type, target outcomes/audience, and recommendations for companies/regulators. Synthesis: documents were exported to NVivo (v14) for inductive content analysis; preliminary themes and subthemes were developed by the first and second authors and refined collaboratively with coauthors.
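The screening step reports two agreement statistics side by side (Gwet AC1=0.97; Cohen κ=0.54), which can look contradictory. As a minimal sketch, using hypothetical include/exclude decisions (not the review's actual data), the snippet below shows how each coefficient is computed and why they diverge: under the heavily skewed prevalence typical of screening, where most records are excluded, Cohen's kappa is deflated by its chance-correction term, while Gwet's AC1 tracks the high raw agreement.

```python
# Illustrative sketch only (not the authors' code): computing the two
# inter-rater agreement statistics reported for title/abstract screening,
# Cohen's kappa and Gwet's AC1, from paired include/exclude decisions.
# The decision lists below are hypothetical.
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement
    estimated from each rater's own marginal category proportions."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

def gwet_ac1(r1, r2):
    """Gwet's AC1: an agreement coefficient that stays stable when one
    category dominates (as in screening, where most records are excluded)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    cats = sorted(set(r1) | set(r2))
    c1, c2 = Counter(r1), Counter(r2)
    # pi_c: average share of all ratings (across both raters) in category c
    pis = [(c1[c] + c2[c]) / (2 * n) for c in cats]
    pe = sum(p * (1 - p) for p in pis) / (len(cats) - 1)
    return (po - pe) / (1 - pe)

# Hypothetical screening decisions for 100 records: both raters agree on
# 2 inclusions and 95 exclusions, and disagree on the remaining 3 records.
r1 = ["inc"] * 2 + ["exc"] * 95 + ["inc", "inc", "exc"]
r2 = ["inc"] * 2 + ["exc"] * 95 + ["exc", "exc", "inc"]

print(round(cohen_kappa(r1, r2), 2))  # 0.56: kappa is deflated by the skew
print(round(gwet_ac1(r1, r2), 2))     # 0.97: AC1 reflects the 97% raw agreement
```

With 97% raw agreement, the two coefficients land far apart (kappa ≈ 0.56, AC1 ≈ 0.97), mirroring the pattern the review reports; this is the known "kappa paradox" under skewed prevalence and is presumably why both statistics were given.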
Key Findings
- Search and selection: 6966 records identified; 1991 duplicates removed; 4980 titles/abstracts screened; 4860 excluded; 120 full texts assessed; 70 documents included.
- Recommendation targets among included reports (n=70): 24% (17/70) to regulators/governments; 20% (14/70) to social media companies; 56% (39/70) to both.
- Targeted outcomes and exposures: 50% (35/70) mental health and well-being; 10% (7/70) self-harm/suicidality; 23% (16/70) misinformation/disinformation; 9% (6/70) discrimination/hate crime/hate speech; 13% (9/70) extremist content; 7% (5/70) misogynistic content; 4% (3/70) cyberbullying; 11% (8/70) body image/disordered eating; 13% (9/70) pornography/child sexual abuse/exploitation. Geographic distribution: Australia (31%, 22/70), UK (26%, 18/70), US (24%, 17/70).
- Five interrelated themes of recommendations:
1) Legislating and overseeing accountability (n=39): create/reform legislation (eg, platform liability; Section 230 reforms), expand definitions of online harms (eg, body image, misinformation, extremism), and empower/equip regulators (independent audits, data access, penalties).
2) Transparency (n=25): advertising and algorithmic transparency (ad repositories; clearer disclosure of targeting/political ads; influencer disclosures) and prescriptive transparency reports (metrics on detection/removal, automated review accuracy, violation types, response times, take-down orders, settlements).
3) Collaboration (n=50): cross-sector collaboration among platforms, governments, researchers, health practitioners, and youth; data-sharing agreements; inclusion of vulnerable and diverse youth voices; cross-platform consistency and independently auditable joint efforts.
4) Safety by design (n=34): privacy (data minimization; default-max privacy; limiting profiling/personalized ads), content moderation (investment in human/AI moderation across languages; updating filters to prevent evasion), and autonomy (opt-outs; user controls to suppress harmful content; disable infinite scroll; ban dark patterns; informed consent for data use).
5) Restricting young people’s access (n=14): age verification mandates (benefits and drawbacks), privacy and feasibility concerns; explore alternatives (digital ID, caregiver involvement, facial recognition, OS-level child flags) and emphasize multipronged solutions beyond age gates.
Discussion
The review addresses the need to synthesize actionable recommendations for governments, regulators, and platforms to safeguard youth mental health amid rapid platform changes and opaque practices. Findings underscore that existing moderation and self-regulation are insufficient, and that legislative accountability and empowered regulators are essential to deter and remediate harms. Safety-by-design approaches targeting privacy, moderation quality, and user autonomy can mitigate design-driven risks (eg, recommender systems, infinite scroll) and reduce exposure to harmful content while preserving beneficial aspects of social media (connection, identity, support). Transparency—via ad/algorithm disclosures and robust, prescriptive reporting—enables oversight, comparability, and public trust, but requires standardized content and independent verification through researcher data access. Collaboration across stakeholders, with youth centrally involved, is critical to ensure relevance, equity, and responsiveness to evolving risks (eg, evasion tactics, AI-generated content). Debates on age restrictions highlight trade-offs: while potentially limiting exposure, they pose privacy risks and may undermine access to supportive online spaces; thus, nuanced, multipronged policies are favored over blunt bans. Overall, the synthesis provides a timely, expert-informed roadmap aligned with emerging empirical evidence and calls for rigorous evaluation of proposed measures.
Conclusion
This scoping review consolidates expert-based recommendations into five interrelated themes that collectively offer a multipronged strategy for governments, regulators, and social media companies to protect young people’s mental health. The review contributes a practical blueprint for legislative accountability, transparency, collaboration, and safety-by-design, while critically appraising age-restriction approaches. It emphasizes coordinated, evidence-informed action and the importance of involving young people and researchers. Future directions include rigorous, peer-reviewed evaluations of proposed policies and design changes, standardized transparency reporting, enhanced researcher data access under privacy-protective frameworks, and international comparative work to address global consistency and applicability.
Limitations
- Language and scope: Predominantly English-language documents from selected Western contexts, limiting generalizability to non-English-speaking and low- to middle-income countries.
- Age granularity: Recommendations were not differentiated by developmental stages within 12-25 years due to limited age-specific guidance.
- Evidence base: Heavy reliance on gray literature; recommendations were seldom informed by rigorous peer-reviewed evaluations, precluding assessment of efficacy and real-world impacts.
- Focus: Intentional emphasis on industry/regulator actions; recommendations for young people, clinicians, caregivers, or educators were outside scope.
- Quality appraisal: Gray literature dominance limited formal quality assessment; calls for future peer-reviewed studies to evaluate proposed interventions.