What Is in There for Artificial Intelligence to Support Mental Health Care for Persons with Serious Mental Illness? Opportunities and Challenges

Medicine and Health

B. Wang, C. K. Grønvik, et al.

Artificial intelligence (AI) presents exciting possibilities for enhancing mental health services, according to research by Bo Wang, Cecilie Katrine Grønvik, Karen Fortuna, Trude Eines, Ingunn Mundal, and Marianne Storm. While AI can optimize service delivery and support flexibility in recovery, the authors caution that it may increase isolation rather than provide genuine emotional support. Join us in exploring this complex landscape of AI in mental health.

Introduction
The paper examines stakeholder perceptions of using AI to support mental health care for persons with serious mental illness (SMI: schizoaffective disorders, bipolar disorders, and major depressive disorders). While AI can process large datasets, identify complex patterns, and has transformed several somatic specialties, integrating AI in mental health is uniquely challenging due to the subjective and nuanced nature of these conditions, historical mistrust of non–trauma-informed systems, and societal stigma. Despite potential benefits (access, early intervention, personalization), ethical and regulatory issues remain. Evidence specific to SMI is scarce, and how AI can best support this group remains poorly understood. Given the growing burden of mental health conditions and resource shortages, the study aims to identify opportunities and pitfalls by exploring perceptions among diverse mental health stakeholders in Norway.
Literature Review
Methodology
Design: Qualitative individual interviews with multiple stakeholder groups (government, hospital, municipality, university/research institution, health industry/cluster, and user organization) to capture broad perspectives on AI for SMI.
Sampling and recruitment: Purposive sampling based on experience with digital health in mental health care. Participation was voluntary; written informed consent was obtained, with the right to withdraw at any time.
Ethics/oversight: Data come from the corresponding author’s PhD study (assessed by the Norwegian Agency for Shared Services in Education and Research, SIKT; reference no. 269350) and have not been previously published.
Data collection: Interviews were conducted via Microsoft Teams during autumn 2024; audio recordings were transcribed verbatim.
Analysis: Thematic analysis followed six steps (familiarization; generating initial codes; generating themes; reviewing themes; defining and naming themes; writing up), supported by NVivo 14 and Microsoft Excel. Sentiment analysis was performed with the NVivo 14 Autocode Wizard, classifying sentiment as positive, moderately positive, moderately negative, or negative at the word/phrase level based on predefined criteria. Note that the Autocode Wizard scores individual words rather than interpreting content holistically or rating it on a Likert scale (a conceptual sketch of word-level sentiment banding follows below).
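To make the word-level limitation concrete, here is a minimal, hypothetical Python sketch of sentiment banding over individual words. The lexicon, weights, thresholds, and example phrases are invented for illustration only; NVivo 14's Autocode Wizard uses its own dictionaries and rules, so this is merely a conceptual approximation of why word-level coding can miss holistic meaning such as negation.

```python
# Illustrative sketch only: a toy word-level sentiment coder in the spirit of the
# four bands used in the study (positive, moderately positive, moderately negative,
# negative). This is NOT NVivo's Autocode Wizard algorithm; the lexicon and
# thresholds below are hypothetical and chosen purely for demonstration.

from statistics import mean

# Hypothetical lexicon: word -> sentiment weight in [-1, 1].
LEXICON = {
    "helpful": 0.8, "support": 0.6, "opportunity": 0.5,
    "risk": -0.6, "isolation": -0.8, "skeptical": -0.5, "harmful": -0.9,
}

def score_words(text: str) -> list[float]:
    """Return the sentiment weight of each lexicon word found in the text."""
    return [LEXICON[w] for w in text.lower().split() if w in LEXICON]

def classify(text: str) -> str:
    """Map the mean word-level score onto the four sentiment bands."""
    scores = score_words(text)
    if not scores:
        return "no sentiment words found"
    avg = mean(scores)
    if avg >= 0.5:
        return "positive"
    if avg >= 0.0:
        return "moderately positive"
    if avg >= -0.5:
        return "moderately negative"
    return "negative"

# Word-level scoring ignores context: the negation in the last example does not
# flip the score, which is one reason word-level coding can diverge from the
# holistic qualitative themes.
print(classify("AI could be a helpful support and a real opportunity"))  # positive
print(classify("there is a risk of isolation and harmful guidance"))     # negative
print(classify("not helpful at all, mostly a risk"))  # misread as moderately positive
```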
Key Findings
Participants: 22 informants. Demographics: 12/22 (55%) male; 18/22 (82%) aged 40–59; 13/22 (59%) with healthcare backgrounds (psychiatry, psychology, nursing, social education).
Organizational representation: government 2/22 (9%), hospitals 6/22 (27%), municipalities 6/22 (27%), universities/research institutions 6/22 (27%), health industries/clusters 3/22 (14%), user organizations 1/22 (5%).
Sentiment analysis: positive 0; moderately positive 5 (25%); moderately negative 15 (50%); negative 5 (25%). Three-quarters of responses were moderately negative or negative.
Themes and illustrative insights:
- When AI meets SMI:
  • Potential to disrupt negative patterns and support self-understanding (e.g., AI chatbots recognizing reaction patterns, aiding goal-setting and stepwise recovery).
  • Risks of exacerbating isolation; AI’s human-like responses may be misinterpreted as empathy; risk of harmful guidance if prompts are framed deceptively; older adults may be especially vulnerable and skeptical.
- Human-centered AI for humanity:
  • Need for personal adaptation (age, functional level, sensory or reading impairments) and accessibility to mitigate the digital divide; tailoring to linguistic and cultural contexts; concerns about dataset quality, representativeness, and bias, especially for underrepresented vulnerable groups and less widely spoken languages.
  • Maintain the human touch: trusting relationships with clinicians remain central; AI cannot replace but may complement human care.
- AI to improve service delivery for SMI:
  • Enhance clinical decision support: AI-generated alerts for symptom thresholds; monitoring while awaiting care; integration of multimodal vital signs (e.g., heart rate, breathing, movement) for context-aware assessment (a conceptual sketch follows this list).
  • Better resource management: use AI for triage and remote follow-up of less urgent cases so that scarce hospital resources (e.g., beds) are reserved for the most seriously ill.
- Building AI competence:
  • Upskill AI literacy among users, professionals, and leaders, and encourage active, responsible use.
  • Recognize that current evidence in mental health/psychiatry is limited compared with somatic fields, fueling skepticism; further research is needed.
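The following is a conceptual Python sketch of the kind of threshold-based alerting informants described for clinical decision support. The signal names, thresholds, and alert messages are hypothetical and chosen purely for demonstration; a real clinical system would require validated models, clinician oversight, and regulatory approval, none of which this toy example provides.

```python
# Conceptual sketch only: a toy, rule-based alert over multimodal vital signs,
# illustrating the idea of AI-generated alerts for symptom thresholds.
# All thresholds and messages are hypothetical.

from dataclasses import dataclass

@dataclass
class VitalSample:
    heart_rate_bpm: float      # e.g., from a wearable sensor
    breaths_per_min: float
    hours_of_movement: float   # rough activity proxy over the last 24 h

def check_alerts(sample: VitalSample) -> list[str]:
    """Return human-readable alerts when a sample crosses illustrative thresholds."""
    alerts = []
    if sample.heart_rate_bpm > 110:
        alerts.append("Sustained elevated heart rate - consider clinician review")
    if sample.breaths_per_min > 24:
        alerts.append("Elevated breathing rate - possible acute distress")
    if sample.hours_of_movement < 1:
        alerts.append("Very low activity - possible withdrawal or isolation")
    return alerts

if __name__ == "__main__":
    sample = VitalSample(heart_rate_bpm=118, breaths_per_min=18, hours_of_movement=0.5)
    for alert in check_alerts(sample):
        print(alert)  # alerts are prompts for a human clinician, not decisions
```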
Discussion
Findings address the research question by outlining both perceived opportunities (decision support, monitoring, triage, personalization) and challenges (risk of isolation, misinterpretation of AI responses as empathy, bias, data quality, cultural/linguistic fit, digital divide). Although the sentiment analysis skewed moderately negative/negative, the qualitative themes highlighted pragmatic optimism about targeted applications that complement human care. This discrepancy may reflect the heterogeneous backgrounds and experiences of informants and the limitations of word-level sentiment coding.

Significance: Integrating AI into SMI care requires safeguarding human relationships, ensuring safety and reliability, and adapting tools to personal, cultural, and linguistic contexts. Ethical and regulatory frameworks are critical, including transparency and explainability (e.g., alignment with the EU AI Act and MDR), to promote trust, accountability, and responsible resource allocation. Addressing data privacy, bias, and accessibility is essential to avoid widening disparities. Thoughtful use can augment rather than replace empathy and clinical judgment and may improve service efficiency and accuracy when embedded in human-centered workflows.
Conclusion
The study contributes stakeholder-derived insights on the opportunities and challenges of AI integration in serious mental illness care. AI may enhance clinical decision support, monitoring, and resource allocation, and support individualized recovery, but cannot replace human connection and professional judgment. Responsible development should prioritize safety, transparency, accessibility, cultural/linguistic adaptation, and human oversight. Future research should: generate clinical evidence for high-risk SMI contexts; develop explainable, bias-aware models; co-design solutions with users and peer support workers; evaluate universal design for accessibility; and implement training to build AI literacy among all stakeholders.
Limitations
- Sentiment analysis relied on the NVivo Autocode Wizard, which classifies sentiment at the word/phrase level and does not holistically interpret content or provide Likert-scale ratings, potentially contributing to discrepancies with the qualitative themes.
- This is a qualitative study with purposive sampling of 22 Norwegian stakeholders; findings reflect this context and are not statistically generalizable.
- Affiliation and sectoral representation were diverse but uneven (e.g., limited user organization representation), which may influence the perspectives captured.