NLP as a Lens for Causal Analysis and Perception Mining to Infer Mental Health on Social Media

M. Garg, C. Saxena, et al.

This position paper examines the use of Natural Language Processing for causal analysis and perception mining to assess mental health from social media interactions. The authors, Muskan Garg, Chandni Saxena, Usman Naseem, Sohn Sunghwan, and Bonnie J Dorr, advocate developing explainable AI models to improve mental health evaluation methods.

Introduction
The paper addresses the need for explainable, clinically meaningful automation to detect and understand mental health states from social media, especially amid the increased prevalence of anxiety and depression during the COVID-19 pandemic and limited access to mental health services. It motivates leveraging self-disclosures on social platforms for Mental Health Analysis (MHA) to support behavioral therapy and risk identification. The authors argue that current approaches often focus on classification using surface-level linguistic features (Level 0) and call for deeper, interpretable analyses (Level 1) that uncover reasons (cause-and-effect) and underlying perspectives (perceptions) reflected in user posts. They propose two complementary dimensions—causal analysis and perception mining—grounded in discourse and pragmatics to bridge computational techniques and clinical practice, ultimately aiding real-time, personalized conversational AI for mental health support. The paper is a position piece informed by NLP researchers and a senior clinical psychologist, highlighting opportunities, challenges, and a vision for explainable AI in mental health.
Literature Review
The authors survey the state of community work on social-media-based MHA, noting numerous machine learning and deep learning studies for identification and prediction of mental health conditions across digital data, medical records, long clinical reports, social media text, and multimodal sources. Reviews highlight issues such as demographic bias, consent, and theoretical grounding in human behavior. Current dominant efforts (Level 0) rely on handcrafted or automated features for classification/prediction and adhere to ethical protocols given data sensitivity. Initial work on causal explanations in social media (e.g., Facebook data) indicates the promise of moving beyond correlation to causality to understand reasons behind mental illness. The paper positions Level 1 research to incorporate cause-and-effect and perception-and-affect through discourse and pragmatics, emphasizing explainability and clinical relevance.
Methodology
As a position paper, the methodology outlines a conceptual framework rather than empirical procedures. The authors define two core dimensions:
- Causal Analysis: A cross-sectional approach to reveal the reasons behind a user's mental state as expressed in posts. It comprises three sub-tasks: (1) Cause Detection, a binary determination of whether a post contains reasons behind the user's mental intent; (2) Causal Inference, extractive or abstractive identification of the text spans that explain the reason; (3) Cause Categorization, assignment of detected causes to topical categories (e.g., jobs/career, relationships, medication, bias/abuse, alienation). Psychological theories and clinical cues (e.g., insomnia, weight changes, feelings of worthlessness) inform potential cause categories. The framework encourages moving beyond surface features to discourse parsing, knowledge graphs (triples <event, object, relation> linking triggering events, aspects of wellbeing, and situational context), and discourse relation modeling to map cause and effect. Sentence simplification guided by semantic dependency (e.g., SISS) is proposed to manage long, complex self-reports without losing causal semantics.
- Perception Mining: A longitudinal, discourse- and pragmatics-informed analysis to infer users' beliefs, morals, identities, and psychological perspectives that shape attitudes and behaviors over time. Drawing on self-perception theory, structural balance theory, and moral psychology, the approach distinguishes perception from personality and leverages datasets on moral sentiment, beliefs, and empathy to infer perception categories (e.g., FREEDOM, ATTRACTION, ASSET) from user timelines. Pragmatics-inspired methods (empathetic conversations, commonsense knowledge infusion) are highlighted for modeling real-time supportive interactions.
The paper proposes explainable representations that pair causes and perceptions with mental health outcomes, advocating a pipeline of cause detection → causal inference → cause categorization, supported by perception mining and discourse analysis for interpretable outputs; a minimal sketch of this pipeline follows.
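To make the proposed pipeline more concrete, below is a minimal Python sketch of the three causal-analysis sub-tasks and the <event, object, relation> triple described above. The function names, keyword lists, and category labels are illustrative assumptions rather than the authors' implementation; in practice each stage would be a trained classifier or span-extraction model built on annotated corpora such as CAMS.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative cause categories drawn from the paper's examples;
# a real system would learn these from annotated data (assumption).
CAUSE_CATEGORIES = {
    "jobs/career": ["fired", "laid off", "unemployed", "job"],
    "relationships": ["breakup", "divorce", "girlfriend", "boyfriend"],
    "medication": ["meds", "medication", "dosage"],
    "bias/abuse": ["bullied", "harassed", "abused"],
    "alienation": ["alone", "isolated", "no one cares"],
}

@dataclass
class CausalTriple:
    """Knowledge-graph style triple <event, object, relation> linking a
    triggering event to an aspect of wellbeing via a relation."""
    event: str
    obj: str
    relation: str

@dataclass
class CausalAnalysis:
    has_cause: bool                 # sub-task 1: cause detection
    cause_span: Optional[str]       # sub-task 2: causal inference (extractive)
    cause_category: Optional[str]   # sub-task 3: cause categorization

def cause_detection(post: str) -> bool:
    """Binary check: does the post state a reason? (keyword stand-in for a classifier)"""
    return any(kw in post.lower() for kws in CAUSE_CATEGORIES.values() for kw in kws)

def causal_inference(post: str) -> Optional[str]:
    """Extract the sentence carrying the reason (placeholder for a span-extraction model)."""
    for sentence in post.split("."):
        if cause_detection(sentence):
            return sentence.strip()
    return None

def cause_categorization(span: str) -> Optional[str]:
    """Map an extracted cause span to a topical category."""
    for category, keywords in CAUSE_CATEGORIES.items():
        if any(kw in span.lower() for kw in keywords):
            return category
    return None

def analyze(post: str) -> CausalAnalysis:
    """Pipeline: cause detection -> causal inference -> cause categorization."""
    if not cause_detection(post):
        return CausalAnalysis(False, None, None)
    span = causal_inference(post)
    category = cause_categorization(span) if span else None
    return CausalAnalysis(True, span, category)

if __name__ == "__main__":
    post = "I was laid off last month. I feel worthless and can't sleep."
    print(analyze(post))
    print(CausalTriple(event="laid off", obj="self-worth", relation="undermines"))
```

Running the example yields an analysis flagging the "laid off" span under jobs/career; swapping the keyword stand-ins for learned models leaves the pipeline structure unchanged.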
Key Findings
This position paper advances the following key points:
- Establishes NLP as a lens for deeper, explainable Mental Health Analysis by prioritizing cause-and-effect (causal analysis) and psychological perspectives (perception mining) over surface-level classification.
- Formalizes causal analysis into three sub-tasks (cause detection, causal inference, cause categorization) and illustrates their application with example posts and cause categories (e.g., jobs/career, relationships, alienation).
- Advocates discourse parsing, knowledge graphs, and semantic dependency–guided sentence simplification to capture nuanced causal relations in complex self-reported texts.
- Frames perception mining as a longitudinal, pragmatics-driven endeavor to uncover beliefs, morals, and identities that influence mental states, demonstrating through a user timeline how perceptions (e.g., FREEDOM, ATTRACTION, ASSET) can be inferred.
- Proposes explainable output formats that explicitly encode causal relationships and perception links to mental health indicators (e.g., causal relationship(neglect, suicide risk); perception mining(alienation, suicide risk)); see the sketch after this list.
- Compiles initial resources and tasks (e.g., CEASE, CAMS, RHMD, empathetic conversations) to seed research on causal analysis and perception mining.
- Identifies major challenges: limited datasets, robust evaluation metrics for perspectives/perceptions, and deriving discourse-specific explanations from long texts.
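The explainable output format mentioned above (e.g., causal relationship(neglect, suicide risk); perception mining(alienation, suicide risk)) can be pictured as a small structured record pairing causal and perception-based evidence with a mental health indicator. The schema below is a hedged illustration with assumed field names, not a specification from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ExplainableMHAOutput:
    """Pairs an inferred mental health indicator with the causal and
    perception-based evidence that explains it (illustrative schema)."""
    indicator: str                                                  # e.g., "suicide risk"
    causal_relationships: List[Tuple[str, str]] = field(default_factory=list)
    perception_links: List[Tuple[str, str]] = field(default_factory=list)

    def render(self) -> str:
        """Render evidence in the cause/perception notation used in the paper's example."""
        parts = [f"causal relationship({c}, {e})" for c, e in self.causal_relationships]
        parts += [f"perception mining({p}, {e})" for p, e in self.perception_links]
        return "; ".join(parts)

# Example values mirror the paper's illustration of an explainable output.
output = ExplainableMHAOutput(
    indicator="suicide risk",
    causal_relationships=[("neglect", "suicide risk")],
    perception_links=[("alienation", "suicide risk")],
)
print(output.render())
# -> causal relationship(neglect, suicide risk); perception mining(alienation, suicide risk)
```

Keeping causes and perceptions as explicit fields, rather than folding them into a single predicted label, is what makes the output auditable by clinicians.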
Discussion
The paper argues that clinically useful, trustworthy MHA systems require interpretable models capable of explaining why a post signals a given mental state. By integrating causal analysis and perception mining, researchers can move from mere category assignment to explanations that reflect users’ reasons and perspectives. The authors outline a stepwise pipeline—cause detection, causal inference, and cause categorization—augmented by perception mining to contextualize and strengthen causal interpretations. They stress discourse analysis, knowledge graphs, and pragmatics as enablers of this explainability. Ethical considerations (privacy, responsibility, transparency, fairness) and the need for better datasets and evaluation frameworks are emphasized for responsible deployment in mental healthcare and for developing conversational AI that can provide empathetic, transparent support.
Conclusion
The paper presents a perspective that mental health inference from social media should be advanced via two explainability-focused dimensions: causal analysis (cause detection, inference, categorization) and perception mining (pragmatics- and discourse-informed understanding of beliefs, morals, and identity). It highlights discourse relations and knowledge graphs as core tools, and advocates perception mining as a backbone that supports causal interpretation. The authors call for building richer, transparent, and responsible models to bridge computational intelligence and clinical practice, enabling real-time, ethical applications such as conversational agents for mental health support.
Limitations
The work is limited to an abstract/theoretical perspective without implementation or empirical studies. It does not provide integrated experiments on discourse-based explainability and restricts its scope to social media language rather than clinical symptomatology or diagnoses. The authors plan future empirical work to validate and integrate discourse analyses and explainable AI methods.