Exploring the mechanism of sustained consumer trust in AI chatbots after service failures: a perspective based on attribution and CASA theories

Computer Science

C. Gu, Y. Zhang, et al.

This research, conducted by Chenyu Gu, Yu Zhang, and Linhao Zeng, unveils critical insights into maintaining consumer trust in AI chatbots after service failures. Integrating CASA theory with attribution theory, it reveals how human-like qualities in chatbots can sustain trust even in the face of failures. Don't miss out on these key findings for AI chatbot development!

~3 min • Beginner • English
Introduction
The paper examines how to maintain consumer trust in AI customer service chatbots after service failures. While AI chatbots are becoming ubiquitous and improve efficiency, failures in human–AI interactions can erode user trust and harm brands. Prior work emphasized functional qualities (e.g., speed, accuracy) and technology acceptance, with limited attention to social and emotional dimensions and the role of anthropomorphism in failure contexts. The study integrates CASA theory and attribution theory to investigate how perceived anthropomorphic characteristics, empathic ability, and interaction quality influence consumers’ attributions (internal vs. external) for failures and, in turn, sustained trust. It also introduces AI anxiety as a moderating cognitive factor. Hypotheses propose that internal attributions reduce, while external attributions increase, sustained trust; CASA factors shape attribution tendencies; and AI anxiety both directly reduces sustained trust and amplifies the negative effect of internal attributions on trust.
Literature Review
The theoretical framework synthesizes five strands: (1) AI chatbots and service failure: limitations in AI's natural language processing can cause failures, which grow more frequent as deployment widens and carry consequences for trust and brand perceptions. (2) Attribution theory: users explain failures via internal causes (the AI's capability) versus external causes (the environment or other agents); internal attributions are expected to decrease sustained trust because AI limitations are perceived as stable, whereas external attributions need not. (3) CASA theory: people respond socially to computers, so social cues from AI (anthropomorphism, empathy, interaction quality) shape perceptions and behaviors much as they do in interpersonal contexts. (4) Effects of CASA factors on attribution: anthropomorphism may shift blame away from the AI's capability toward external causes; empathy can reduce blame and increase forgiveness; positive interaction quality shapes expectations and para-social bonds, potentially favoring external attributions. (5) AI anxiety: user anxiety toward AI and robots, which prior research links to negative attitudes and lower acceptance. The study hypothesizes that AI anxiety reduces sustained trust and moderates the internal attribution–trust link. The model includes covariates: gender, age, education, and average daily internet usage.
Methodology
Design: Cross-sectional online survey in two parts. Part 1 measured relatively stable perceptions (the three CASA factors and AI anxiety). Part 2 was completed only by respondents who had experienced an AI chatbot service failure and measured attributions (internal/external) and sustained trust; respondents were asked to recall their most memorable AI chatbot service failure to contextualize their answers.

Sampling and procedure: Volunteers were recruited via multiple social media platforms (e.g., Facebook, TikTok) to broaden the demographic reach. Inclusion and data-quality criteria: (1) confirmed experience with an AI chatbot service failure; (2) passed screening questions; (3) response time longer than 60 s; (4) no straight-lining across more than 8 consecutive items. Of 600 collected questionnaires, 462 were valid (a 77% valid-response rate).

Sample characteristics: 49.1% male (n = 227), 50.9% female (n = 235); age 18–26 (58.7%), 27–40 (32.9%), 41–55 (8.0%), 56 and older (0.4%); education: middle/high school (8.2%), undergraduate (60.6%), master's/doctorate (31.2%); daily internet use: <1 h (1.9%), 1–3 h (36.1%), 3–5 h (49.8%), >5 h (12.1%).

Measures: Seven latent variables on 7-point Likert scales, using validated or adapted instruments: perceived anthropomorphic characteristics (6 items; e.g., feeling the chatbot has emotions; Wang, 2017), perceived empathic ability (4 items; e.g., understands my feelings; Simon, 2013), perceived interaction quality (4 items; e.g., effective two-way communication; Kim & Baek, 2018), AI anxiety (5 items; e.g., concern about privacy disclosure; Song & Kim, 2022), internal attribution (4 items; e.g., inadequate algorithms; Lei & Rau, 2021), external attribution (4 items; e.g., unclear consumer expression; Lei & Rau, 2021), and sustained trust (3 items; e.g., intention to continue using; Koufaris & Hampton-Sosa, 2004).

Analysis: Partial least squares path modeling (PLS-PM) in SmartPLS, chosen for the study's exploratory aims, model complexity, distributional robustness, and moderate sample size. Reliability and validity were assessed via item loadings, Cronbach's alpha, composite reliability (CR), average variance extracted (AVE), VIF, and discriminant validity (Fornell–Larcker criterion). Common method bias was checked with Harman's single-factor test. Model fit and predictive relevance were evaluated with R², Q², SRMR, RMS Theta, CFI, TLI, and NFI. Hypotheses were tested via bootstrapping (5,000 resamples; 95% CIs), mediation via indirect effects, and moderation via interaction terms, controlling for gender, age, education, and average daily internet usage.
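The reliability and validity indices named above have standard closed-form definitions. Below is a minimal sketch of how they can be computed, not the authors' SmartPLS workflow; the item data, column names, and loadings are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    k = items.shape[1]
    item_var_sum = items.var(ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = loadings.sum()
    return s ** 2 / (s ** 2 + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    # AVE = mean squared standardized loading
    return (loadings ** 2).mean()

# Hypothetical data: three items for one construct (names and values are illustrative).
rng = np.random.default_rng(42)
latent = rng.normal(size=462)  # n = 462, matching the study's valid sample
items = pd.DataFrame(
    {f"trust_{i}": 0.9 * latent + rng.normal(scale=0.5, size=462) for i in (1, 2, 3)}
)
loadings = np.array([0.88, 0.90, 0.92])  # illustrative, not the paper's estimates

print(f"alpha = {cronbach_alpha(items):.3f}")            # should exceed 0.7
print(f"CR    = {composite_reliability(loadings):.3f}")  # should exceed 0.7
print(f"AVE   = {ave(loadings):.3f}")                    # should exceed 0.5
```

Under the Fornell–Larcker criterion used in the paper, discriminant validity then requires the √AVE of each construct to exceed its correlations with every other construct.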
Key Findings
Measurement model: Item loadings ranged from 0.731 to 0.958; Cronbach's alpha from 0.778 to 0.958; CR exceeded 0.7 and AVE exceeded 0.5 for all constructs; all VIFs were below 10. Discriminant validity was supported (√AVE greater than inter-construct correlations). Harman's single-factor test: the first factor explained 29.012% of variance (below the 40% threshold), indicating no serious common method bias.

Model fit: R² for all endogenous constructs exceeded 0.1; Q² > 0; SRMR = 0.054 (< 0.08); RMS Theta = 0.115; CFI = 0.923, TLI = 0.912, NFI = 0.903 (all > 0.9), indicating good fit and predictive relevance.

Direct effects:
- Internal attribution → sustained trust (β = -0.409, p < 0.001); external attribution → sustained trust (β = 0.429, p < 0.001).
- Anthropomorphic characteristics → internal attribution (β = -0.158, p = 0.006); → external attribution (β = 0.336, p < 0.001).
- Empathic ability → internal attribution (β = 0.029, p = 0.650, ns); → external attribution (β = 0.107, p = 0.013).
- Interaction quality → internal attribution (β = 0.050, p = 0.405, ns); → external attribution (β = 0.349, p < 0.001).
- AI anxiety → sustained trust (β = -0.472, p < 0.001).

Mediation: Anthropomorphic characteristics → internal attribution → sustained trust (indirect effect = 0.064, 95% CI [0.019, 0.120], p = 0.013); anthropomorphic characteristics → external attribution → sustained trust (0.144, [0.087, 0.204], p < 0.001); empathic ability → external attribution → sustained trust (0.046, [0.010, 0.084], p = 0.013); interaction quality → external attribution → sustained trust (0.150, [0.099, 0.207], p < 0.001). The internal-attribution paths from empathic ability and interaction quality were not supported (ns).

Moderation: AI anxiety moderates the internal attribution → sustained trust link (interaction β = -0.205, p < 0.001), such that the negative effect of internal attribution on sustained trust is stronger at higher AI anxiety (+1 SD) and weaker at lower AI anxiety (-1 SD).

Hypotheses: Supported: H1a, H1b, H2a–H2d, H3b, H3d, H4b, H4d, H5a, H5b. Not supported: H3a, H3c, H4a, H4c.
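To make the reported moderation concrete: with standardized predictors and a product term, the simple slope of internal attribution at a given AI-anxiety level is the main-effect coefficient plus the interaction coefficient times the anxiety z-score. The sketch below uses the paper's reported βs; the simple-slopes formula is the standard convention for probing interactions, not a procedure the summary spells out.

```python
# Simple-slope probe of the reported moderation (coefficients from the paper's results).
b_internal = -0.409  # internal attribution -> sustained trust (main effect)
b_interact = -0.205  # internal attribution x AI anxiety (interaction)

def simple_slope(anxiety_z: float) -> float:
    # Effect of internal attribution on trust when AI anxiety sits z SDs from its mean.
    return b_internal + b_interact * anxiety_z

print(f"+1 SD anxiety: {simple_slope(+1.0):.3f}")  # -0.614: trust erodes faster
print(f"-1 SD anxiety: {simple_slope(-1.0):.3f}")  # -0.204: trust erodes more slowly
```

The tripling of the slope magnitude between low and high anxiety illustrates why the authors single out AI anxiety as an amplifier of the internal attribution effect rather than only a direct drag on trust.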
Discussion
Findings show that CASA-related social cues displayed by AI chatbots can help maintain consumer trust after service failures by shifting users' causal attributions. Internal attributions (blaming AI capability) erode sustained trust, whereas external attributions (environment/other agents) support it. Perceived anthropomorphic characteristics both reduce internal attributions and increase external attributions, thereby sustaining trust. Perceived empathy and prior interaction quality foster external attributions but do not significantly reduce internal attributions, suggesting that these richer social cues raise expectations without weakening users' inferences about the AI's underlying capability. The study contrasts with research suggesting anthropomorphism exacerbates negative evaluations under anger (e.g., Crolic et al., 2022), proposing boundary conditions such as user emotional state and failure severity. In less severe failures or non-anger contexts, anthropomorphic cues may humanize chatbots, prompting leniency and external attributions. AI anxiety emerges as a crucial psychological factor: anxious users not only trust less overall but also react more negatively when failures are internally attributed to AI capability. Practically, the results underscore the importance of designing social cues that promote benevolent attributions and of managing user expectations and anxiety to preserve trust after inevitable failures.
Conclusion
This study integrates CASA and attribution theory to explain how social interaction cues of AI chatbots influence sustained trust following service failures. It demonstrates that anthropomorphism, empathy, and interaction quality can sustain trust primarily by promoting external attributions of failure, while internal attributions diminish trust. AI anxiety directly lowers trust and strengthens the negative effect of internal attributions on trust. The work advances understanding of human–AI communication by highlighting social factors and individual differences and offers practical guidance: design anthropomorphic and empathic interfaces that encourage constructive attributions, and tailor communication to users with higher AI anxiety through transparency, reassurance, and literacy-building. Future research should examine other trust dimensions (initial, cognitive, behavioral), employ experimental and longitudinal designs to clarify causality and dynamics, broaden sampling beyond social media to enhance generalizability, and jointly model functional and social factors to capture their combined effects on consumer psychology and behavior.
Limitations
- Focus on sustained trust only; other dimensions (initial, cognitive, behavioral) were not examined.
- Cross-sectional survey limits causal inference; trust is dynamic.
- Potential unmeasured alternative explanations despite covariate controls.
- Social media-based recruitment skews toward younger participants, limiting generalizability to older or less active users.
- Failure severity and emotional states were not experimentally manipulated; boundary conditions remain to be tested.
- Future work should use experimental/longitudinal designs and integrate functional with social factors.