Introduction
AI chatbots have become increasingly prevalent in customer service, driven by advances in AI technology and the demand for cost-effective, efficient solutions, a shift accelerated by the COVID-19 pandemic; this makes understanding consumer trust crucial. While AI chatbots offer 24/7 availability and faster response times, their inherent limitations in natural language processing often lead to service failures. These failures can damage brand image and erode user trust, a critical factor in technology acceptance and consumer behavior. This study addresses the underexplored question of how trust in AI chatbots is maintained after service failures, focusing on the social and emotional dimensions neglected by previous research, which has concentrated primarily on response speed and accuracy. It also tackles the limited research on using anthropomorphism and social presence to mitigate the negative impact of failures, and addresses the shortcomings of traditional Technology Acceptance Models (TAM) in capturing social intelligence in AI interactions. The research therefore systematically examines the mechanisms for maintaining consumer trust in AI chatbots following service failures, bridging a significant gap in the existing literature and offering insights for both theoretical understanding and practical development.
Literature Review
The literature review examines existing research on AI chatbots and service failures, consumer attribution of failures, and the Computers as Social Actors (CASA) theory. Existing research highlights that customer reactions to failures are shaped by expectations, with anthropomorphic features often raising expectations and leading to negative outcomes when those expectations go unmet. Attribution theory is introduced, differentiating between internal attributions (blaming the AI's own capabilities) and external attributions (blaming circumstances outside the AI). CASA theory provides a framework for understanding how humans interact with computers as if they were social actors, highlighting the influence of perceived anthropomorphic characteristics, perceived empathic abilities, and perceived interaction quality on user perceptions and responses. The review thus sets the stage for hypothesizing that CASA factors shape users' attribution styles in service failure contexts and, in turn, sustained trust. Notably, while earlier research found a tendency to attribute failures to machines, the increasing human-likeness of modern AI may alter this dynamic.
Methodology
This study employs a cross-sectional survey to investigate how CASA factors shape the attribution of service failures and sustained trust in AI chatbots. Participants, recruited through social media platforms (Facebook and TikTok), were screened to ensure they had experienced a service failure with an AI chatbot, yielding 462 valid responses from an initial 600. The questionnaire measured: (1) CASA factors (perceived anthropomorphic characteristics, perceived empathic abilities, and perceived interaction quality); (2) AI anxiety; (3) internal and external attribution styles for chatbot failures; and (4) sustained trust in AI chatbots after failures. All variables were measured on 7-point Likert scales adapted from previously validated instruments for AI chatbot service failure scenarios. Partial Least Squares Path Modeling (PLS-PM) in SmartPLS was used for data analysis, chosen for its suitability for complex models with multiple latent and manifest variables, its robustness to non-normally distributed data, and its applicability to smaller samples. Mediation analyses used bootstrapping (5,000 resamples), and moderation analyses tested the hypothesized interaction effects. Covariates (gender, age, education, internet usage) were included to control for confounding. The reliability and validity of the measurement scales were assessed via factor loadings, Cronbach's alpha, composite reliability, average variance extracted (AVE), and variance inflation factors (VIF), and Harman's single-factor test was used to assess common method bias.
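The paper's analysis was run in SmartPLS, so no code accompanies it; purely as an illustration, the sketch below shows the general logic of a percentile-bootstrap mediation test with 5,000 resamples, using plain OLS in Python. The column names (`anthropomorphism`, `external_attribution`, `sustained_trust`) and the file name are hypothetical stand-ins for the study's constructs, and the simple two-regression setup omits the latent-variable machinery of PLS-PM.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm


def bootstrap_indirect_effect(df, x, mediator, y, n_boot=5000, seed=42):
    """Percentile-bootstrap test of the indirect effect x -> mediator -> y.

    Each resample fits two OLS regressions:
      a-path: mediator ~ x
      b-path: y ~ mediator + x
    and records the product a * b (the indirect effect).
    """
    rng = np.random.default_rng(seed)
    n = len(df)
    indirect = np.empty(n_boot)
    for i in range(n_boot):
        # Resample respondents with replacement.
        sample = df.iloc[rng.integers(0, n, size=n)]
        a = sm.OLS(sample[mediator], sm.add_constant(sample[x])).fit().params[x]
        b = sm.OLS(sample[y], sm.add_constant(sample[[mediator, x]])).fit().params[mediator]
        indirect[i] = a * b
    lo, hi = np.percentile(indirect, [2.5, 97.5])
    return indirect.mean(), (lo, hi)


# Hypothetical usage: constructs scored as scale means from the survey items.
# df = pd.read_csv("survey_responses.csv")
# est, (lo, hi) = bootstrap_indirect_effect(
#     df, x="anthropomorphism", mediator="external_attribution", y="sustained_trust")
# print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The mediation hypothesis is supported when the 95% percentile interval of the indirect effect excludes zero, which is the usual decision rule for bootstrapping-based mediation tests; in the study itself the indirect effects were estimated within the full PLS path model rather than with OLS.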
Key Findings
The study's key findings support several hypotheses. First, internal attribution of AI chatbot failures negatively predicts sustained trust (β = -0.409, p < 0.001), while external attribution positively predicts it (β = 0.429, p < 0.001). Second, perceived anthropomorphic characteristics are negatively associated with internal attributions (β = -0.158, p = 0.006) and positively associated with external attributions (β = 0.336, p < 0.001), with attribution mediating their effect on sustained trust. Third, perceived empathic abilities and perceived interaction quality are positively associated with external attributions (β = 0.107, p = 0.013 and β = 0.349, p < 0.001, respectively), again with attribution mediating the effect on sustained trust, but their associations with internal attributions are not significant. Fourth, AI anxiety negatively predicts sustained trust (β = -0.472, p < 0.001) and significantly moderates the effect of internal attributions on sustained trust (β = -0.208, p < 0.001): the negative impact of internal attributions on trust is stronger for individuals with higher AI anxiety.
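To make the moderation result concrete, the structural model can be read as a standardized regression of sustained trust (Trust) on internal attribution (IA), AI anxiety (ANX), and their product; this is a simplification of the PLS model, with the symbols IA and ANX introduced here only for illustration. Plugging in the reported coefficients gives the simple slopes of internal attribution at high and low anxiety:

```latex
\text{Trust} = \beta_0 + \beta_1\,\mathrm{IA} + \beta_2\,\mathrm{ANX}
             + \beta_3\,(\mathrm{IA} \times \mathrm{ANX}) + \varepsilon,
\qquad
\frac{\partial\,\text{Trust}}{\partial\,\mathrm{IA}} = \beta_1 + \beta_3\,\mathrm{ANX}.

% With the reported beta_1 = -0.409 and beta_3 = -0.208 (standardized variables):
%   ANX = +1 SD:  -0.409 + (-0.208)(+1) = -0.617
%   ANX = -1 SD:  -0.409 + (-0.208)(-1) = -0.201
```

Under this simplified reading, internal attributions erode trust roughly three times as strongly for respondents one standard deviation above the mean in AI anxiety as for those one standard deviation below it.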
Discussion
The findings demonstrate the importance of CASA factors in maintaining consumer trust in AI chatbots after service failures, primarily through their influence on attribution styles. Anthropomorphic characteristics in particular appear effective at reducing the tendency to blame the AI's internal capabilities for failures, promoting external attributions and thereby sustaining trust. The mediating role of attribution highlights the cognitive processes underlying trust formation and maintenance in human-AI interaction. The contrast between the effects of anthropomorphism observed here and those reported in other studies highlights the nuances of this relationship: the degree of anthropomorphism, the severity of the failure, and the customer's emotional state all appear to play significant roles. The moderating role of AI anxiety underscores the importance of accounting for individual psychological traits when designing and deploying AI customer service systems. The results expand our understanding of human-AI interaction and contribute to the development of more user-friendly and trustworthy AI systems.
Conclusion
This study makes valuable theoretical and practical contributions to the field of human-AI interaction. The integration of CASA and attribution theories offers a comprehensive framework for understanding sustained trust in AI chatbots after service failures. The findings support designing AI chatbots with greater anthropomorphism, empathy, and interaction quality, while accounting for the moderating role of AI anxiety. Future research could explore other dimensions of trust, use longitudinal designs to better establish causality, investigate alternative explanatory factors, and broaden the sample to enhance generalizability. Integrating functional and social factors remains essential for a comprehensive understanding of consumer psychology and behavior in the context of AI customer service.
Limitations
The study's limitations include its cross-sectional nature, which limits the ability to definitively establish causality. The reliance on self-reported data from a sample recruited predominantly through social media might introduce biases, potentially affecting the generalizability of findings. The study also primarily focused on sustained trust, overlooking other dimensions of trust. Finally, while covariates were included, other unmeasured factors may influence the relationships explored. Future research using experimental designs and longitudinal studies could address these limitations.