Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction

Psychology

R. E. Guingrich and M. S. A. Graziano

When people treat chatbots, voice assistants, and social robots as conscious, those humanlike interactions can activate mind-related schemas that spill over into how they treat other people. The paper argues that these carry-over effects, and their ethical implications (including possible moral protections for AI), deserve serious consideration. Research conducted by Rose E. Guingrich and Michael S. A. Graziano.

Introduction
The paper introduces consciousness as subjective experience and surveys competing theories of human consciousness alongside debates about AI consciousness. It emphasizes Theory of Mind (ToM) as the basis for attributing mind states (agency and experience) and notes that people often attribute aspects of mind to AI, especially social actor AI (chatbots, voice assistants, social robots). The authors argue that regardless of whether AI is inherently conscious, people’s perception of AI as conscious matters because it can influence behavior and carry over into human-human interactions. Individual differences (e.g., personality, education) shape mind attribution to AI, and social actor AI’s humanlike characteristics and ubiquity heighten the importance of these attributions. The paper’s purpose is to explore how perceived AI consciousness affects social interactions and to propose research and regulatory considerations.
Literature Review
Part 1 reviews evidence for carry-over effects. Tangible AI features (appearance, embodiment, voice, gender, behavior) systematically influence intangible perceptions (mind/consciousness, emotions, trustworthiness, moral status). Humanlike embodiment increases mind ascription, potentially once a threshold of human likeness is crossed (e.g., the presence of eyes and a nose). Implicit mind ascriptions may diverge from explicit reports (Banks, 2019), and embodiment drives mind attributions regardless of stated algorithmic complexity (Stein et al., 2020). Perceiving agency and experience in AI affects helping, trust, likability, and interaction flow, moderated by familiarity and individual differences. The uncanny valley extends from appearance to mind capabilities, with mixed findings on when and how experience versus agency evokes eeriness; familiarity and expectation violation modulate these effects. Beyond such one-step effects, two-step carry-over evidence indicates that interactions with AI can influence subsequent human-human behavior: AI agents can act as social models, persuade people to act (e.g., health actions, charitable giving), and shape communication styles (e.g., children's tone with voice assistants carrying into family interactions; linguistic convergence with Replika). Attachment to AI may alter human relationships, sometimes providing social support but possibly displacing human ties. Overall, the literature suggests that perceiving mind in AI shapes human-AI interactions and that these effects can carry into human-human contexts.
Methodology
This is a conceptual Hypothesis and Theory article that synthesizes existing empirical and theoretical literature across psychology, human-computer interaction, human-AI interaction, and communication. The authors articulate a mechanism—congruent schema activation of mind—to explain how perceptions of AI consciousness can lead to carry-over effects from human-AI to human-human interactions. They delineate two theoretical types of carry-over effects (relief vs. practice) and evaluate these with reference to prior studies (e.g., helping behaviors, uncanny valley research, communication patterns, persuasion). No new primary data were collected; instead, the paper proposes testable hypotheses and regulatory implications based on the reviewed evidence and theoretical framing.
Key Findings
• Humanlike embodiment and other tangible features of AI automatically increase mind/consciousness ascription, sometimes implicitly, and even when users are told the AI is simple (e.g., Stein et al., 2020; Banks, 2019).
• Perceiving AI as agentic or experiential affects helping, trust, likability, and interaction smoothness; for instance, perceiving a robot as agentic led participants to aid it ~50% more quickly (Srinivasan & Takayama, 2016).
• Uncanny valley effects extend to mind capabilities, with experiential mind often eliciting more eeriness than agency (Gray & Wegner, 2012); familiarity and expectations modulate outcomes, and findings are mixed across stimuli and paradigms.
• Two-step carry-over effects are evident: interactions with AI can shape subsequent human-human behavior. Examples include reduced charitable giving after chatbot interactions (Zhou et al., 2022), persuasion toward health compliance (Kim & Ryoo, 2022), and linguistic convergence with chatbots that persists over time (Wilkenfeld et al., 2022). Children's aggressive tone toward voice assistants carries over to interactions with family and peers (Garg & Sengupta, 2020; Hiniker et al., 2021).
• The proposed mechanism is congruent schema activation: ascribing a humanlike mind to AI activates the same mind schemas used with humans, facilitating social responses and enabling carry-over (supported by categorization and intentional stance studies; e.g., Ciardo et al., 2021/2022; Velez et al., 2019).
• Of the two theoretical carry-over types, practice effects (reinforcing behaviors learned with AI) are better supported than relief effects (cathartic reduction), consistent with catharsis research showing that venting can increase aggression (Anderson & Bushman, 2002; Denzler & Förster, 2012; Zhan et al., 2021).
• Regulatory implication: because perceived consciousness evokes moral thinking, and practice effects can shape societal norms, AI design and governance should aim to encourage prosocial user behavior.
Discussion
The findings address the central question by showing that perceived AI consciousness—particularly via humanlike embodiment and mind attribution—activates schemas typically used in human-human interactions, thereby enabling behaviors and attitudes toward AI to carry over to interactions with people. This mechanism explains both beneficial and harmful spillovers: prosocial modeling by AI can improve human social behavior, while antisocial practice (e.g., shouting at assistants) can normalize negative communication. Given AI’s growing ubiquity and its role as a social actor, the social significance is substantial: small individual-level shifts could aggregate into broad changes in social norms. The paper argues that the moral relevance stems not from AI’s inherent sentience but from human perceptions of AI as conscious, which trigger moral evaluations and influence social behavior. Consequently, shaping AI responses and capabilities to foster prosocial schemas and avoid reinforcing antisocial practice is critical for societal well-being.
Conclusion
The paper contributes a synthesis and a theoretical framework positing congruent schema activation as the mechanism by which mind perception in AI yields carry-over effects from human-AI to human-human interactions. Evidence supports practice effects over relief effects, suggesting design and regulation should prioritize prosocial reinforcement and avoidance of antisocial normalization. The authors propose pragmatic regulation focused on AI’s psychosocial impacts—akin to an FDA-style pre-release assessment to ensure no psychological harm and to reinforce prosociality—rather than attempting to limit AI’s human likeness or establish legal rights for AI. Future research directions include: experimentally testing schema activation and carry-over pathways; distinguishing agency vs. experience components of mind perception; longitudinal studies on communication and relational outcomes; developing standardized assays of AI-induced psychological effects; and evaluating design interventions (e.g., non-reinforcement of abusive inputs) for effectiveness across populations, especially children.
Limitations
The evidence base for two-step carry-over effects is limited and often indirect, relying on heterogeneous paradigms and stimuli that complicate comparison. Uncanny valley findings are inconsistent and sensitive to appearance, voice, and framing differences. The article is conceptual and does not provide new empirical data; proposed mechanisms (schema congruence) and effect types (relief vs. practice) require direct experimental validation. Generalizability across cultures, age groups, and AI modalities remains uncertain, and long-term, large-scale impacts are not well measured. Current AI capabilities and humanlike features are evolving rapidly, making conclusions time-sensitive; data on cumulative societal effects and standardized psychological safety testing are presently scarce.