
Psychology

Is it possible for people to develop a sense of empathy toward humanoid robots and establish meaningful relationships with them?

E. Morgante, C. Susinna, et al.

This systematic review finds that humans can perceive robots as having emotional states—especially when robots identify and respond to human feelings and exhibit anthropomorphic traits, which tends to increase empathy. The research was conducted by Elena Morgante, Carla Susinna, Laura Culicetto, Angelo Quartarone, and Viviana Lo Buono.

Introduction
The review examines whether humans can develop empathy toward humanoid and social robots and form meaningful relationships with them. Empathy is framed as a multidimensional construct with affective (sharing emotions) and cognitive (understanding perspectives) components, supported by neural systems including the anterior insula and anterior cingulate cortex. As robots acquire social capabilities, people attribute intentions and emotional meaning to them; mirror neuron research shows overlapping brain responses to human and robotic actions, suggesting that anthropomorphic robots can elicit empathic reactions. The review considers two perspectives: how humans empathize with human-like agents, and how robots designed with empathic behaviors affect humans. The research question focuses on the conditions and design features that induce or express empathy within human-robot interaction (HRI) and the extent to which such empathy can support meaningful human-robot relationships.
Literature Review
Prior studies indicate that anthropomorphism (human-like facial expressions, gestures, voice modulation) increases the perceived relatability and emotional expressivity of robots, fostering empathic responses. Neuroimaging shows the mirror neuron system responds to robotic actions similarly to human actions, supporting simulation mechanisms of empathy. Work on designing empathic social robots has explored emotional mirroring, facial expression reproduction, and narrative strategies to elicit compassion. The Uncanny Valley effect may reduce empathy and increase discomfort when robots are highly human-like, potentially generating unrealistic expectations of robot cognition. People frequently attribute mental states to robots based on appearance, behavior, and social interactivity, even though they may explicitly deny that robots possess minds; mental state attribution is closely related to anthropomorphism. Frameworks like SISI distinguish simulated forms of social processes (approximation, mimicry, representation) from full sociality, highlighting limits in replicating the subtleties of human interactions. Even non-humanoid robots can evoke empathy when they provide help or comfort. Emerging work in healthcare and education underscores the potential benefits of empathic robots but raises ethical concerns around emotional manipulation and the need for careful design.
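To make the emotional-mirroring design strategy mentioned above concrete, here is a minimal rule-based sketch in Python. It is purely illustrative: the emotion labels, the MIRROR_RULES mapping, and the stubbed detector are hypothetical and are not drawn from any of the reviewed studies, which typically rely on more sophisticated affect-recognition models.

```python
# Illustrative sketch of rule-based "emotional mirroring": detect the user's
# emotion, then select a matching robot expression and verbal response.
# Labels, rules, and the detector stub are hypothetical examples.
from typing import Callable

MIRROR_RULES = {
    "sad":     ("concerned_face", "That sounds really hard. I'm here with you."),
    "happy":   ("smiling_face",   "I'm glad things are going well!"),
    "angry":   ("calm_face",      "I can see this is frustrating for you."),
    "neutral": ("neutral_face",   "How are you feeling right now?"),
}

def mirror_emotion(detect_emotion: Callable[[], str]) -> dict:
    """One interaction step: detect the user's emotion, then mirror it."""
    emotion = detect_emotion()  # e.g. output of a facial-expression classifier
    expression, utterance = MIRROR_RULES.get(emotion, MIRROR_RULES["neutral"])
    return {"set_expression": expression, "say": utterance}

# Example with a stubbed detector standing in for a real affect-recognition model.
print(mirror_emotion(lambda: "sad"))
```

In practice, the reviewed systems combine such response selection with richer multimodal cues (gestures, gaze, prosody) rather than a fixed lookup table.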
Methodology
Design: Systematic review conducted according to PRISMA guidelines.
Databases and search: PubMed, Web of Science, and Embase (with Scopus noted generally), considering articles published between 2004 and 2023; only English-language texts. Search terms combined empathy, humans, and robot/robotics variants, with filters.
Screening: Titles, abstracts, and full texts were reviewed independently by two investigators (EM and CS); disagreements were resolved by a third researcher (VLB).
Inclusion criteria: studies with healthy adult populations and a psychometric assessment of empathy. Exclusion criteria: studies involving children; case reports and reviews.
Data extraction: Performed in Microsoft Excel (Version 2021), covering study identifiers, aims, design, duration, recruitment, criteria, consent, conflicts/funding, intervention/control type, participant numbers and characteristics, outcomes, assessment time points, results, and conclusions.
Inter-rater reliability: Agreement between reviewers was assessed with the kappa statistic, using >0.61 as the threshold for substantial agreement; concordance was excellent.
PRISMA flow: Records identified = 484 (PubMed 89; Web of Science 335; Embase 60). Removed before screening: 89 duplicates and 12 reviews. Records screened = 383; 265 excluded by title and 77 by abstract. Full-text reports assessed for eligibility = 41; of these, 17 were excluded for inadequate study design, 5 for involving children, 5 for involving patient populations, and 1 for not focusing on psychological aspects. Included studies = 11.
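To make the inter-rater reliability step concrete, the short Python sketch below computes Cohen's kappa for two reviewers' include/exclude screening decisions. The decision lists are hypothetical placeholders, not the review's actual screening data.

```python
# Cohen's kappa for two raters over the same items (toy screening example).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for the two reviewers (EM and CS).
em = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
cs = ["include", "exclude", "include", "include", "exclude", "exclude"]

print(f"kappa = {cohens_kappa(em, cs):.2f}")  # 0.67 in this toy example
```

In this toy example the value exceeds 0.61, the threshold the review treats as substantial agreement.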
Key Findings
From 484 records, 11 studies met inclusion criteria. Overall, robots that accurately recognize and respond to human emotional states evoke greater empathy; anthropomorphic traits and expressive features further increase empathic engagement. Empathy in HRI emerges via two orientations: (1) expression of empathy, in which humans perceive the robot as empathic; and (2) induction of empathy, in which the robot's prior emotional expression elicits empathy in humans. Notable study-level findings:
• Birmingham et al. (2022; N=111): Affective empathic statements from a robot were rated as more empathic than cognitive ones (RoPE scale).
• Mollahosseini et al. (2018; N=16): Integrating automated facial expression recognition increased perceived robot empathy and likability (Ryan Companionbot).
• Leite et al. (2013; N=40): An empathic companion robot (iCat) improved perceptions of companionship, alliance, and self-validation during a chess game.
• Tsumura and Yamada (2022; N=578): Greater task difficulty increased human affective empathy toward an agent, independent of task content (modified IRI).
• Konijn and Hoorn (2020; N=265): Detailed facial articulacy influenced responsiveness; humans showed less empathic responsiveness to robots than to humans (Robot Alice, Nao/Zora).
• García-Corretjer et al. (2023; N=18): Active collaboration with Robobo fostered trust under uncertainty and teamwork attitudes (TEQ).
• Erel et al. (2022; N=64): Non-humanoid robotic gestures enhanced emotional support in human-human interaction.
• Spaccatini et al. (2023; N=269): Anthropomorphization (appearance/behavior) increased attribution of experience/agency and influenced empathy toward distressed individuals; more anthropomorphized robots produced higher empathy.
• Moon et al. (2021; N=48): Non-verbal cues conveying appropriate negative emotion had decisive effects on perceived emotion, empathy, and behavior inducement (Hubot).
• Frederiksen et al. (2022; N=220): A sad affective narrative increased empathy and willingness to help the robot (Kuri).
• Corretjer et al. (2020; N=10): Collaborative maze-solving scenarios with Robobo supported the development of empathy through mutual understanding, listening, and joint performance, even without anthropomorphic features.
Cross-cutting insights: Non-verbal cues, expressive facial articulacy, affective narratives, and task structure (difficulty, collaboration) modulate empathy. Anthropomorphism generally boosts empathy but may encounter Uncanny Valley constraints at high human-likeness.
Discussion
Findings address the central question by showing humans can experience empathy toward robots, particularly when robots convey or infer emotional states and display anthropomorphic and expressive features. Empathy in HRI is bidirectional: humans empathize with robots, and robots can be designed to simulate empathic responses, improving trust, engagement, and perceived sociality. However, robots lack genuine emotional experiences and intentionality; their empathic behaviors are algorithmic simulations rather than innate capacities. Anthropomorphism supports mental state attribution and social bonding but may also trigger the Uncanny Valley, reducing empathy and increasing discomfort when human-likeness is too high or multimodal cues create unrealistic expectations. Mental state attribution depends on robot appearance and behavior, aiding prediction and control but revealing a tension between implicit mind inference and explicit denial of robot minds. Practical implications span healthcare, education, and social care, where empathic robots may facilitate companionship, therapeutic support, and care for vulnerable populations. Ethical considerations include potential emotional manipulation and the need for designs that promote positive, non-exploitative interactions.
Conclusion
Humans can develop empathic responses toward robots, especially when robots detect and appropriately respond to human emotions and exhibit anthropomorphic traits, expressive behavior, and relevant non-verbal cues or narratives. Empathy supports more engaging and trusting HRI, yet it currently remains simulated on the robot side, since robots lack authentic emotional understanding and genuine subjectivity. The review highlights variability in definitions and measures of empathy in HRI and underscores the need for standardized assessment protocols and robust methodologies. Future research should refine theoretical definitions, develop validated instruments to measure perceived and induced empathy, investigate boundary conditions (including the Uncanny Valley), and explore ethical frameworks for empathic robot design. Advances in affect recognition, multimodal communication, and context-aware interaction models will help create emotionally intelligent robots that support meaningful, beneficial human-robot relationships.
Limitations
Empathy is not directly observable and lacks a clear, consensus definition in HRI, complicating measurement and interpretation. The overall evidence quality was low, with substantial heterogeneity in definitions, tools, and outcome measures across studies, precluding meta-analysis. Assessments often relied on perceived empathy reported by human participants, introducing subjectivity and ongoing debate about how empathy should be measured. Protocols and instruments are not standardized, and varying designs and contexts limit generalizability. Additionally, anthropomorphism may introduce biases (e.g., Uncanny Valley effects, unrealistic expectations), and ethical concerns arise regarding emotional manipulation.