
Psychology

Robots are both anthropomorphized and dehumanized when harmed intentionally

M. S. Wieringa, B. C. N. Müller, et al.

This research examines the 'harm-made mind' phenomenon: witnessing intentional harm to a robot increases its perceived ability to feel pain. Conducted by Marieke S. Wieringa, Barbara C. N. Müller, Gijsbert Bijlstra, and Tibor Bosse, the study uncovers a complex interplay between anthropomorphism and dehumanization, and shows that people who harm robots are themselves perceived as less prosocial.

Introduction
The increasing integration of robots into society necessitates understanding how humans perceive and respond to their treatment. This study focuses on the "harm-made mind" phenomenon, a cognitive bias whereby witnessing intentional harm towards an agent, even one not typically considered to possess a mind, leads to increased attribution of mental states. Public reactions to the destruction of hitchBOT, a hitchhiking social-experiment robot, offer a vivid illustration of this phenomenon. The study's central questions are whether the harm-made mind effect is amplified by a robot's ability to detect and simulate emotions, and whether harming a robot influences the perception of the harmer's prosociality. Understanding this is crucial for designing ethical robots and informing legislation regarding their use. The study builds upon the theory of dyadic morality, which proposes that moral judgments rest on a dyad: a moral agent and a moral patient. Intentional harm creates a moral dyad, automatically associating the victim with the capacity for pain, even when that capacity would not usually be attributed to the victim. The authors hypothesize that robots capable of detecting and simulating emotions will elicit a stronger harm-made mind effect, and that harming a robot will lower the perceived prosociality of the perpetrator.
Literature Review
Existing research demonstrates that intentional harm towards robots, patients in a persistent vegetative state, and even deceased individuals leads to increased mind perception in the victims. However, effect sizes varied across studies, potentially due to differences in stimuli. A key variable may be whether the robot displays emotional capabilities. Previous research suggests that attributing affective states to robots increases their perceived moral status and decreases the likelihood of their being sacrificed in moral dilemmas. This raises concerns about the potential for psychological manipulation by emotion-simulating robots. The theory of dyadic morality provides a framework for understanding the harm-made mind effect. It posits that moral acts involve a dyad of a moral agent (perceived as having agency) and a moral patient (perceived as having experience, especially the capacity for pain). When witnessing intentional harm, observers automatically complete the dyad by attributing the capacity for pain to the victim, even when the victim would not ordinarily be credited with it. Studies using robotic and human-like avatars have replicated these findings, although with smaller effect sizes than the initial studies.
Methodology
The researchers conducted two experiments, both close replications of a prior study on the harm-made mind effect, extended to examine the role of emotion-simulation ability in robots. Both experiments used a 2 (Harm: yes/no) x 2 (Emotions: yes/no) between-subjects design. Participants were recruited via the Prolific online platform (Experiment 1: 429 participants after exclusions; Experiment 2: 677 after exclusions). Participants read vignettes describing a robot named George, varying in whether George could detect and simulate emotions and whether he was harmed (or cared for) by his caretaker/creator, Dr. Richardson. Dependent variables were the robot's perceived capacity for pain, general mind perception (agency and experience dimensions), and Dr. Richardson's perceived prosociality. Mind perception was measured with a modified version of a scale assessing agency and experience capacities, with principal component analysis used to construct a subscale for each dimension. Perceived prosociality was measured with an established prosociality scale. A manipulation check assessed the perceived moral wrongness of Dr. Richardson's actions. Statistical analyses included ANOVAs and mediation analyses to test the hypotheses.
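To make the 2 x 2 design and its interaction test concrete, the sketch below fits a two-way ANOVA on simulated data using Python's statsmodels. It is a minimal illustration, not the authors' analysis script: the column names (harm, emotions, pain_capacity) and effect sizes are assumptions, and the data are generated to mimic the reported pattern of a harm main effect with no Harm x Emotions interaction.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 429  # Experiment 1 sample size after exclusions

# Fully crossed 2 (Harm) x 2 (Emotions) between-subjects factors
df = pd.DataFrame({
    "harm": rng.integers(0, 2, n),      # 0 = cared for, 1 = harmed
    "emotions": rng.integers(0, 2, n),  # 0 = no emotion simulation, 1 = yes
})
# Simulated outcome: a main effect of harm on perceived pain capacity,
# with no Harm x Emotions interaction (the pattern the study reports)
df["pain_capacity"] = 3.0 + 1.2 * df["harm"] + rng.normal(0, 1, n)

# Two-way ANOVA including the interaction term
model = smf.ols("pain_capacity ~ C(harm) * C(emotions)", data=df).fit()
print(anova_lm(model, typ=2))

Running this prints an ANOVA table in which only the harm factor reaches significance, mirroring the hypothesis tests described above.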
Key Findings
Both experiments replicated the harm-made mind effect: intentional harm to a robot increased its perceived capacity for pain. However, there was no significant interaction between harm and the robot's ability to simulate emotions, suggesting that the harm-made mind effect is robust and does not depend on the robot's emotional capabilities. Interestingly, harm had conflicting direct and indirect effects on overall mind perception: a positive indirect effect (mediated by perceived pain capacity) and a negative direct effect. This indicates both anthropomorphization (attributing human-like traits because of the harm) and dehumanization (justifying harm by denying mental capacity). Further analysis revealed that the negative direct effect mainly affected the "experience" dimension of mind perception. Regarding perceived prosociality, harming the robot consistently lowered the perceived prosociality of the moral agent, with no consistent interaction with emotion-simulation ability. In short: harm increased perceived pain capacity (H1 supported); emotion simulation did not modulate the harm-made mind effect (H2 not supported); perceived pain capacity mediated the effect of harm on mind perception (H3 supported); harming a robot decreased the agent's perceived prosociality (H4 supported); and no significant interaction between emotion simulation and harm on prosociality was found (H5 not supported).
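The pattern of opposing direct and indirect effects can be illustrated with a simple mediation simulation. The sketch below, again a hypothetical illustration in Python with statsmodels rather than the authors' code, generates data in which harm raises perceived pain capacity (positive indirect path via the mediator) while also lowering mind perception directly, then estimates both paths; variable names are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(1)
n = 677  # Experiment 2 sample size after exclusions

harm = rng.integers(0, 2, n)
# Harm raises perceived pain capacity (the a-path)...
pain = 3.0 + 1.0 * harm + rng.normal(0, 1, n)
# ...which raises mind perception (b-path), while harm also lowers
# mind perception directly (an opposing c'-path)
mind = 4.0 + 0.8 * pain - 0.9 * harm + rng.normal(0, 1, n)

df = pd.DataFrame({"harm": harm, "pain": pain, "mind": mind})
outcome_model = smf.ols("mind ~ harm + pain", data=df)
mediator_model = smf.ols("pain ~ harm", data=df)
med = Mediation(outcome_model, mediator_model,
                exposure="harm", mediator="pain").fit(n_rep=500)
# ACME (indirect effect) comes out positive, ADE (direct effect) negative
print(med.summary())

A positive average causal mediation effect (ACME) alongside a negative average direct effect (ADE) is exactly the anthropomorphization-plus-dehumanization signature described above.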
Discussion
The consistent replication of the harm-made mind effect supports the theory of dyadic morality. That the effect was not significantly affected by the robot's ability to simulate emotions suggests that the basic mechanism of dyadic completion is quite robust. The conflicting direct and indirect effects of harm on mind perception highlight a potential tension between automatic empathetic responses and post-hoc rationalizations used to justify harmful actions. The negative direct effect on mind perception may reflect dehumanization, a process of justifying harm to an agent by denying its capacity for pain. The smaller effect sizes compared with earlier research may reflect the more ambiguous nature of modern robots, whose physical forms are less clearly human-like. The consistent finding that harming a robot lowers the harmer's perceived prosociality suggests that how people treat robots may shape moral judgments of those people more generally. Further investigation is needed to explore individual differences and the role of implicit versus explicit processes in shaping these perceptions.
Conclusion
This research demonstrates the robustness of the harm-made mind effect in robots, irrespective of their ability to simulate emotions. The finding that harm elicits both anthropomorphism and dehumanization highlights the complexity of human-robot interaction. Future studies should explore the influence of reflective processes, feelings of uncanniness, and different experimental manipulations on the generalizability and strength of the results. The consistent decrease in perceived prosociality when a robot is harmed carries important implications for the ethics of human-robot interaction.
Limitations
The use of textual vignettes, while allowing for a close replication of previous research, may not fully capture the complexity of real-world human-robot interaction. More naturalistic stimuli (e.g., videos of actual robot interactions) might yield different results, particularly regarding the interaction between harm and emotion simulation. The cross-sectional nature of the study and its focus on group-level analysis limit insights into individual differences and person-level dynamics. The lack of a neutral control condition (no action taken towards the robot) complicates interpretation of the no-harm condition's results. Finally, the sample may not be representative of the broader population.