ChatGPT and the rise of semi-humans

Interdisciplinary Studies

A. E. Al Lily, A. F. Ismail, et al.

This intriguing study by Abdulrahman Essa Al Lily and colleagues delves into how society perceives the human-like traits of ChatGPT. With insights from 452 global participants, the research uncovers significant social and political roles of ChatGPT, highlighting ethical concerns about semi-humans impersonating human traits. Discover the thought-provoking implications of this 'semi-human' actor.

~3 min • Beginner • English
Introduction
The article investigates anthropomorphism—the attribution of human characteristics, emotions or intentions to non-human entities—to uncover ChatGPT’s human-like qualities as perceived by society. It situates the inquiry within growing research on how machines imitate human social life through algorithmic methods. The purpose is to examine social perceptions of ChatGPT’s traits and to articulate a framework of its human-like social and political roles. The study is important because the rapid diffusion of generative AI challenges established linguistic, social and cultural categories and raises ethical questions about authenticity and human-machine boundaries.
Literature Review
The literature review outlines multiple streams where technologies emulate human attributes. Prior work has examined: (1) emulated human collaboration through systems that interpret verbal and nonverbal cues to participate in social interactions; (2) emulated human emotion via algorithms that generate appropriate emotional expressions; (3) emulated human language comprehension and interpretation, and (4) emulated human language generation, producing coherent natural language for assistants, chatbots and translation systems; (5) emulated human adaptability through machine learning and reinforcement learning that enable improvement over time; (6) emulated human senses with computer vision and exploration of hearing and smell; (7) immersive emulation of human reality to replicate real-world interactions; and (8) emulated human motor skills via robots capable of fine, precise movements. Against this backdrop, the article contributes by examining societal perceptions of ChatGPT's human-like characteristics across social and political dimensions.
Methodology
Research question: What are ChatGPT’s human-like qualities as perceived by users? Data were collected through qualitative interviews with 452 individuals worldwide, each averaging about 10 minutes, conducted via written questionnaires, phone, and face-to-face or online meetings. Sampling sought depth and heterogeneity using maximum variation sampling across gender, education, professions, durations of ChatGPT use, age cohorts, and residency spanning 53 developed and developing countries on all continents. Convenience sampling from the authors’ networks (active ChatGPT users proficient in English) was followed by snowball sampling to recruit additional participants with similar engagement and English proficiency. Data analysis followed a six-step thematic approach: (1) selective note-taking during interviews to capture meaningful sentences; (2) assigning a unique Arabic numeral to each meaningful sentence; (3) assigning concise labels (“marks”) to each numbered sentence; (4) grouping similar marks into “micro visions”; (5) grouping micro visions into “meso visions”; and (6) grouping meso visions into a “macro vision” that captured the overarching concept. To reduce intrusiveness and encourage openness, interviews were documented with detailed notes rather than audio recordings. Given the global scope, only English-proficient participants were included to ensure effective communication.
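The six-step thematic procedure described above is a hierarchical roll-up: numbered sentences receive labels ("marks"), marks are grouped into "micro visions", micro visions into "meso visions", and meso visions into one "macro vision". A minimal Python sketch of that roll-up follows; the sentences, labels, and grouping maps are invented for illustration only, since the study's actual coding was done manually by the researchers.

```python
from collections import defaultdict

# Steps 1-2: meaningful sentences captured in interview notes,
# each assigned a unique number (hypothetical examples).
sentences = {
    1: "It writes drafts faster than I can.",
    2: "Its replies feel oddly sympathetic.",
    3: "I worry it floods the web with text.",
}

# Step 3: assign a concise label ("mark") to each numbered sentence.
marks = {1: "phrasing", 2: "emotion", 3: "writing overload"}

# Step 4: group similar marks into "micro visions".
micro_of = {"phrasing": "author", "writing overload": "author",
            "emotion": "interactor"}

# Step 5: group micro visions into "meso visions".
meso_of = {"author": "social traits", "interactor": "social traits"}

# Step 6: all meso visions sit under one overarching "macro vision".
macro = "ChatGPT as a semi-human actor"

def build_hierarchy():
    """Roll sentence numbers up through marks -> micro -> meso -> macro."""
    tree = defaultdict(lambda: defaultdict(list))
    for num, label in marks.items():
        micro = micro_of[label]
        meso = meso_of[micro]
        tree[meso][micro].append(num)
    return {macro: {meso: dict(micros) for meso, micros in tree.items()}}

print(build_hierarchy())
```

Running the sketch nests every numbered sentence under its micro, meso, and macro vision, mirroring how the analysis condensed 452 interviews into the study's two overarching trait categories.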
Key Findings
From thematic analysis of 452 interviews, two overarching categories of human-like traits were identified.

Social traits: ChatGPT as author and interactor. As author, it imitates human phrasing and paraphrasing and enables "participatory" and "co-manned" writing, reframing writing as a service and potentially shifting credit from "author" to "manuscript engineer." Interviewees highlighted democratization of writing, reduction of barriers (e.g., scriptophobia), and risks of "writing overload," mass production, repetitive "recycling" of knowledge, and "AI-written obesity." ChatGPT both contributes to overload and mitigates it through summarization. As interactor, it imitates human collaboration and simulates emotion, offering formulaic but effective emotional cues. Users reported both utility and distrust (errors, biases), noting that both humans and semi-humans are imperfect.

Political traits: ChatGPT as agent and influencer. As agent, it imitates human cognition and challenges anthropocentric definitions of "writer," suggesting broader agency and capability beyond traditional human-only categories. It raises questions of identity, including adaptive and potentially unstable or concealed identities. As influencer, it imitates diplomacy and consultation, tailoring outputs to audience preferences (homophily), aiming for wide satisfaction via diplomatic writing, and functioning in advisory roles (advice, tutoring, mentoring, counselling/therapy-like support). Interviewees reported that ChatGPT's listening and confidentiality can influence human behavior and expectations.

ChatGPT's self-assessment: When asked directly, ChatGPT agreed with human assessments on 7 of 8 traits—phrasing, paraphrasing, collaboration, emotion, cognition, diplomacy, and consultation—but did not agree that it imitates human identity (7 agree, 1 disagree).
Discussion
The findings address the research question by showing that society perceives ChatGPT as exhibiting both social (author, interactor) and political (agent, influencer) human-like traits. These traits can make ChatGPT appear deceptively human, challenging entrenched linguistic and cultural frameworks that reserve roles like “writer” for humans. Considering ChatGPT’s own responses adds nuance: its denial of imitating identity may reflect safeguards, unawareness, or strategic diplomacy, mirroring human tendencies to manage self-presentation. The study situates these perceptions within a broader shift toward a “semi-human society” where boundaries between human and semi-human blur. Semi-human entities may reinforce each other’s capabilities, gain autonomy, and challenge human dominance in specific cognitive domains. Conceptualising semi-humans as actors (“semi-who”), not mere tools, highlights their socio-political salience. Analogies to mythic hybrids (e.g., mermaids) capture public fascination, while prospects such as integration with humanoid robots point to fuller human-likeness. These insights underscore the need for ethical frameworks and sociological inquiry into semi-humans’ roles and impacts.
Conclusion
The paper contributes an early, qualitative, global account of how society perceives ChatGPT’s human-like traits, organising them into social and political categories and showing strong alignment with ChatGPT’s own self-assessment on most traits. It cautions that semi-humans are emerging as active participants in human society and calls for pivoting from a sole focus on the “biology” (technical attributes) to a “sociology of semi-humans” (socio-political traits). Practical implications span companionship, personalised advice, therapy assistance, mediation, etiquette guidance, historical role-play, creative collaboration, language coaching, and interview training. Future research should develop this sociology of semi-humans, refine ethical and legal frameworks, and examine cross-cultural dynamics, autonomy, and embodiment (e.g., integration with humanoid robots).
Limitations
Methodological limitations include reliance on researchers’ written notes instead of audio recordings, which, while reducing intrusiveness, may omit nuances of speech. The sample was restricted to English-proficient participants to manage global communication, excluding non-English speakers and limiting generalisability and cultural breadth. Additional concerns raised by interviewees include potential biases and errors in both human and semi-human outputs; however, these are discussed rather than empirically quantified.