Signs of consciousness in AI: Can GPT-3 tell how smart it really is?

Computer Science


L. Bojić, I. Stojković, et al.

Could GPT-3 be showing the first signs of subjectivity? This study administered objective and self-assessment tests of cognitive and emotional intelligence to GPT-3. The model surpassed average humans on tasks drawing on acquired knowledge, while its logical reasoning and emotional intelligence matched those of an average human, and its self-evaluations did not always align with its actual performance, hinting at emerging AI traits. This research was conducted by Ljubiša Bojić, Irena Stojković, and Zorana Jolić Marjanović.

~3 min • Beginner • English
Abstract
The emergence of artificial intelligence (AI) is transforming how humans live and interact, raising both excitement and concerns, particularly about the potential for AI consciousness. For example, Google engineer Blake Lemoine suggested that the AI chatbot LaMDA might have become sentient. At the time of this study, GPT-3 was one of the most powerful publicly available language models, capable of simulating human reasoning to a certain extent. The notion that GPT-3 might have some degree of consciousness could be linked to its ability to produce human-like responses, hinting at a basic level of understanding. To explore this further, we administered both objective and self-assessment tests of cognitive intelligence (CI) and emotional intelligence (EI) to GPT-3. Results showed that GPT-3 outperformed average humans on CI tests requiring the use and demonstration of acquired knowledge. However, its logical reasoning and EI capacities matched those of an average human. GPT-3's self-assessments of CI and EI did not always align with its objective performance, with discrepancies comparable to those observed in particular human subsamples (e.g., high performers, males). We further discuss whether these results signal emerging subjectivity and self-awareness in AI. Future research should examine various language models to identify emergent properties of AI. The goal is not to discover machine consciousness itself, but to identify signs of its development occurring independently of training and fine-tuning processes. If AI is to be further developed and widely deployed in human interactions, creating empathic AI that mimics human behavior is essential. The rapid advancement toward superintelligence requires continuous monitoring of AI's human-like capabilities, particularly in general-purpose models, to ensure safety and alignment with human values.
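The abstract does not specify how the test items were presented to GPT-3. The sketch below is purely illustrative: it shows one plausible way an objective, multiple-choice CI item could be administered through the legacy OpenAI completions API. The model name, decoding parameters, and the sample question are assumptions for illustration, not the authors' actual protocol or items.

```python
# Illustrative sketch only (not the authors' method): administering one
# hypothetical multiple-choice item to a GPT-3-era completion model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# A made-up item, phrased the way an objective CI test question
# might be given to the model as plain text.
item = (
    "Question: Which word does not belong with the others?\n"
    "A) apple  B) pear  C) carrot  D) plum\n"
    "Answer with a single letter."
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3-era completion model
    prompt=item,
    max_tokens=5,
    temperature=0,             # deterministic output, easier to score
)

# The returned text would then be scored against the answer key,
# item by item, to build an objective test score for the model.
print(response.choices[0].text.strip())
```

Self-assessment items could be administered the same way, with the model asked to rate its own abilities on the questionnaire's response scale; comparing those ratings with the scored objective items is what reveals the alignment (or mismatch) reported in the abstract.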
Publisher
Humanities and Social Sciences Communications
Published On
Dec 02, 2024
Authors
Ljubiša Bojić, Irena Stojković, Zorana Jolić Marjanović
Tags
GPT-3
artificial intelligence consciousness
cognitive intelligence
emotional intelligence
self-assessment
language models
AI alignment
Listen, Learn & Level Up
Over 10,000 hours of research content in 25+ fields, available in 12+ languages.
No more digging through PDFs, just hit play and absorb the world's latest research in your language, on your time.
listen to research audio papers with researchbunny