Analyzing Memory Effects in Large Language Models through the Lens of Cognitive Psychology

Z. Cao, L. Schooler, et al.

Memory is adaptive but fallible — and this study finds that cutting-edge language models echo many human memory quirks. Using classic human-memory paradigms, the authors show LLMs display list-length and list-strength effects, associative interference, DRM-style false recognitions, and cross-domain generalization, while differing in order sensitivity and resilience to nonsense. This research was conducted by Zhaoyang Cao, Lael Schooler, and Reza Zafarani.

Abstract
Memory, a cornerstone of human cognition, is adaptive yet fallible, as exemplified by Schacter’s seven “sins” of memory. While these phenomena are well studied in psychology and neuroscience, their presence in artificial systems, especially large language models (LLMs), is underexplored. Using paradigms from human memory research, this work systematically evaluates seven phenomena in state-of-the-art LLMs and compares them to human behavior: list length effect, list strength effect, fan effect, nonsense effect, position effect, DRM-style false memories, and cross-domain generalization. We find notable alignments: both humans and LLMs remember less under higher memory load (list length effect), benefit from repeated exposure (list strength effect), and struggle with associative interference (fan effect). LLMs also show false recognitions of semantically related but unseen items (DRM false memories) and can generalize learned associations across domains. Key divergences emerge: LLMs are less sensitive to input order (limited primacy/recency) and more robust to random or meaningless material (nonsense effect). These results clarify where LLMs echo human memory reconstruction and where architectural differences produce distinct patterns of error and resilience.
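To make the DRM false-memory paradigm mentioned in the abstract concrete, here is a minimal, hypothetical sketch of how such a probe could be scored. The word lists and the `score_recognition` helper are illustrative placeholders, not the authors' actual materials or code: a studied list of sleep-related words is followed by a recognition test that includes the never-shown critical lure ("sleep") and unrelated foils; a false "old" response to the lure is the DRM false-memory signature.

```python
# Hypothetical DRM-style recognition probe (illustrative lists, not the paper's stimuli).
STUDY_LIST = ["bed", "rest", "awake", "tired", "dream", "snooze", "nap", "doze"]
CRITICAL_LURE = "sleep"            # semantically related, but never presented
UNRELATED_FOILS = ["chair", "apple", "river"]

def score_recognition(responses):
    """responses: dict mapping probe word -> True ('old') / False ('new').

    Returns (hit rate on studied items,
             false-alarm rate on the critical lure,
             false-alarm rate on unrelated foils)."""
    hits = sum(responses[w] for w in STUDY_LIST) / len(STUDY_LIST)
    lure_fa = float(responses[CRITICAL_LURE])
    foil_fa = sum(responses[w] for w in UNRELATED_FOILS) / len(UNRELATED_FOILS)
    return hits, lure_fa, foil_fa

# Example: a respondent that recognizes all studied words, falsely
# "remembers" the lure, and correctly rejects the unrelated foils.
responses = {w: True for w in STUDY_LIST}
responses[CRITICAL_LURE] = True        # the DRM false-memory error
responses.update({w: False for w in UNRELATED_FOILS})
print(score_recognition(responses))    # -> (1.0, 1.0, 0.0)
```

A high lure false-alarm rate alongside a low foil false-alarm rate indicates semantically driven false recognition rather than indiscriminate guessing, which is the pattern the abstract reports for LLMs.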
Publisher
arXiv
Published On
Sep 21, 2025
Authors
Zhaoyang Cao, Lael Schooler, Reza Zafarani
Tags
memory
large language models
Schacter's seven sins
DRM false memories
list length effect
associative interference
cross-domain generalization