
Psychology

Shared computational principles for language processing in humans and deep language models

A. Goldstein, Z. Zada, et al.

Discover how the human brain and deep language models (DLMs) converge when processing natural narratives. This study, led by researchers from Princeton University and Google Research, uncovers the shared computational principles that enable both to predict upcoming words, deepening our understanding of neural language processing.

Abstract
Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
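The three principles in the abstract map onto quantities that can be read out of any autoregressive language model. Below is a minimal Python sketch using the Hugging Face transformers library and the publicly released GPT-2 model (the study's primary autoregressive DLM). The snippet is illustrative, not the authors' analysis code; the sample sentence and variable names are ours.

# Minimal sketch (assumptions noted above): per-word prediction,
# surprise, and contextual embedding from an autoregressive DLM.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

# Illustrative sentence; the study used a 30-min podcast transcript.
text = "So I started my first day at the new job"
ids = tokenizer(text, return_tensors="pt").input_ids  # shape (1, T)

with torch.no_grad():
    out = model(ids)

# Logits at position t are the model's prediction for token t+1, so
# shift by one to align each token with the distribution that predicted it.
log_probs = torch.log_softmax(out.logits[0, :-1], dim=-1)  # (T-1, vocab)
actual = ids[0, 1:]                                        # (T-1,)

for t in range(actual.shape[0]):
    tok_id = actual[t].item()
    # (1) Pre-onset prediction: the model's top candidate before the word arrives.
    predicted = tokenizer.decode(log_probs[t].argmax().item())
    # (2) Post-onset surprise: -log p(actual word | context), in nats.
    surprise = -log_probs[t, tok_id].item()
    # (3) Contextual embedding: last hidden layer at the preceding position,
    #     i.e., the context representation from which the prediction is made.
    embedding = out.hidden_states[-1][0, t]  # 768-dim for GPT-2 small
    print(f"{tokenizer.decode(tok_id)!r:>12}  top-guess={predicted!r:>10}  "
          f"surprise={surprise:5.2f}")

In the study's framing, pre-onset predictions and post-onset surprise values computed this way were related to neural activity before and after each word's onset, and the contextual embeddings were linearly mapped to ECoG electrode responses.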
Publisher
Nature Neuroscience
Published On
Mar 07, 2022
Authors
Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, Aren Jansen, Harshvardhan Gazula, Gina Choe, Aditi Rao, Catherine Kim, Colton Casto, Lora Devore, Adeen Flinker, Liat Hasenfratz, Omer Levy, Avinatan Hassidim, Sasha Fanda, Werner Doyle, Daniel Friedman, Patricia Dugan, Lucia Melloni, Roi Reichart, Michael Brenner, Yossi Matias, Kenneth A. Norman, Orrin Devinsky, Uri Hasson
Tags
deep learning
language models
human brain
next-word prediction
natural narratives
computational principles
neural processing