Abstract
Advances in deep learning have led to the development of predictive deep language models (DLMs). This study investigates whether the human brain and autoregressive DLMs share computational principles when processing natural narratives. Nine participants underwent electrocorticography (ECoG) while listening to a podcast. The findings show that both the human brain and DLMs engage in continuous next-word prediction, match pre-onset predictions to incoming words to compute surprise, and use contextual embeddings to represent words. This suggests that autoregressive DLMs offer a biologically plausible computational framework for understanding the neural processing of language.
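The three shared principles named in the abstract (next-word prediction, surprise as a prediction-error signal, and contextual embeddings) all fall out of a single forward pass through an autoregressive model. Below is a minimal sketch, assuming GPT-2 via the Hugging Face transformers library; it illustrates the general technique, not the authors' exact pipeline or model.

```python
# Minimal sketch: per-word surprisal and contextual embeddings from an
# autoregressive language model (GPT-2 here, as an illustrative stand-in).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "the quick brown fox jumps over the lazy dog"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# Contextual embedding for each token: the final-layer hidden state.
embeddings = out.hidden_states[-1]  # shape (1, seq_len, hidden_size)

# Log-probability the model assigned to each actual next token, given
# only the preceding context (continuous next-word prediction).
log_probs = torch.log_softmax(out.logits[0, :-1], dim=-1)
next_ids = ids[0, 1:]
token_log_probs = log_probs[torch.arange(len(next_ids)), next_ids]

# Surprisal = negative log-probability: large when the incoming word
# mismatches the model's pre-onset prediction.
for tok, lp in zip(tokenizer.convert_ids_to_tokens(next_ids), token_log_probs):
    print(f"{tok!r}: surprisal = {-lp.item():.2f} nats")
```

In an analysis like the one described, such per-word surprisal values and embedding vectors would then be compared against neural signals recorded around each word's onset.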
Publisher
Nature Neuroscience
Published On
Mar 07, 2022
Authors
Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, Aren Jansen, Harshvardhan Gazula, Gina Choe, Aditi Rao, Catherine Kim, Colton Casto, Lora Devore, Adeen Flinker, Liat Hasenfratz, Omer Levy, Avinatan Hassidim, Sasha Fanda, Werner Doyle, Daniel Friedman, Patricia Dugan, Lucia Melloni, Roi Reichart, Michael Brenner, Yossi Matias, Kenneth A. Norman, Orrin Devinsky, Uri Hasson
Tags
deep learning
language models
human brain
next-word prediction
natural narratives
computational principles
neural processing