Advances in deep learning have led to the development of predictive deep language models (DLMs). This study investigates whether the human brain and autoregressive DLMs share computational principles when processing natural narratives. Nine participants underwent electrocorticography (ECoG) while listening to a podcast. The findings show that both the human brain and DLMs engage in continuous next-word prediction before word onset, compare their pre-onset predictions with the incoming word to compute post-onset surprise, and rely on contextual embeddings to represent words. This suggests that autoregressive DLMs offer a biologically plausible computational framework for studying the neural basis of language processing.
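As a minimal illustrative sketch (not the authors' code), the word-by-word surprise described above can be approximated with an off-the-shelf autoregressive model: the surprisal of each word is the negative log probability the model assigned to it given the preceding context. The choice of GPT-2 and the example sentence below are assumptions for demonstration only.

```python
# Sketch: surprisal from an autoregressive language model (illustrative, not the paper's pipeline).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "So I was walking down the street the other day"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# Log probability of each token given its preceding context.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
target_ids = ids[:, 1:]
token_log_probs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)

# Surprisal = negative log probability of the actual next token.
surprisal = -token_log_probs.squeeze(0)
for tok, s in zip(tokenizer.convert_ids_to_tokens(target_ids.squeeze(0)), surprisal):
    print(f"{tok:>12s}  surprisal = {s.item():.2f}")
```

In this framing, high surprisal marks words the model predicted poorly from context, which is the quantity compared against post-onset neural responses in analyses of this kind.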
Publisher
Nature Neuroscience
Published On
Mar 07, 2022
Authors
Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, Aren Jansen, Harshvardhan Gazula, Gina Choe, Aditi Rao, Catherine Kim, Colton Casto, Lora Devore, Adeen Flinker, Liat Hasenfratz, Omer Levy, Avinatan Hassidim, Sasha Fanda, Werner Doyle, Daniel Friedman, Patricia Dugan, Lucia Melloni, Roi Reichart, Michael Brenner, Yossi Matias, Kenneth A. Norman, Orrin Devinsky, Uri Hasson
Tags
deep learning
language models
human brain
next-word prediction
natural narratives
computational principles
neural processing