Recent advances in deep learning algorithms, particularly language transformers, have demonstrated remarkable capabilities in processing natural language. This study investigates whether these algorithms and the human brain rely on similar computational principles during sentence processing. By analyzing fMRI and MEG data from 102 subjects reading Dutch sentences, and comparing these brain responses to the activations of a wide variety of deep language models, the authors found that brain–model similarity primarily depends on the algorithm's ability to predict words from context. This similarity also reveals a hierarchy of perceptual, lexical, and compositional representations across cortical regions. The study concludes that modern language algorithms partially converge toward brain-like solutions, suggesting a promising path to understanding the foundations of natural language processing.
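The brain–model comparison described above is typically quantified with a "brain score": a linear model is fit to map the language model's activations onto the recorded brain responses, and the correlation between predicted and held-out responses is the score. The sketch below illustrates this idea on synthetic data (the dimensions, ridge penalty, and shared-latent construction are assumptions for illustration, not the study's actual pipeline).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: in the study, X would be deep language model
# activations per word and Y the fMRI/MEG responses. Here both are
# generated from a shared latent signal so that a mapping exists.
n_words, n_model_dims, n_voxels, n_latent = 200, 32, 10, 8
latent = rng.standard_normal((n_words, n_latent))
X = latent @ rng.standard_normal((n_latent, n_model_dims)) \
    + 0.1 * rng.standard_normal((n_words, n_model_dims))
Y = latent @ rng.standard_normal((n_latent, n_voxels)) \
    + rng.standard_normal((n_words, n_voxels))

# Train/test split over words
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression in closed form: W = (X'X + alpha*I)^(-1) X'Y
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_model_dims),
                    X_tr.T @ Y_tr)
Y_pred = X_te @ W

def pearson_per_column(a, b):
    """Pearson correlation between matching columns of a and b."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

# "Brain score": per-voxel correlation on held-out words
scores = pearson_per_column(Y_pred, Y_te)
print(f"mean brain score: {scores.mean():.3f}")
```

Because the synthetic brain and model signals share a latent source, the mean score comes out positive; with unrelated signals it would hover near zero, which is exactly what makes the score a usable similarity measure.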
Publisher
Communications Biology
Published On
Feb 16, 2022
Authors
Charlotte Caucheteux, Jean-Rémi King
Tags
deep learning
language models
human brain activity
sentence processing
fMRI
MEG
predictive algorithms