Brains and algorithms partially converge in natural language processing

Computer Science

C. Caucheteux and J. King

Explore how deep learning algorithms are beginning to mirror human brain activity during language processing. Researchers Charlotte Caucheteux and Jean-Rémi King examine the striking parallels between computational linguistics and cognitive neuroscience, revealing how modern language models may help explain how the human brain processes natural language.

Introduction
Deep learning algorithms, especially language transformers, have achieved remarkable success in tasks previously considered the exclusive domain of humans, such as text completion, translation, and summarization. This raises a fundamental question: do these algorithms process language similarly to the human brain? Prior neuroimaging studies have suggested a partial convergence between deep language models and brain activity, but the computational principles underlying this similarity remain unclear. These studies often focused on a small number of models or subjects, and typically analyzed only the spatial or only the temporal dimension of brain responses. Furthermore, the inherent correlations between model architecture, training objectives, and training data complicate the identification of the factors that drive brain-like representations. This research systematically addresses these limitations by comparing a diverse range of deep language models against human brain responses to sentences, recorded with both fMRI and MEG, to pinpoint the computational elements that lead to brain-like representations.
Literature Review
Several neuroimaging studies have shown correlations between deep learning models and human brain responses to language. Word embeddings, dense vectors representing word meaning, have demonstrated linear mappings onto brain activity evoked by words presented in isolation or in context. Contextualized activations from language transformers have further refined these mappings, particularly in the prefrontal, temporal, and parietal cortices. Specific computations within these models, such as surprisal (word probability given context) and syntactic parsing, have also been linked to brain responses measured by ERPs and fMRI. However, existing work is fragmented, often using limited datasets (in terms of subjects or stimuli) and neglecting the temporal dynamics of brain responses. More importantly, the factors influencing a model's capacity to generate brain-like representations remain largely unknown, as previous studies have usually examined only a limited range of models.
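Surprisal, mentioned above as one of the model-derived quantities linked to ERP and fMRI responses, has a simple definition: the negative log probability of a word given its preceding context. The sketch below is a minimal illustration using hypothetical probabilities (in practice these would come from a language model); it is not taken from the studies reviewed here.

```python
# Minimal sketch of surprisal: -log2 P(word | context).
# The probabilities below are hypothetical placeholders.
import numpy as np

def surprisal(p_word_given_context):
    """Surprisal in bits for a word with the given conditional probability."""
    return -np.log2(p_word_given_context)

# A predictable word (high probability) yields low surprisal;
# an unexpected word (low probability) yields high surprisal.
print(surprisal(0.5))    # 1.0 bit
print(surprisal(0.01))   # ~6.64 bits
```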
Methodology
This study utilized fMRI and MEG recordings from 102 healthy adults reading 400 isolated Dutch sentences, with each subject participating in two one-hour sessions. The researchers first established a shared response model across subjects to quantify the signal-to-noise ratio of brain responses to sentences, focusing on regions involved in reading, such as V1, the fusiform gyrus, the superior and middle temporal gyri, and the pre-motor and infero-frontal cortices. A variety of deep language models were trained on the same Wikipedia corpus, varying in architecture (4-12 layers, 128-512 dimensions, 4-8 attention heads), training objective (causal language modeling, CLM, or masked language modeling, MLM), and amount of training data (100 distinct training stages). Activations from these models (visual, word, and compositional embeddings) were extracted and mapped onto the corresponding fMRI and MEG responses using ℓ2-penalized regression. The accuracy of each mapping was evaluated with a brain score based on the Pearson correlation between predicted and observed brain responses. A permutation feature importance analysis based on a random forest was then used to assess the independent contribution of different model properties (architecture, training, and performance) to the emergence of brain-like representations. Analyzing such a wide range of models made it possible to control for the co-variation among these parameters and to evaluate how much each one contributes to brain-like activations.
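The brain-score mapping described above can be illustrated with a minimal sketch: an ℓ2-penalized (ridge) regression is fit from model activations to brain responses and evaluated with a cross-validated Pearson correlation. The arrays X and Y below are hypothetical placeholders for word-aligned model activations and fMRI/MEG responses; this is an assumed reconstruction of the general procedure, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def brain_score(X, Y, n_splits=5):
    """Cross-validated Pearson correlation between predicted and observed brain responses."""
    scores = []
    for train, test in KFold(n_splits=n_splits).split(X):
        # l2-penalized linear mapping from model activations to brain responses
        mapping = RidgeCV(alphas=np.logspace(-1, 6, 8))
        mapping.fit(X[train], Y[train])
        Y_pred = mapping.predict(X[test])
        # Pearson r per voxel/sensor, then averaged
        r = [np.corrcoef(Y[test][:, v], Y_pred[:, v])[0, 1] for v in range(Y.shape[1])]
        scores.append(np.nanmean(r))
    return float(np.mean(scores))

# Example with random data of plausible shape (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))   # e.g. middle-layer embeddings, one row per word
Y = rng.normal(size=(1000, 50))    # e.g. fMRI voxels in a region of interest
print(brain_score(X, Y))
```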
Key Findings
The study's key findings are threefold. First, various deep learning algorithms exhibited significant linear mappings onto brain areas associated with reading. These mappings were strongest in the middle layers of the deep language models, suggesting that intermediate representations are more brain-like than input or output layers. Second, the brain-mapping ability of the algorithms primarily correlated with their capacity to predict words from context. Random embeddings from untrained networks also yielded significant brain scores, indicating a baseline level of similarity independent of linguistic proficiency; beyond this baseline, however, brain scores correlated strongly with word-prediction performance. The correlation between brain score and language performance was particularly pronounced in the superior temporal sulcus and gyrus for word embeddings and was widespread for the middle layers. Importantly, the very best models did not always achieve the highest brain scores, suggesting potential overfitting to the prediction task or a mismatch between model objectives and brain function. Third, a permutation feature importance analysis revealed that word-prediction accuracy was the most significant factor influencing brain scores, surpassing architecture and the amount of training data, although these other factors also contributed significantly.
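The permutation feature importance analysis mentioned above can be sketched as follows: a random forest predicts each model's brain score from its properties, and shuffling one property column at a time measures how much that property matters. The features, sample size, and simulated brain scores below are hypothetical and for illustration only; this is an assumed form of the analysis, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_models = 200

# Hypothetical model properties: word-prediction accuracy, training-data amount,
# number of layers, dimensionality, attention heads.
X = np.column_stack([
    rng.uniform(0, 1, n_models),             # word-prediction accuracy
    rng.uniform(0, 1, n_models),             # relative amount of training data
    rng.integers(4, 13, n_models),           # layers
    rng.choice([128, 256, 512], n_models),   # dimensions
    rng.choice([4, 8], n_models),            # attention heads
])
# Simulated brain scores, driven mostly by the first column for illustration.
y = 0.8 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.05, n_models)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=20, random_state=0)
names = ["accuracy", "training data", "layers", "dims", "heads"]
for name, imp in zip(names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```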
Discussion
This research significantly advances our understanding of the relationship between deep learning models and human brain processing of language. The finding that word-prediction ability is the most crucial factor in achieving brain-like representations aligns with studies in vision and audition, suggesting a general convergence between optimized deep learning models and neural processing in the brain. This convergence is, however, partial: the models that predicted words best did not always show the highest brain scores, indicating that brain function may involve computations not perfectly captured by current language models. Further research is needed to fully explain these differences, which may involve the brain's recurrent architecture, smaller and more grounded training data, and more complex tasks beyond word prediction. The study also precisely maps the hierarchy of brain representations, from visual processing in V1 to lexical processing in the fusiform gyrus and then to broader fronto-temporo-parietal networks supporting word-level and then compositional processing. These findings suggest that deep learning models provide a valuable tool for investigating the computational underpinnings of human language processing, while also highlighting areas where further model refinement is necessary for a more complete understanding of human cognition.
Conclusion
This study demonstrates a partial convergence between deep language models and human brain activity during sentence processing. The ability of these models to map onto brain responses is strongly linked to their proficiency in predicting words from context. However, optimal language performance in the model does not guarantee the best brain mapping, suggesting a need for models that better capture the complexities of human brain computations. Future research should explore more nuanced model architectures and training paradigms that more closely reflect human learning and cognitive processes.
Limitations
The study's relatively small sample size (102 subjects) limits the generalizability of the findings. Furthermore, the use of isolated sentences, rather than continuous narratives, might not fully capture the complexity of natural language understanding. The correlation between model activations and brain activity, while significant, remains relatively low, likely reflecting the inherent noise in neuroimaging data and the complexity of brain function. Future studies using larger datasets and more sophisticated analysis techniques are needed to further refine the understanding of the relationship between artificial and biological neural networks.