Large Language Models (LLMs) have demonstrated remarkable proficiency in human language, yet their cognitive capabilities remain a subject of debate. This paper evaluates LLMs using a distinction between formal linguistic competence (knowledge of the rules and statistical regularities of language) and functional linguistic competence (using language in real-world contexts). Drawing on evidence from human neuroscience, where these two capacities rely on distinct neural mechanisms, the authors posit that human-like language use requires mastery of both. While LLMs excel at formal competence, their functional competence is inconsistent and often depends on specialized fine-tuning or external modules. The authors argue that future LLMs will need distinct mechanisms for the two types of competence, mirroring the functional modularity of the human brain.
Publisher
Preprint
Published On
Apr 14, 2024
Authors
Kyle Mahowald, Idan A. Blank, Joshua B. Tenenbaum, Anna A. Ivanova, Nancy Kanwisher, Evelina Fedorenko
Tags
Large Language Models
linguistic competence
functional competence
human neuroscience
cognitive capabilities
modularity
fine-tuning