Abstract
Large Language Models (LLMs) have demonstrated remarkable proficiency in human language, yet their cognitive capabilities remain a subject of debate. This paper evaluates LLMs through a distinction between formal linguistic competence (knowledge of linguistic rules and patterns) and functional linguistic competence (the ability to use language in real-world contexts). Drawing on evidence from human neuroscience, the authors posit that human-like language use requires mastery of both. While LLMs excel at formal competence, their functional competence is inconsistent and often depends on specialized fine-tuning or external modules. The authors argue that future LLMs will need distinct mechanisms for both types of competence, mirroring the modularity of the human brain.
Publisher
A preprint
Published On
Apr 14, 2024
Authors
Kyle Mahowald, Idan A. Blank, Joshua B. Tenenbaum, Anna A. Ivanova, Nancy Kanwisher, Evelina Fedorenko
Tags
Large Language Models
linguistic competence
functional competence
human neuroscience
cognitive capabilities
modularity
fine-tuning