Abstract
This study investigates how the brain efficiently extracts meaning from speech despite acoustic variability. Using MEG data during audiobook listening, the researchers explored the interplay between structural (syntactic) and statistical language knowledge in shaping neural dynamics (phase and amplitude modulation). Syntactic features and statistical cues (from a transformer model) were used to predict neural activity. The findings reveal a joint contribution of both types of linguistic information to neural signal reconstruction, with syntactic features exhibiting a more temporally dispersed impact. Word entropy and the number of closing syntactic constituents were linked to phase-amplitude coupling, suggesting their roles in temporal prediction and cortical oscillation alignment during speech processing. The study concludes that structured and statistical information synergistically shape neural dynamics in spoken language comprehension via a cross-frequency coupling mechanism.
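The phase-amplitude coupling (PAC) the abstract refers to can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it is the mean-vector-length PAC measure (Canolty-style) applied to synthetic data, with illustrative band choices (theta phase, gamma amplitude) and hypothetical function names:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase band-pass filter between lo and hi Hz
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 60)):
    # Mean vector length: |mean(amp * e^{i*phase})|, normalized by mean amplitude.
    # High values mean the high-frequency envelope tracks the low-frequency phase.
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic test: gamma whose amplitude is modulated by theta phase (coupled)
# versus constant-amplitude gamma riding on the same theta (uncoupled).
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = (1 + theta) * np.sin(2 * np.pi * 45 * t) + theta
uncoupled = np.sin(2 * np.pi * 45 * t) + theta
```

Running `pac_mvl(coupled, fs)` yields a markedly higher value than `pac_mvl(uncoupled, fs)`, which is the kind of contrast a PAC analysis of MEG data exploits.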
Publisher
Nature Communications
Published On
Oct 14, 2024
Authors
Hugo Weissbart, Andrea E. Martin
Tags
speech processing
neural dynamics
syntactic features
statistical language knowledge
MEG data
cognitive neuroimaging
cross-frequency coupling