This study investigates how the brain efficiently extracts meaning from speech despite acoustic variability. Using MEG recordings during audiobook listening, the researchers examined how structural (syntactic) and statistical language knowledge jointly shape neural dynamics, specifically phase and amplitude modulations. Syntactic features and statistical cues derived from a transformer language model were used to predict neural activity. The findings reveal that both types of linguistic information contribute jointly to neural signal reconstruction, with syntactic features exerting a more temporally dispersed effect. Word entropy and the number of closing syntactic constituents were linked to phase-amplitude coupling, suggesting roles in temporal prediction and the alignment of cortical oscillations during speech processing. The authors conclude that structural and statistical information synergistically shape neural dynamics in spoken language comprehension via a cross-frequency coupling mechanism.
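The prediction step described above is typically implemented as a linearized encoding model, often called a temporal response function (TRF): word-level predictors are expanded into time-lagged copies and regressed onto the neural signal with ridge regression, and held-out reconstruction accuracy measures each feature's contribution. Below is a minimal, self-contained Python sketch of that idea on synthetic data; the sampling rate, lag window, feature names, and data are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): a time-lagged ridge encoding model
# that predicts a neural channel from word-level predictors, in the spirit of
# the TRF approach summarized above. All data here are synthetic.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 100                       # sampling rate in Hz (assumed)
n_samples = fs * 300           # 5 minutes of synthetic recording
n_features = 2                 # e.g. word entropy, closing-constituent count

# Word-level predictors as impulse trains at synthetic word onsets.
X = np.zeros((n_samples, n_features))
onsets = rng.choice(n_samples, size=900, replace=False)
X[onsets] = rng.normal(size=(len(onsets), n_features))

# Synthetic "MEG" channel: a lagged mixture of the predictors plus noise.
true_kernel = np.hanning(int(0.4 * fs))
y = sum(np.convolve(X[:, j], true_kernel * (j + 1), mode="same")
        for j in range(n_features))
y = y + rng.normal(scale=1.0, size=n_samples)

# Design matrix of time-lagged copies of each predictor, covering responses
# from 100 ms before to 500 ms after word onset. np.roll wraps around at the
# edges, which is a small artifact we tolerate in a sketch.
lags = np.arange(int(-0.1 * fs), int(0.5 * fs))

def lagged_design(X, lags):
    return np.concatenate([np.roll(X, lag, axis=0) for lag in lags], axis=1)

XL = lagged_design(X, lags)

# Sequential (unshuffled) split, as is standard for time-series data.
X_train, X_test, y_train, y_test = train_test_split(
    XL, y, test_size=0.25, shuffle=False)

model = Ridge(alpha=1e3).fit(X_train, y_train)
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"held-out reconstruction accuracy (Pearson r): {r:.3f}")

# Coefficients reshaped to (n_lags, n_features) approximate per-feature
# response kernels whose temporal spread can be compared across predictors.
trf = model.coef_.reshape(len(lags), n_features)
```

Reading the fitted kernels by lag is how a "more temporally dispersed impact" of one feature class over another could be quantified in a model of this kind.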
Publisher: Nature Communications
Published On: Oct 14, 2024
Authors: Hugo Weissbart, Andrea E. Martin
Tags: speech processing, neural dynamics, syntactic features, statistical language knowledge, MEG data, cognitive neuroimaging, cross-frequency coupling