This study investigates how the human brain processes sequences of speech sounds to recognize words. Using magnetoencephalography (MEG) recordings from 21 participants listening to narratives, the researchers found that the brain continuously encodes the three most recent speech sounds, maintaining each representation even after the sensory input has faded. Each sound's representation evolves over time, jointly encoding its phonetic features and the time elapsed since its onset, thereby representing both content and order. These representations become active earlier when a phoneme is predictable and are sustained longer when the word's lexical identity remains uncertain. The findings offer insight into the intermediary representations between sensory input and sub-lexical units, highlighting the flexibility of neural representations in speech processing.
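The mechanism described above can be sketched as a small data structure. This is a toy illustration, not the authors' analysis code: all class and method names (`PhonemeTrace`, `PhonemeBuffer`, `hear`, `order`) are hypothetical, chosen only to mirror the finding that the brain keeps the three most recent phonemes and tags each with its phonetic content and elapsed time, from which order can be recovered.

```python
from dataclasses import dataclass

@dataclass
class PhonemeTrace:
    """One maintained speech sound: content (features) plus timing."""
    phoneme: str      # hypothetical phoneme label, e.g. "p"
    features: dict    # hypothetical phonetic features, e.g. {"voiced": False}
    onset_ms: float   # when the phoneme began

    def elapsed(self, now_ms: float) -> float:
        # In the study, time-since-onset is part of what the evolving
        # representation encodes, so order falls out of relative elapsed time.
        return now_ms - self.onset_ms

class PhonemeBuffer:
    """Keeps only the three most recent sounds, mirroring the finding
    that the brain continuously encodes the last three phonemes."""
    CAPACITY = 3

    def __init__(self) -> None:
        self.traces: list[PhonemeTrace] = []

    def hear(self, phoneme: str, features: dict, onset_ms: float) -> None:
        self.traces.append(PhonemeTrace(phoneme, features, onset_ms))
        self.traces = self.traces[-self.CAPACITY:]  # oldest trace drops out

    def order(self, now_ms: float) -> list[str]:
        # Smallest elapsed time = most recently heard phoneme.
        return [t.phoneme for t in
                sorted(self.traces, key=lambda t: t.elapsed(now_ms))]

buf = PhonemeBuffer()
for onset, ph in [(0, "s"), (80, "p"), (160, "i"), (240, "n")]:
    buf.hear(ph, {}, onset)

print(buf.order(now_ms=300))  # → ['n', 'i', 'p'] (most recent first)
```

Note that order here is implicit in the elapsed-time dimension rather than stored as an explicit index, echoing the paper's point that each representation encodes both content and time since onset.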
Publisher
Nature Communications
Published On
Nov 03, 2022
Authors
Laura Gwilliams, Jean-Remi King, Alec Marantz, David Poeppel
Tags
speech processing
word recognition
magnetoencephalography
neural representations
phonetic features