
Linguistics and Languages

Decoding speech perception from non-invasive brain recordings

A. Défossez, C. Caucheteux, et al.

This innovative research by Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin, Ori Kabeli, and Jean-Rémi King presents a contrastive learning model that decodes speech perception from non-invasive MEG and EEG recordings with remarkable accuracy: up to 41% on average and 80% in the best participants. The study points to a promising new approach to understanding language processing.

Abstract
This paper introduces a contrastive learning model to decode speech perception from non-invasive MEG and EEG recordings. Using four public datasets (175 volunteers), the model achieves up to 41% accuracy in identifying the corresponding speech segment from 3 seconds of MEG signals, reaching 80% in the best participants. This performance enables decoding words and phrases unseen during training, highlighting the importance of contrastive learning, pretrained speech representations, and a multi-participant convolutional architecture. Analysis suggests the decoder relies on lexical and contextual semantic representations, offering a promising non-invasive approach for language decoding.
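The abstract describes a contrastive objective that aligns embeddings of brain recordings with embeddings of the speech segments participants heard, so that the correct segment can be identified among candidates. Below is a minimal sketch of such a CLIP-style (InfoNCE) loss, assuming paired brain and speech embeddings; it is not the authors' implementation, and the function name, tensor shapes, and temperature value are illustrative assumptions.

```python
# Minimal sketch of a contrastive (InfoNCE) objective aligning brain-signal
# embeddings with pretrained speech embeddings. Illustrative only.
import torch
import torch.nn.functional as F

def contrastive_loss(brain_emb: torch.Tensor,
                     speech_emb: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss over a batch of paired (brain, speech) segment embeddings.

    brain_emb:  (batch, dim) embeddings from a brain-signal encoder.
    speech_emb: (batch, dim) embeddings from a pretrained speech model.
    """
    # L2-normalise so dot products are cosine similarities.
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)

    # Similarity of every brain segment to every candidate speech segment.
    logits = brain_emb @ speech_emb.t() / temperature  # (batch, batch)

    # The matching speech segment for brain sample i is speech sample i.
    targets = torch.arange(brain_emb.size(0), device=brain_emb.device)
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors standing in for encoder outputs.
if __name__ == "__main__":
    brain = torch.randn(8, 256)   # e.g. 8 three-second MEG segments, embedded
    speech = torch.randn(8, 256)  # matching speech-representation embeddings
    print(contrastive_loss(brain, speech).item())
```

At evaluation time, decoding then amounts to ranking candidate speech segments by their similarity to a given brain segment and reporting how often the true segment ranks first, which is the accuracy figure quoted above.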
Publisher
Nature Machine Intelligence
Published On
Oct 05, 2023
Authors
Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin, Ori Kabeli, Jean-Rémi King
Tags
contrastive learning
speech perception
MEG
EEG
language decoding
non-invasive