A neural speech decoding framework leveraging deep learning and speech synthesis

Engineering and Technology

X. Chen, R. Wang, et al.

Discover groundbreaking advancements in brain-computer interface technology! This research by Xupeng Chen, Ran Wang, and their colleagues introduces a pioneering deep learning framework that decodes human speech from neural signals, paving the way for speech restoration in individuals with neurological deficits.

~3 min • Beginner • English
Abstract
Decoding human speech from neural signals is essential for brain-computer interface (BCI) technologies that aim to restore speech in populations with neurological deficits. However, it remains a highly challenging task, compounded by the scarce availability of neural signals with corresponding speech and by the complexity and high dimensionality of the data. Here we present a novel deep learning-based neural speech decoding framework that includes an ECoG decoder, which translates electrocorticographic (ECoG) signals from the cortex into interpretable speech parameters, and a novel differentiable speech synthesizer, which maps speech parameters to spectrograms. We have developed a companion speech-to-speech auto-encoder, consisting of a speech encoder and the same speech synthesizer, to generate reference speech parameters that facilitate ECoG decoder training. This framework generates natural-sounding speech and is highly reproducible across a cohort of 48 participants. Our experimental results show that our models can decode speech with high correlation even when limited to causal operations, which is necessary for adoption by real-time neural prostheses. Finally, we successfully decode speech in participants with either left or right hemisphere coverage, which could lead to speech prostheses for patients with deficits resulting from left hemisphere damage.
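For readers who want a concrete picture of the training scheme the abstract outlines, below is a minimal PyTorch sketch of the two stages: a speech-to-speech auto-encoder trained first, then an ECoG decoder trained against the reference speech parameters produced by that encoder, with the differentiable synthesizer shared between both paths. All module names, layer sizes, and the specific speech-parameter dimensions are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of the two-stage framework described in the abstract.
# Layer sizes, parameter counts, and module structure are assumptions.
import torch
import torch.nn as nn

N_PARAMS = 18   # assumed size of the interpretable speech-parameter vector
N_MELS = 80     # assumed spectrogram resolution

class SpeechSynthesizer(nn.Module):
    """Differentiable mapping from speech parameters to spectrogram frames.
    Shared by both the speech auto-encoder and the ECoG decoding path."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PARAMS, 128), nn.ReLU(),
            nn.Linear(128, N_MELS),
        )
    def forward(self, params):            # (batch, time, N_PARAMS)
        return self.net(params)           # (batch, time, N_MELS)

class SpeechEncoder(nn.Module):
    """Spectrogram -> speech parameters (the auto-encoder's front half)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MELS, 128), nn.ReLU(),
            nn.Linear(128, N_PARAMS),
        )
    def forward(self, spec):
        return self.net(spec)

class ECoGDecoder(nn.Module):
    """ECoG electrode signals -> speech parameters. A causal temporal
    convolution keeps the model usable for real-time prostheses."""
    def __init__(self, n_electrodes=64, kernel=9):
        super().__init__()
        self.pad = kernel - 1             # left-pad only => causal
        self.conv = nn.Conv1d(n_electrodes, 128, kernel)
        self.head = nn.Linear(128, N_PARAMS)
    def forward(self, ecog):              # (batch, time, n_electrodes)
        x = ecog.transpose(1, 2)
        x = nn.functional.pad(x, (self.pad, 0))   # causal padding
        x = torch.relu(self.conv(x)).transpose(1, 2)
        return self.head(x)

synth, encoder, decoder = SpeechSynthesizer(), SpeechEncoder(), ECoGDecoder()

# Stage 1: train encoder + synthesizer as a speech-to-speech auto-encoder.
spec = torch.randn(2, 100, N_MELS)        # dummy speech spectrogram
recon = synth(encoder(spec))
stage1_loss = nn.functional.mse_loss(recon, spec)

# Stage 2: the trained encoder supplies reference speech parameters; the
# ECoG decoder learns to match them, and the shared synthesizer turns its
# output back into a spectrogram.
ecog = torch.randn(2, 100, 64)            # dummy ECoG, aligned with the speech
with torch.no_grad():
    ref_params = encoder(spec)
pred_params = decoder(ecog)
stage2_loss = (nn.functional.mse_loss(pred_params, ref_params)
               + nn.functional.mse_loss(synth(pred_params), spec))
```

The left-only padding in the convolution is what makes this sketch's decoder causal, the property the abstract highlights as necessary for real-time neural prostheses.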
Publisher
Nature Machine Intelligence
Published On
Apr 08, 2024
Authors
Xupeng Chen, Ran Wang, Amirhossein Khalilian-Gourtani, Leyao Yu, Patricia Dugan, Daniel Friedman, Werner Doyle, Orrin Devinsky, Yao Wang, Adeen Flinker
Tags
brain-computer interface
speech decoding
deep learning
neural signals
electrocorticographic signals
speech parameters
real-time prostheses