A neural speech decoding framework leveraging deep learning and speech synthesis
X. Chen, R. Wang, et al.
This study introduces a deep learning framework for neural speech decoding that pairs an ECoG decoder with a differentiable speech synthesizer and a speech-to-speech auto-encoder, producing interpretable speech parameters and natural-sounding speech. The approach is reproducible across 48 participants, yields high decoding correlation even with causal operations suitable for real-time prostheses, and works with either left- or right-hemisphere electrode coverage. This research was conducted by Xupeng Chen, Ran Wang, Amirhossein Khalilian-Gourtani, Leyao Yu, Patricia Dugan, Daniel Friedman, Werner Doyle, Orrin Devinsky, Yao Wang, and Adeen Flinker.
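To make the decode-then-synthesize idea concrete, here is a minimal toy sketch of that two-stage pipeline: a stand-in "ECoG decoder" maps per-frame neural features to interpretable speech parameters (pitch and loudness), and a stand-in "synthesizer" renders a waveform from them. All function names, weights, shapes, and parameter choices are illustrative assumptions, not the authors' implementation, which uses trained neural networks and a richer parameter set.

```python
import math
import random

def sigmoid(x):
    """Logistic squashing function used to keep decoded parameters bounded."""
    return 1.0 / (1.0 + math.exp(-x))

def decode_ecog(ecog_frames, w_pitch, w_loud):
    """Toy stand-in for the ECoG decoder: maps each frame of electrode
    values to interpretable speech parameters (pitch in Hz, loudness in 0-1).
    A linear projection plus sigmoid replaces the paper's trained network."""
    params = []
    for frame in ecog_frames:
        z_pitch = sum(c * w for c, w in zip(frame, w_pitch))
        z_loud = sum(c * w for c, w in zip(frame, w_loud))
        params.append((100.0 + 100.0 * sigmoid(z_pitch),  # 100-200 Hz
                       sigmoid(z_loud)))                   # 0-1
    return params

def synthesize(params, frame_len=80, sr=8000):
    """Toy stand-in for the differentiable synthesizer: each frame emits
    frame_len samples of a sine at the decoded pitch, scaled by the decoded
    loudness, with phase carried across frame boundaries."""
    audio = []
    phase = 0.0
    for pitch, loud in params:
        step = 2.0 * math.pi * pitch / sr
        for _ in range(frame_len):
            audio.append(loud * math.sin(phase))
            phase += step
    return audio

# Random, untrained stand-ins: 50 frames of 64 electrode channels.
random.seed(0)
ecog_frames = [[random.gauss(0, 1) for _ in range(64)] for _ in range(50)]
w_pitch = [random.gauss(0, 0.1) for _ in range(64)]
w_loud = [random.gauss(0, 0.1) for _ in range(64)]

params = decode_ecog(ecog_frames, w_pitch, w_loud)
audio = synthesize(params)
print(len(params), len(audio))  # 50 frames -> 50 * 80 = 4000 samples
```

The intermediate `params` stage is what makes such a design interpretable: each decoded frame is a named acoustic quantity that can be inspected or corrected before synthesis, rather than an opaque latent vector.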