A deep learning framework for gender sensitive speech emotion recognition based on MFCC feature selection and SHAP analysis

Computer Science

Q. Hu, Y. Peng, et al.

A new deep-learning approach boosts speech emotion recognition accuracy by up to 15% over prior methods and enables real-time analysis for applications such as live TV audience monitoring. The research, by Qingqing Hu, Yiran Peng, and Zhong Zheng, showcases CNN- and LSTM-driven models that decode emotions including happiness, sadness, anger, fear, surprise, and neutrality.
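
To illustrate the kind of architecture the summary describes, here is a minimal sketch (not the authors' implementation) of a CNN + LSTM classifier over MFCC features. The number of MFCC coefficients, frame count, and layer sizes are assumptions for illustration only; the six emotion classes come from the summary above.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_MFCC = 40      # assumed number of MFCC coefficients per frame
NUM_FRAMES = 200   # assumed number of frames per utterance
EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "neutral"]

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, NUM_MFCC)),
    # 1-D convolutions extract local spectral patterns from the MFCC sequence
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM models the longer-range temporal dynamics of the emotion cues
    layers.LSTM(128),
    layers.Dropout(0.3),
    layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch to confirm shapes; real training would use labelled speech MFCCs
x = np.random.randn(8, NUM_FRAMES, NUM_MFCC).astype("float32")
y = np.random.randint(0, len(EMOTIONS), size=(8,))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x).shape)  # (8, 6): one probability per emotion class
```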

Citation Metrics
Citations: 1
Influential Citations: 0
Reference Count: 47

Note: The citation metrics presented here have been sourced from Semantic Scholar and OpenAlex.
