
Computer Science
A deep learning framework for gender sensitive speech emotion recognition based on MFCC feature selection and SHAP analysis
Q. Hu, Y. Peng, et al.
A new deep-learning approach boosts speech emotion recognition, improving accuracy by up to 15% over prior methods and enabling real-time analysis for applications such as live TV audience monitoring. The authors, Qingqing Hu, Yiran Peng, and Zhong Zheng, present CNN- and LSTM-driven models that decode emotions such as happiness, sadness, anger, fear, surprise, and neutrality from MFCC features, with SHAP analysis used to examine gender-sensitive feature importance.
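The front end of such a pipeline converts raw audio into MFCC features before any CNN or LSTM sees it. As a rough illustration only, here is a minimal pure-Python sketch of MFCC-style extraction (framing, windowing, power spectrum, mel filterbank, log, DCT). The frame size, filter count, and coefficient count below are illustrative defaults, not the authors' settings.

```python
import math

def frame_signal(signal, frame_len, hop):
    """Split the signal into overlapping frames."""
    return [signal[s:s + frame_len]
            for s in range(0, len(signal) - frame_len + 1, hop)]

def hamming(n):
    """Hamming window to reduce spectral leakage."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def power_spectrum(frame):
    """Magnitude-squared DFT (naive O(n^2) loop, fine for a sketch)."""
    n = len(frame)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append((re * re + im * im) / n)
    return spec

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    lo, hi = hz_to_mel(0.0), hz_to_mel(sr / 2.0)
    pts = [mel_to_hz(lo + i * (hi - lo) / (n_filters + 1))
           for i in range(n_filters + 2)]
    bins = [int((n_fft + 1) * f / sr) for f in pts]
    n_bins = n_fft // 2 + 1
    filters = []
    for m in range(1, n_filters + 1):
        filt = [0.0] * n_bins
        for k in range(bins[m - 1], bins[m]):          # rising slope
            filt[k] = (k - bins[m - 1]) / max(bins[m] - bins[m - 1], 1)
        for k in range(bins[m], bins[m + 1]):          # falling slope
            filt[k] = (bins[m + 1] - k) / max(bins[m + 1] - bins[m], 1)
        filters.append(filt)
    return filters

def dct2(x, n_out):
    """DCT-II, keeping the first n_out cepstral coefficients."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * k * (t + 0.5) / n) for t in range(n))
            for k in range(n_out)]

def mfcc(signal, sr, frame_len=256, hop=128, n_filters=10, n_ceps=5):
    """One vector of n_ceps coefficients per frame (illustrative parameters)."""
    win = hamming(frame_len)
    fb = mel_filterbank(n_filters, frame_len, sr)
    feats = []
    for fr in frame_signal(signal, frame_len, hop):
        spec = power_spectrum([s * w for s, w in zip(fr, win)])
        energies = [math.log(max(sum(f * s for f, s in zip(filt, spec)), 1e-12))
                    for filt in fb]
        feats.append(dct2(energies, n_ceps))
    return feats

# Example: a 0.2 s, 440 Hz tone at 8 kHz yields one coefficient vector per frame.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(1600)]
features = mfcc(tone, sr)
```

In practice this step is done with an optimized library (FFT-based), and the resulting per-frame feature matrix is what a CNN/LSTM stack consumes as its input sequence.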
~3 min • Beginner • English