Affect Recognition in Conversations Using Large Language Models

Computer Science

S. Feng, G. Sun, et al.

This study, by Shutong Feng, Guangzhi Sun, Nurul Lubis, Wen Wu, Chao Zhang, and Milica Gašić, evaluates large language models' ability to recognise emotions, moods, and feelings across open-domain and task-oriented dialogues using IEMOCAP, EmoWOZ, and DAIC-WOZ. It explores zero-shot and few-shot in-context learning, task-specific fine-tuning, and the impact of automatic speech recognition (ASR) errors on LLM predictions.

Abstract
Affect recognition, encompassing emotions, moods, and feelings, plays a pivotal role in human communication. In the realm of conversational artificial intelligence, the ability to discern and respond to human affective cues is a critical factor for creating engaging and empathetic interactions. This study investigates the capacity of large language models (LLMs) to recognise human affect in conversations, with a focus on both open-domain chit-chat dialogues and task-oriented dialogues. Leveraging three diverse datasets, namely IEMOCAP (Busso et al., 2008), EmoWOZ (Feng et al., 2022), and DAIC-WOZ (Gratch et al., 2014), covering a spectrum of dialogues from casual conversations to clinical interviews, we evaluate and compare LLMs’ performance in affect recognition. Our investigation explores the zero-shot and few-shot capabilities of LLMs through in-context learning as well as their model capacities through task-specific fine-tuning. Additionally, this study takes into account the potential impact of automatic speech recognition errors on LLM predictions. With this work, we aim to shed light on the extent to which LLMs can replicate human-like affect recognition capabilities in conversations.
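The zero-shot and few-shot in-context learning setup described above can be sketched as prompt construction for utterance-level emotion classification. This is an illustrative sketch only: the label set, prompt wording, and helper names below are assumptions for demonstration, not the paper's actual prompts or labels.

```python
# Illustrative sketch of zero-shot and few-shot prompt construction for
# emotion recognition in dialogue. The coarse label set and the prompt
# template are assumptions, not taken from the paper.

LABELS = ["neutral", "happy", "sad", "angry"]  # assumed label set


def zero_shot_prompt(dialogue_history, utterance):
    """Build a zero-shot prompt: task instruction + context + target utterance."""
    context = "\n".join(dialogue_history)
    return (
        "Classify the emotion of the last utterance as one of: "
        f"{', '.join(LABELS)}.\n"
        f"Dialogue context:\n{context}\n"
        f"Utterance: {utterance}\n"
        "Emotion:"
    )


def few_shot_prompt(examples, dialogue_history, utterance):
    """Prepend labelled (utterance, label) demonstrations for in-context learning."""
    demos = "\n".join(f"Utterance: {u}\nEmotion: {l}" for u, l in examples)
    return demos + "\n\n" + zero_shot_prompt(dialogue_history, utterance)
```

In the few-shot case, the demonstrations give the model the label inventory and output format implicitly; the zero-shot prompt must state both explicitly in the instruction.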
Publisher
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Published On
Sep 18, 2024
Authors
Shutong Feng, Guangzhi Sun, Nurul Lubis, Wen Wu, Chao Zhang, Milica Gašić
Tags
Affect recognition
Large language models
Conversational AI
In-context learning
Fine-tuning
IEMOCAP / EmoWOZ / DAIC-WOZ
Automatic speech recognition errors