Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning
Xinran Li, Xiujuan Xu, Jiaqi Qiao, Yu Liu
2025-11-11
Summary
This paper focuses on improving how well computers can understand emotions expressed during conversations, a field called Emotion Recognition in Conversation. It explores ways to make large language models, which are powerful AI systems, better at recognizing both obvious and subtle emotional cues in dialogue.
What's the problem?
While large language models are getting good at many language tasks, they still struggle to fully grasp the emotional nuances within a conversation. They often miss the connections between what someone *says* and what they *feel*, especially when emotions aren't directly stated. Essentially, current models aren't great at reading between the lines emotionally.
What's the solution?
The researchers developed a new training framework called PRC-Emo. It combines three techniques: emotion-sensitive prompts that guide the model to attend to both explicit and implicit emotional cues; a retrieval repository of example conversations, drawn from standard training datasets and supplemented with manually verified, AI-generated dialogues; and a curriculum learning strategy that presents training conversations in order of increasing difficulty. Difficulty is scored from the emotional shifts in a dialogue, with shifts weighted differently depending on whether consecutive utterances come from the same speaker or from different speakers.
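The easy-to-hard ordering described above can be sketched in a few lines. The paper summary does not give the exact weighting formula, so the weights `W_SAME` and `W_DIFF` and the dialogue representation below are illustrative assumptions, not the authors' actual scoring function:

```python
# Sketch of the curriculum-ordering idea: score each dialogue by its
# weighted emotional shifts, then train on dialogues easy-to-hard.
# W_SAME and W_DIFF are assumed values; the paper's real weights differ.

W_SAME = 2.0  # assumed weight when the SAME speaker shifts emotion
W_DIFF = 1.0  # assumed weight when a DIFFERENT speaker shifts emotion

def difficulty(dialogue):
    """Score a dialogue by its weighted emotional shifts.

    `dialogue` is a list of (speaker, emotion) pairs, one per utterance.
    A shift occurs when the emotion label changes between consecutive
    utterances; same-speaker shifts are weighted more heavily here.
    """
    score = 0.0
    for (prev_spk, prev_emo), (spk, emo) in zip(dialogue, dialogue[1:]):
        if emo != prev_emo:
            score += W_SAME if spk == prev_spk else W_DIFF
    return score

def curriculum_order(dialogues):
    """Sort training dialogues from easy (few shifts) to hard (many)."""
    return sorted(dialogues, key=difficulty)
```

A dialogue with no emotion changes scores 0 and is seen first; one where a single speaker swings between emotions scores highest and is seen last.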
Why it matters?
This research is important because accurately understanding emotions in conversations is crucial for creating more natural and helpful human-computer interactions. Better emotion recognition can lead to AI assistants that are more empathetic, chatbots that provide more relevant responses, and overall more seamless communication between people and machines. The new method achieved state-of-the-art results on the standard IEMOCAP and MELD benchmarks, showing it's a significant step forward in the field.
Abstract
Emotion Recognition in Conversation (ERC) is a crucial task for understanding human emotions and enabling natural human-computer interaction. Although Large Language Models (LLMs) have recently shown great potential in this field, their ability to capture the intrinsic connections between explicit and implicit emotions remains limited. We propose a novel ERC training framework, PRC-Emo, which integrates Prompt engineering, demonstration Retrieval, and Curriculum learning, with the goal of exploring whether LLMs can effectively perceive emotions in conversational contexts. Specifically, we design emotion-sensitive prompt templates based on both explicit and implicit emotional cues to better guide the model in understanding the speaker's psychological states. We construct the first dedicated demonstration retrieval repository for ERC, which includes training samples from widely used datasets, as well as high-quality dialogue examples generated by LLMs and manually verified. Moreover, we introduce a curriculum learning strategy into the LoRA fine-tuning process, incorporating weighted emotional shifts between same-speaker and different-speaker utterances to assign difficulty levels to dialogue samples, which are then organized in an easy-to-hard training sequence. Experimental results on two benchmark datasets, IEMOCAP and MELD, show that our method achieves new state-of-the-art (SOTA) performance, demonstrating the effectiveness and generalizability of our approach in improving LLM-based emotional understanding.
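The abstract describes a demonstration retrieval repository but not its retrieval mechanism. A minimal sketch of how such a repository could serve in-context examples, assuming a simple similarity-based retriever (a toy bag-of-words embedding stands in for whatever sentence encoder the actual system uses, and `retrieve_demonstrations` is a hypothetical helper, not the authors' API):

```python
# Illustrative sketch of retrieving in-context demonstrations for ERC.
# The bag-of-words embedding and cosine ranking below are assumptions;
# the real system likely uses a learned encoder.
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve_demonstrations(query_utterance, repository, k=3):
    """Return the k repository examples most similar to the query.

    `repository` is a list of (utterance, emotion_label) pairs, e.g.
    dataset samples plus manually verified LLM-generated dialogues.
    The retrieved pairs would be placed into the prompt as
    demonstrations before the utterance to be classified.
    """
    q = embed(query_utterance)
    ranked = sorted(repository,
                    key=lambda ex: cosine(q, embed(ex[0])),
                    reverse=True)
    return ranked[:k]
```

Retrieved (utterance, label) pairs would then be formatted into the emotion-sensitive prompt template as few-shot demonstrations.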