
Human-like Affective Cognition in Foundation Models

Kanishk Gandhi, Zoe Lynch, Jan-Philipp Fränken, Kayla Patterson, Sharon Wambu, Tobias Gerstenberg, Desmond C. Ong, Noah D. Goodman

2024-09-18


Summary

This paper introduces a framework for evaluating affective cognition, the ability to reason about human emotions, in AI models by comparing their responses to those of human participants across diverse scenarios.

What's the problem?

Understanding emotions is crucial for effective human interaction, but it's unclear how well modern AI models can interpret emotions and situations like humans do. Previous studies have not systematically tested these abilities or defined the different types of emotional inferences that AI should be able to make.

What's the solution?

The researchers developed an evaluation framework, grounded in psychological theory, that generates diverse scenarios for testing AI models (GPT-4, Claude-3, and Gemini-1.5-Pro) alongside human participants. They created 1,280 situations exploring the relationships between appraisals, emotions, facial expressions, and outcomes. Comparing the models' responses with those of 567 human participants, they found that the models matched or exceeded interparticipant agreement when predicting emotional responses; in some conditions, a model predicted the most common human judgement better than the average individual participant did. All models also improved with chain-of-thought reasoning.
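To make the comparison concrete, here is a minimal sketch of how model-versus-human agreement might be scored. The scenario data, labels, and scoring are hypothetical illustrations, not the paper's actual pipeline.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-scenario emotion labels chosen by human participants
# and by a model; the paper's real scenarios and response format differ.
human_choices = {
    "scenario_1": ["joy", "joy", "relief", "joy"],
    "scenario_2": ["anger", "anger", "disgust", "anger"],
}
model_choices = {"scenario_1": "joy", "scenario_2": "disgust"}

def modal_label(labels):
    """Return the most common label among the human participants."""
    return Counter(labels).most_common(1)[0][0]

def interparticipant_agreement(labels):
    """Return the fraction of human pairs who chose the same label."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

for scenario, labels in human_choices.items():
    matches_modal = model_choices[scenario] == modal_label(labels)
    print(
        f"{scenario}: model matches modal human judgement: {matches_modal}; "
        f"mean human-human agreement: {interparticipant_agreement(labels):.2f}"
    )
```

On this toy data, the model matches the modal judgement on scenario_1 but not scenario_2; a model counts as "superhuman" in the paper's sense when it matches the modal judgement more often than an average individual participant does.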

Why it matters?

This research is important because it suggests that advanced AI models can understand and predict human emotions much as people do. That capability could support the development of more empathetic AI systems, improving interactions in applications such as customer service, mental health support, and social robotics.

Abstract

Understanding emotions is fundamental to human interaction and experience. Humans easily infer emotions from situations or facial expressions, situations from emotions, and perform a variety of other acts of affective cognition. How adept is modern AI at these inferences? We introduce an evaluation framework for testing affective cognition in foundation models. Starting from psychological theory, we generate 1,280 diverse scenarios exploring relationships between appraisals, emotions, expressions, and outcomes. We evaluate the abilities of foundation models (GPT-4, Claude-3, Gemini-1.5-Pro) and humans (N = 567) across carefully selected conditions. Our results show foundation models tend to agree with human intuitions, matching or exceeding interparticipant agreement. In some conditions, models are "superhuman": they better predict modal human judgements than the average human. All models benefit from chain-of-thought reasoning. This suggests foundation models have acquired a human-like understanding of emotions and their influence on beliefs and behavior.
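As an illustration of the chain-of-thought prompting the abstract refers to, a query in this kind of evaluation might look like the sketch below. The scenario text and prompt wording are assumptions for illustration, not the paper's actual template.

```python
# Hypothetical emotion-inference query with chain-of-thought elicitation.
# The scenario and instructions are illustrative, not the paper's template.
scenario = (
    "Alex studied for weeks, and every question on the exam covered "
    "material he had practiced."
)
prompt = (
    f"Scenario: {scenario}\n"
    "Question: What emotion is Alex most likely feeling?\n"
    "Think step by step about Alex's goals and expectations and how the "
    "outcome compares to them, then answer with a single emotion word."
)
print(prompt)
```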