Humans expect rationality and cooperation from LLM opponents in strategic games
Darija Barak, Miguel Costa-Gomes
2025-05-19
Summary
This paper examines how people behave differently when playing strategic games against large language models (LLMs) rather than against other humans, particularly whether they expect the AI to act rationally and cooperatively.
What's the problem?
We don't fully understand how humans change their strategies and expectations when they know their opponent is an AI, which could affect the fairness and outcomes of games, negotiations, and other strategic interactions.
What's the solution?
The researchers had people play a game called the p-beauty contest against LLM opponents and found that humans chose lower numbers than they typically do against other people, reflecting an expectation that the AI would reason logically and play cooperatively.
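As a refresher on the game itself: in a p-beauty contest, each player picks a number in a fixed range (commonly 0 to 100), and whoever is closest to p times the average of all choices wins; iterated reasoning about other players' reasoning drives choices toward the Nash equilibrium of zero. A minimal sketch of the payoff rule, assuming the common p = 2/3 variant (the paper's exact parameters are not given here):

```python
def beauty_contest_winners(choices, p=2/3):
    """Return indices of players whose choice is closest to p * mean(choices).

    Ties are all returned; in practice a tie is usually broken randomly.
    """
    target = p * sum(choices) / len(choices)
    best = min(abs(c - target) for c in choices)
    return [i for i, c in enumerate(choices) if abs(c - target) == best]

# Hypothetical round: mean = 26.25, target = (2/3) * 26.25 = 17.5,
# so the player who chose 22 wins.
print(beauty_contest_winners([50, 33, 22, 0]))  # → [2]
```

The pull toward zero comes from iterating this logic: if everyone is expected to choose x, the best response is p·x, and repeating that argument (x, p·x, p²·x, …) converges to the unique Nash equilibrium of 0, which is exactly the choice the paper reports humans make more often against LLMs.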
Why it matters?
This matters because it shows that people treat AI opponents differently, which could impact how we design AI for games, negotiations, or any situation where humans and AI interact strategically.
Abstract
Human subjects in a p-beauty contest choose lower numbers when playing against LLMs than against humans, an effect driven by more frequent choices of the Nash-equilibrium value of zero; this highlights differences in how humans interact with and reason strategically about AI opponents.