
SPIN-Bench: How Well Do LLMs Plan Strategically and Reason Socially?

Jianzhu Yao, Kevin Wang, Ryan Hsieh, Haisu Zhou, Tianqing Zou, Zerui Cheng, Zhangyang Wang, Pramod Viswanath

2025-03-18


Summary

This paper introduces SPIN-Bench, a new benchmark for testing how well AI models can plan strategically and reason socially across a variety of scenarios.

What's the problem?

Existing AI benchmarks often focus on isolated, static tasks, like math problems, and so fail to measure how well AI can plan strategically or navigate social interactions with other agents.

What's the solution?

The researchers created SPIN-Bench, a benchmark that includes different types of tasks like classic planning puzzles, competitive board games, cooperative card games, and negotiation scenarios. This framework tests how well AI can make decisions, understand other agents, and adapt to different social situations.
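To make the idea of "systematically varying" task dimensions concrete, here is a minimal sketch of how such a benchmark suite might be parameterized. The task names, numbers, and the `TaskSpec`/`requires_social_reasoning` helpers are illustrative assumptions, not part of the actual SPIN-Bench implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    SOLO = "solo"                # classic planning puzzles (e.g. PDDL tasks)
    COMPETITIVE = "competitive"  # competitive board games
    COOPERATIVE = "cooperative"  # cooperative card games
    NEGOTIATION = "negotiation"  # multi-party negotiation scenarios

@dataclass(frozen=True)
class TaskSpec:
    """One benchmark task, pinned down by the axes the paper varies."""
    name: str
    interaction: Interaction
    num_agents: int        # number of interacting agents
    action_space: int      # rough size of the per-turn action set
    state_complexity: int  # rough size of the reachable state space

def requires_social_reasoning(task: TaskSpec) -> bool:
    # Once more than one agent acts, success depends on inferring
    # the intentions of other (adversarial or cooperative) participants.
    return task.num_agents > 1

# Hypothetical suite spanning the four scenario families; sizes are
# order-of-magnitude guesses for illustration only.
SUITE = [
    TaskSpec("blocks-world", Interaction.SOLO, 1, 10, 100),
    TaskSpec("chess", Interaction.COMPETITIVE, 2, 35, 10**6),
    TaskSpec("hanabi", Interaction.COOPERATIVE, 3, 20, 10**4),
    TaskSpec("diplomacy", Interaction.NEGOTIATION, 7, 100, 10**9),
]

social_tasks = [t.name for t in SUITE if requires_social_reasoning(t)]
print(social_tasks)  # ['chess', 'hanabi', 'diplomacy']
```

Framing each task this way makes it easy to hold two axes fixed (say, action space and state complexity) while sweeping the third (number of agents), which is how one can separate pure planning ability from social reasoning.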

Why it matters?

This work matters because it provides a more comprehensive way to evaluate AI's ability to reason and plan in complex social environments, which is crucial for developing AI that can effectively interact with humans and solve real-world problems.

Abstract

Reasoning and strategic behavior in social interactions are a hallmark of intelligence. This form of reasoning is significantly more sophisticated than isolated planning or reasoning tasks in static settings (e.g., math problem solving). In this paper, we present Strategic Planning, Interaction, and Negotiation (SPIN-Bench), a new multi-domain evaluation designed to measure the intelligence of strategic planning and social reasoning. While many existing benchmarks focus on narrow planning or single-agent reasoning, SPIN-Bench combines classical PDDL tasks, competitive board games, cooperative card games, and multi-agent negotiation scenarios in one unified framework. The framework includes both a benchmark and an arena to simulate and evaluate a variety of social settings to test the reasoning and strategic behavior of AI agents. We formulate the benchmark SPIN-Bench by systematically varying action spaces, state complexity, and the number of interacting agents to simulate a variety of social settings where success depends not only on methodical, step-wise decision making, but also on conceptual inference about other (adversarial or cooperative) participants. Our experiments reveal that while contemporary LLMs handle basic fact retrieval and short-range planning reasonably well, they encounter significant performance bottlenecks in tasks requiring deep multi-hop reasoning over large state spaces and socially adept coordination under uncertainty. We envision SPIN-Bench as a catalyst for future research on robust multi-agent planning, social reasoning, and human-AI teaming.