Game-theoretic LLM: Agent Workflow for Negotiation Games
Wenyue Hua, Ollie Liu, Lingyao Li, Alfonso Amayuelas, Julie Chen, Lucas Jiang, Mingyu Jin, Lizhou Fan, Fei Sun, William Wang, Xintong Wang, Yongfeng Zhang
2024-11-12

Summary
This paper examines how well large language models (LLMs) make decisions in negotiation games, using game-theoretic principles as the benchmark and focusing on whether the models follow rational strategies.
What's the problem?
As LLMs are increasingly used for tasks that require strategic thinking, such as negotiation, it is important to evaluate how effectively they follow complex instructions and make rational decisions. In practice, LLMs often fail to play optimal strategies, especially when scenarios become complicated or the game's structure changes. This inconsistency can lead to poor performance in real-world applications.
What's the solution?
To address these issues, the authors developed several game-theoretic workflows that guide LLMs' decision-making. These workflows improve the models' ability to identify optimal strategies and to compute Nash Equilibria: outcomes in which no player can benefit by unilaterally changing their strategy while the others keep theirs unchanged. Experimental results show that the workflows significantly improve the rationality of LLMs, helping them make better choices during negotiations and reducing their susceptibility to exploitation by other parties.
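To make the equilibrium concept concrete, here is a minimal sketch, not taken from the paper, that finds the pure-strategy Nash equilibria of a two-player matrix game by checking every action pair for a profitable unilateral deviation. The payoff matrices are an illustrative Prisoner's Dilemma, not the negotiation games used in the study.

```python
import numpy as np

def pure_nash_equilibria(row_payoffs, col_payoffs):
    """Return the (row, col) action pairs that are pure-strategy Nash equilibria."""
    equilibria = []
    n_rows, n_cols = row_payoffs.shape
    for r in range(n_rows):
        for c in range(n_cols):
            # Row player cannot gain by switching rows, holding column c fixed...
            row_best = row_payoffs[r, c] >= row_payoffs[:, c].max()
            # ...and column player cannot gain by switching columns, holding row r fixed.
            col_best = col_payoffs[r, c] >= col_payoffs[r, :].max()
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Illustrative Prisoner's Dilemma: action 0 = cooperate, action 1 = defect.
row = np.array([[3, 0], [5, 1]])
col = np.array([[3, 5], [0, 1]])
print(pure_nash_equilibria(row, col))  # [(1, 1)] -- mutual defection
```

The brute-force check scales with the size of the payoff matrix, which mirrors the paper's observation that LLM rationality degrades as payoff matrices grow larger.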
Why it matters?
This research is important because it not only improves how LLMs perform in strategic situations but also contributes to our understanding of their decision-making capabilities. By enhancing the rationality of these AI agents, we can develop more effective tools for negotiation and other interactive tasks, which could be beneficial in fields like business, customer service, and automated systems.
Abstract
This paper investigates the rationality of large language models (LLMs) in strategic decision-making contexts, specifically within the framework of game theory. We evaluate several state-of-the-art LLMs across a spectrum of complete-information and incomplete-information games. Our findings reveal that LLMs frequently deviate from rational strategies, particularly as the complexity of the game increases with larger payoff matrices or deeper sequential trees. To address these limitations, we design multiple game-theoretic workflows that guide the reasoning and decision-making processes of LLMs. These workflows aim to enhance the models' ability to compute Nash Equilibria and make rational choices, even under conditions of uncertainty and incomplete information. Experimental results demonstrate that the adoption of these workflows significantly improves the rationality and robustness of LLMs in game-theoretic tasks. Specifically, with the workflow, LLMs exhibit marked improvements in identifying optimal strategies, achieving near-optimal allocations in negotiation scenarios, and reducing susceptibility to exploitation during negotiations. Furthermore, we explore the meta-strategic considerations of whether it is rational for agents to adopt such workflows, recognizing that the decision to use or forgo the workflow constitutes a game-theoretic issue in itself. Our research contributes to a deeper understanding of LLMs' decision-making capabilities in strategic contexts and provides insights into enhancing their rationality through structured workflows. The findings have implications for the development of more robust and strategically sound AI agents capable of navigating complex interactive environments. Code and data supporting this study are available at https://github.com/Wenyueh/game_theory.
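As a companion illustration of the equilibrium computation referenced in the abstract, the sketch below, again not the paper's workflow, solves a 2x2 game that has no pure-strategy equilibrium via the standard indifference condition: each player randomizes so that the opponent is indifferent between their two actions. Matching Pennies serves as the example game.

```python
import numpy as np

def mixed_nash_2x2(A, B):
    """Mixed-strategy equilibrium (p, q) of a 2x2 bimatrix game.

    A and B are the row and column players' payoff matrices;
    p is the probability the row player plays row 0,
    q the probability the column player plays column 0.
    Assumes a nondegenerate game with a fully mixed equilibrium.
    """
    # q makes the row player indifferent between rows 0 and 1.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # p makes the column player indifferent between columns 0 and 1.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    return p, q

# Matching Pennies: zero-sum, no pure-strategy equilibrium.
A = np.array([[1, -1], [-1, 1]], dtype=float)
B = -A
print(mixed_nash_2x2(A, B))  # (0.5, 0.5): both players randomize uniformly
```

The uniform mix is intuitive for a zero-sum game like this one: any bias toward one action could be exploited by the opponent, which echoes the paper's emphasis on reducing LLMs' susceptibility to exploitation.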