Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search

Jonathan Light, Min Cai, Weiqin Chen, Guanzhi Wang, Xiusi Chen, Wei Cheng, Yisong Yue, Ziniu Hu

2024-08-23

Summary

This paper introduces a method called Strategist, which helps AI systems learn strategic skills for playing multi-agent games by using feedback from self-play simulations and reflection.

What's the problem?

AI systems often struggle to develop effective strategies in complex games because they lack the ability to learn from their own experiences in a meaningful way. Traditional methods may not provide the necessary feedback for improvement, leading to less effective gameplay.

What's the solution?

The Strategist method uses Large Language Models (LLMs) to play games against themselves, gathering quality feedback from these self-play simulations through Monte Carlo tree search combined with LLM-based reflection. This feedback is used to learn high-level strategic skills, such as how to evaluate game states, which then guide the AI's low-level decisions and actions during gameplay. The method has been tested on multiple games and outperforms older techniques.
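To make the bi-level idea concrete, here is a minimal, illustrative sketch: Monte Carlo tree search handles low-level move selection, while a high-level state-evaluation function scores non-terminal positions. In Strategist that evaluation would be a skill learned and refined by the LLM through self-play and reflection; here it is a hand-written stand-in, and the game is a toy Nim variant rather than GOPS or Avalon. All names and details below are assumptions for illustration, not the paper's actual implementation.

```python
import math
import random

MOVES = (1, 2, 3)  # Nim variant: remove 1-3 stones; taking the last stone wins

def legal_moves(stones):
    return [m for m in MOVES if m <= stones]

def value_estimate(stones):
    """Stand-in for a learned high-level state-evaluation skill (hypothetical).
    Returns a value for the player about to move: in this Nim variant,
    multiples of 4 are losing positions for the mover."""
    return -1.0 if stones % 4 == 0 else 1.0

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones
        self.parent = parent
        self.move = move          # move that led here from the parent
        self.children = []
        self.visits = 0
        self.total = 0.0          # value summed from the viewpoint of the mover here

def ucb(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    # The child's value is from the child mover's view; negate it for the parent.
    return -child.total / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts(root_stones, iterations=300):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded
        while node.children and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children, key=lambda ch: ucb(ch, node.visits))
        # Expansion: try one untried move, if any remain
        if node.stones > 0:
            tried = {ch.move for ch in node.children}
            untried = [m for m in legal_moves(node.stones) if m not in tried]
            if untried:
                m = random.choice(untried)
                child = Node(node.stones - m, parent=node, move=m)
                node.children.append(child)
                node = child
        # Evaluation: terminal loss for a mover with no stones left, else the heuristic
        value = -1.0 if node.stones == 0 else value_estimate(node.stones)
        # Backpropagation, flipping the sign at each ply (zero-sum game)
        while node is not None:
            node.visits += 1
            node.total += value
            value = -value
            node = node.parent
    # Play the most-visited move from the root
    return max(root.children, key=lambda ch: ch.visits).move
```

In the full method, statistics gathered from searches like this, together with the LLM's reflections on won and lost simulations, would feed back into improving `value_estimate` itself, closing the self-improvement loop.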

Why it matters?

This research is important because it demonstrates how AI can improve its strategic thinking in games, which could be applied to real-world situations like decision-making in business or other complex environments. By enhancing the learning process of AI, we can create smarter systems that can adapt and perform better over time.

Abstract

In this paper, we propose a new method, Strategist, that utilizes LLMs to acquire new skills for playing multi-agent games through a self-improvement process. Our method gathers quality feedback through self-play simulations with Monte Carlo tree search and LLM-based reflection, which can then be used to learn high-level strategic skills, such as how to evaluate states, that guide the low-level execution. We showcase how our method can be used in both action planning and dialogue generation in the context of games, achieving good performance on both tasks. Specifically, we demonstrate that our method can help train agents with better performance than both traditional reinforcement learning-based approaches and other LLM-based skill learning approaches in games including the Game of Pure Strategy (GOPS) and The Resistance: Avalon.