Towards Self-Improvement of LLMs via MCTS: Leveraging Stepwise Knowledge with Curriculum Preference Learning
Xiyao Wang, Linfeng Song, Ye Tian, Dian Yu, Baolin Peng, Haitao Mi, Furong Huang, Dong Yu
2024-10-13

Summary
This paper presents AlphaLLM-CPL, a new method that helps large language models (LLMs) improve their reasoning abilities by using a technique called Monte Carlo Tree Search (MCTS).
What's the problem?
While LLMs have shown great potential in reasoning tasks, existing methods for improving their performance often do not fully utilize the rich, step-level trajectory information generated during MCTS. As a result, LLMs learn less from the search process than they could, which limits how much their reasoning actually improves.
What's the solution?
The authors propose a training framework called AlphaLLM-CPL that lets LLMs learn from the trajectories produced by MCTS. The method has two main innovations: first, it constructs stepwise trajectory pairs by comparing child nodes that share the same parent in the search tree, giving the model step-level preference signals rather than only whole-trajectory feedback; second, it uses curriculum preference learning, which reorders the trajectory pairs in each offline training epoch so the most critical ones are learned first. This focuses training on the most informative comparisons and reduces the risk of overfitting, which can happen when a model learns too much from specific examples without generalizing well.
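To make the first idea concrete, below is a minimal sketch of how stepwise trajectory pairs could be extracted from an MCTS search tree. The `Node` structure, the value-gap threshold `min_gap`, and the function name are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: building (preferred, dispreferred) step-level pairs
# from sibling nodes in an MCTS search tree. Not the authors' exact code.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Node:
    """Assumed MCTS node: a partial reasoning trace plus its value estimate."""
    text: str                                   # reasoning steps accumulated so far
    value: float                                # value estimate from rollouts / reward model
    children: List["Node"] = field(default_factory=list)


def collect_stepwise_pairs(root: Node, min_gap: float = 0.1) -> List[Tuple[str, str]]:
    """Pair sibling nodes (same parent) whose value estimates differ by at
    least `min_gap`, yielding (preferred, dispreferred) trajectory pairs."""
    pairs: List[Tuple[str, str]] = []
    stack = [root]
    while stack:
        node = stack.pop()
        # Compare every pair of children under the same parent.
        kids = sorted(node.children, key=lambda n: n.value, reverse=True)
        for i, better in enumerate(kids):
            for worse in kids[i + 1:]:
                if better.value - worse.value >= min_gap:
                    pairs.append((better.text, worse.text))
        stack.extend(node.children)
    return pairs
```

Because the pairs differ only at the step where the two siblings diverge, each pair isolates a single reasoning decision, which is what gives the distillation its step-level signal.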
Why it matters?
This research is significant because it demonstrates a more effective way for LLMs to enhance their reasoning skills. By leveraging MCTS and focusing on stepwise learning, AlphaLLM-CPL shows that LLMs can significantly improve their performance on complex reasoning tasks, making them more useful for applications like problem-solving and decision-making in various fields.
Abstract
Monte Carlo Tree Search (MCTS) has recently emerged as a powerful technique for enhancing the reasoning capabilities of LLMs. Techniques such as supervised fine-tuning (SFT) or direct preference optimization (DPO) have enabled LLMs to distill high-quality behaviors from MCTS, improving their reasoning performance. However, existing distillation methods underutilize the rich trajectory information generated by MCTS, limiting the potential for improvements in LLM reasoning. In this paper, we propose AlphaLLM-CPL, a novel pairwise training framework that enables LLMs to self-improve through MCTS behavior distillation. AlphaLLM-CPL efficiently leverages MCTS trajectories via two key innovations: (1) AlphaLLM-CPL constructs stepwise trajectory pairs from child nodes sharing the same parent in the search tree, providing step-level information for more effective MCTS behavior distillation. (2) AlphaLLM-CPL introduces curriculum preference learning, dynamically adjusting the training sequence of trajectory pairs in each offline training epoch to prioritize critical learning steps and mitigate overfitting. Experimental results on mathematical reasoning tasks demonstrate that AlphaLLM-CPL significantly outperforms previous MCTS behavior distillation methods, substantially boosting the reasoning capabilities of LLMs.
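The curriculum component described in the abstract, reordering trajectory pairs each offline epoch, could be sketched as follows. The ranking criterion used here (the current policy's log-probability margin between the preferred and dispreferred trajectory) and the helper names `curriculum_order`, `logp`, and `dpo_step` are assumptions for illustration; the paper's actual scheduling rule and preference loss may differ.

```python
# Hedged sketch of curriculum-style reordering of preference pairs between
# offline epochs; the margin-based criterion is an illustrative assumption.
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (preferred_trajectory, dispreferred_trajectory)


def curriculum_order(pairs: List[Pair],
                     logp: Callable[[str], float]) -> List[Pair]:
    """Sort pairs so that those the current policy gets most wrong
    (smallest log-prob margin for the preferred trajectory) come first."""
    return sorted(pairs, key=lambda p: logp(p[0]) - logp(p[1]))


def train_offline(pairs: List[Pair],
                  logp: Callable[[str], float],
                  dpo_step: Callable[[Pair], None],
                  epochs: int = 3) -> None:
    """Each offline epoch re-ranks the trajectory pairs before running a
    preference-learning update (e.g., a DPO-style loss step) on every pair."""
    for _ in range(epochs):
        for pair in curriculum_order(pairs, logp):
            dpo_step(pair)  # one gradient step on this (preferred, dispreferred) pair
```

Because `logp` reflects the policy as it is updated, the ordering shifts from epoch to epoch, which is the "dynamic adjustment" the abstract refers to: pairs the model already handles well drift toward the end of the schedule, reducing repeated fitting to easy examples.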