SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models

Jiale Cheng, Xiao Liu, Cunxiang Wang, Xiaotao Gu, Yida Lu, Dan Zhang, Yuxiao Dong, Jie Tang, Hongning Wang, Minlie Huang

2024-12-17

Summary

This paper introduces SPaR, a new method that helps large language models (LLMs) improve their ability to follow instructions accurately by using a self-play technique combined with tree-search refinement.

What's the problem?

Many existing methods teach LLMs to follow instructions by sampling multiple independent responses from the model and building preference pairs from them. Independently sampled responses differ in many ways that have nothing to do with whether the instruction was followed, such as different wordings of the same content. These irrelevant variations make it harder for the model to learn the specific differences that actually matter for accurate instruction-following.

What's the solution?

SPaR addresses this with a self-play framework in which the model takes on two roles: an actor that generates responses and a refiner that critiques and improves them. Guided by the refiner's critiques, a tree-search strategy revises the actor's response until it satisfies the instruction while changing as little else as possible. The original response and its refinement then form a preference pair that differs mainly in instruction compliance, and repeating this process over several iterations lets the model learn directly from its own mistakes.
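
To make the self-play loop concrete, here is a minimal Python sketch of how the actor/refiner interplay and tree-search refinement could produce contrastive training pairs. The actor and refiner objects and their methods (generate, judge, refine) are hypothetical stand-ins for illustration, not the interfaces from the paper's released code.

```python
# Minimal sketch of one SPaR-style self-play data-collection step.
# `actor` and `refiner` are hypothetical objects standing in for the two roles;
# their methods (generate, judge, refine) are illustrative, not the paper's API.

def tree_search_refine(instruction, response, refiner, depth=3, width=2):
    """Breadth-first search over candidate refinements until one follows the instruction."""
    frontier = [response]
    for _ in range(depth):
        next_frontier = []
        for candidate in frontier:
            verdict = refiner.judge(instruction, candidate)   # critique + pass/fail judgment
            if verdict.follows_instruction:
                return candidate                              # refined response that satisfies the instruction
            # Expand: ask the refiner for `width` targeted fixes that change as little else as possible.
            next_frontier += refiner.refine(instruction, candidate, verdict.critique, n=width)
        frontier = next_frontier
    return None                                               # no valid refinement within the search budget


def self_play_iteration(instructions, actor, refiner):
    """Collect (instruction, rejected, chosen) pairs that differ mainly in instruction compliance."""
    preference_pairs = []
    for instruction in instructions:
        draft = actor.generate(instruction)
        if refiner.judge(instruction, draft).follows_instruction:
            continue                                          # already correct: no contrastive signal here
        refined = tree_search_refine(instruction, draft, refiner)
        if refined is not None:
            preference_pairs.append((instruction, draft, refined))
    return preference_pairs
```

Because the chosen response is a refinement of the rejected one rather than an independent sample, each pair differs mainly in whether the instruction is satisfied, which is the point of the tree-search step.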

Why it matters?

SPaR matters because it shows that LLMs can sharpen their instruction-following without sacrificing general skills: a LLaMA3-8B model trained with SPaR for three iterations surpasses GPT-4-Turbo on the IFEval benchmark, and the method also improves larger models such as GLM-4-9B and LLaMA3-70B. Reliable instruction-following is essential for applications such as customer service and education, where models must execute complex instructions precisely.

Abstract

Instruction-following is a fundamental capability of language models, requiring the model to recognize even the most subtle requirements in the instructions and accurately reflect them in its output. Such an ability is well-suited for and often optimized by preference learning. However, existing methods often directly sample multiple independent responses from the model when creating preference pairs. Such practice can introduce content variations irrelevant to whether the instruction is precisely followed (e.g., different expressions about the same semantic), interfering with the goal of teaching models to recognize the key differences that lead to improved instruction following. In light of this, we introduce SPaR, a self-play framework integrating tree-search self-refinement to yield valid and comparable preference pairs free from distractions. By playing against itself, an LLM employs a tree-search strategy to refine its previous responses with respect to the instruction while minimizing unnecessary variations. Our experiments show that a LLaMA3-8B model, trained over three iterations guided by SPaR, surpasses GPT-4-Turbo on the IFEval benchmark without losing general capabilities. Furthermore, SPaR demonstrates promising scalability and transferability, greatly enhancing models like GLM-4-9B and LLaMA3-70B. We also identify how inference scaling in tree search would impact model performance. Our code and data are publicly available at https://github.com/thu-coai/SPaR.
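
As a rough illustration of the three-iteration training procedure mentioned in the abstract, the sketch below reuses self_play_iteration from the earlier sketch. preference_update and refiner_update are assumed placeholders for preference learning on the collected pairs (e.g., a DPO-style update); retraining the refiner each round is an assumption made here for illustration, not a detail stated in the abstract.

```python
# Rough sketch of iterative SPaR training (three iterations, per the abstract),
# reusing self_play_iteration from the earlier sketch. preference_update and
# refiner_update are assumed placeholders, not the released code's interfaces.

def spar_training(actor, refiner, instructions, iterations=3):
    for _ in range(iterations):
        pairs = self_play_iteration(instructions, actor, refiner)  # (instruction, rejected, chosen)
        actor = preference_update(actor, pairs)        # teach the actor to prefer the refined responses
        refiner = refiner_update(refiner, pairs)       # assumed: improve judging/refining from the same rollouts
    return actor, refiner
```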