A Stitch in Time Saves Nine: Proactive Self-Refinement for Language Models
Jinyi Han, Xinyi Wang, Haiquan Zhao, Tingyun Li, Zishang Jiang, Sihang Jiang, Jiaqing Liang, Xin Lin, Weikang Zhou, Zeye Sun, Fei Yu, Yanghua Xiao
2025-08-20

Summary
This paper introduces a new way for AI language models to improve their own writing as they generate it, called ProActive Self-Refinement (PASR), which is shown to be more efficient and accurate than previous methods.
What's the problem?
Existing AI language models typically fix their mistakes only after they've finished writing, often by regenerating the entire response, and they run a fixed number of refinement rounds rather than judging the best moment to revise. This wastes computation and can still leave errors uncorrected.
What's the solution?
The researchers developed ProActive Self-Refinement (PASR), a method where the AI model intelligently decides when and how to change its writing during the writing process itself, instead of waiting until the end or just rewriting everything. It's like a writer revising their thoughts as they go.
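The idea of interleaving refinement decisions with generation can be sketched in a few lines. This is a minimal, hypothetical illustration of the control flow only: the function names (`generate_step`, `should_refine`, `refine_segment`) and the token-count trigger are stand-ins invented for this sketch, not the paper's actual mechanism, which decides based on the model's internal state.

```python
def generate_step(context):
    """Toy stand-in for one decoding step: produce the next 'token'."""
    return f"tok{len(context)}"

def should_refine(context):
    """Toy trigger: refine whenever the draft reaches a multiple of 3 tokens.
    (In PASR this decision comes from the model itself, not a fixed rule.)"""
    return len(context) > 0 and len(context) % 3 == 0

def refine_segment(context):
    """Toy refinement: revise the most recent token in place."""
    context[-1] = context[-1] + "*"
    return context

def proactive_generate(max_tokens=6):
    """Interleave generation with refinement decisions, instead of
    refining only after the full response has been produced."""
    context = []
    for _ in range(max_tokens):
        context.append(generate_step(context))
        if should_refine(context):
            context = refine_segment(context)
    return context
```

The contrast with reactive self-refinement is in the loop structure: a reactive method would call `refine_segment` on the finished output a fixed number of times, whereas here the refinement check runs at every step of generation.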
Why it matters?
This advancement is important because it makes AI language models smarter and more efficient, leading to better problem-solving abilities and reduced wasted computational resources, which is a big deal for making AI more practical.
Abstract
Recent advances in self-refinement have demonstrated significant potential for improving the outputs of large language models (LLMs) through iterative refinement. However, most existing self-refinement methods rely on a reactive process with a fixed number of iterations, making it difficult to determine the optimal timing and content of refinement based on the evolving generation context. Inspired by the way humans dynamically refine their thoughts during execution, we propose ProActive Self-Refinement (PASR), a novel method that enables LLMs to refine their outputs during the generation process. Unlike methods that regenerate entire responses, PASR proactively decides whether, when, and how to refine based on the model's internal state and evolving context. We conduct extensive experiments on a diverse set of 10 tasks to evaluate the effectiveness of PASR. Experimental results show that PASR significantly enhances problem-solving performance. In particular, on Qwen3-8B, PASR reduces average token consumption by 41.6 percent compared to standard generation, while also achieving an 8.2 percent improvement in accuracy. Our code and all baselines used in the paper are available on GitHub.