Patience Is The Key to Large Language Model Reasoning
Yijiong Yu
2024-11-22

Summary
This paper discusses how to improve the reasoning abilities of large language models (LLMs) by encouraging them to take their time and provide detailed answers instead of rushing to simple conclusions.
What's the problem?
Many LLMs struggle with complex reasoning tasks because they often prioritize quick responses over thorough thinking. This can lead to incorrect answers, especially in complicated situations where detailed reasoning is necessary. Additionally, training these models to reason well usually requires a lot of data and resources, which can be expensive.
What's the solution?
The authors propose a method that encourages LLMs to adopt a 'patient' reasoning style. They generate detailed reasoning processes as positive examples and simpler, less detailed answers as negative examples, then use preference optimization to train the model to favor thoroughness in its responses. Because this approach reuses the model's existing knowledge rather than teaching it new skills, it does not require extensive new training data. Their experiments show accuracy gains of up to 6.7% on GSM8k from training on only a lightweight dataset.
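To make the idea concrete, here is a minimal sketch of how such preference pairs could be assembled. It is an illustrative assumption, not the authors' exact pipeline: the prompt wording, the `generate` callable, and the field names (`prompt`, `chosen`, `rejected`, the common format for preference-optimization datasets) are all placeholders.

```python
# Hypothetical sketch: pair a detailed ("patient") solution with a terse one
# for the same question. The prompts and the `generate` callable (any
# text-completion function) are illustrative assumptions.

def build_preference_pair(question, generate):
    """Return one preference record: detailed reasoning is 'chosen',
    a terse direct answer is 'rejected'."""
    detailed = generate(
        f"Question: {question}\n"
        "Solve this step by step, explaining every intermediate step in detail."
    )
    terse = generate(
        f"Question: {question}\n"
        "Give only the final answer, as briefly as possible."
    )
    return {"prompt": question, "chosen": detailed, "rejected": terse}

def build_dataset(questions, generate):
    # Collect pairs for a list of questions into a dataset suitable for
    # preference optimization (e.g., DPO).
    return [build_preference_pair(q, generate) for q in questions]
```

Under this framing, no new problem-solving knowledge is injected; the pairs only shift the model's preference toward the longer, more thorough style of answer it can already produce.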
Why it matters?
This research is important because it offers a simpler way to enhance the reasoning capabilities of LLMs, making them more effective at solving complex problems. By improving how these models think through questions, we can create AI systems that provide more accurate and reliable answers, which is crucial for applications in education, healthcare, and many other fields.
Abstract
Recent advancements in the field of large language models, particularly through the Chain of Thought (CoT) approach, have demonstrated significant improvements in solving complex problems. However, existing models either tend to sacrifice detailed reasoning for brevity due to user preferences, or require extensive and expensive training data to learn complicated reasoning abilities, limiting their potential for solving complex tasks. To bridge this gap, and following the concept of test-time scaling, we propose a simple method that encourages models to adopt a more patient reasoning style without introducing new knowledge or skills. Employing a preference optimization approach, we generate detailed reasoning processes as positive examples and simple answers as negative examples, thereby training the model to favor thoroughness in its responses. Our results demonstrate a performance increase of up to 6.7% on GSM8k with training on only a lightweight dataset.
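The abstract names preference optimization without specifying the algorithm. Below is a hedged training sketch assuming DPO via the Hugging Face trl library; the base model name and the single hand-written example record are placeholders, and exact trainer arguments vary across trl versions.

```python
# Hedged sketch: DPO fine-tuning on "patient vs. terse" preference pairs
# using the Hugging Face `trl` library. DPO is assumed here as the
# preference-optimization algorithm; the model name and example record
# are placeholders.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Each record pairs a detailed solution (chosen) against a terse one
# (rejected), e.g. produced by the build_preference_pair sketch above.
train_dataset = Dataset.from_list([
    {
        "prompt": "A shop sells 48 apples in the morning and half as many in the afternoon. How many apples does it sell in total?",
        "chosen": "Morning sales are 48. Afternoon sales are half of 48, which is 24. Total = 48 + 24 = 72. Answer: 72.",
        "rejected": "72.",
    },
])

config = DPOConfig(
    output_dir="patient-reasoner",
    beta=0.1,                        # strength of the preference constraint
    per_device_train_batch_size=1,
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,                     # reference model is derived automatically
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # named `tokenizer=` in older trl versions
)
trainer.train()
```

Because the positive and negative responses differ mainly in length and level of detail, the optimization pushes the model toward longer, more deliberate reasoning at inference time, which is the "patience" the title refers to.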