Reverse Thinking Makes LLMs Stronger Reasoners

Justin Chih-Yao Chen, Zifeng Wang, Hamid Palangi, Rujun Han, Sayna Ebrahimi, Long Le, Vincent Perot, Swaroop Mishra, Mohit Bansal, Chen-Yu Lee, Tomas Pfister

2024-12-02

Summary

This paper introduces RevThink, a new framework that helps large language models (LLMs) improve their reasoning skills by incorporating reverse thinking, which is the ability to reason from a solution back to a problem.

What's the problem?

Humans often use reverse thinking to enhance their reasoning abilities, allowing them to check their answers and improve their understanding of problems. However, LLMs typically focus only on forward reasoning—from problem to solution—which can limit their effectiveness in complex reasoning tasks.

What's the solution?

RevThink addresses this issue by augmenting the training data with both forward and backward reasoning. From a teacher model, it collects structured examples consisting of the original question, the forward reasoning, a backward question, and the backward reasoning. A smaller student model is then trained with three objectives: generating forward reasoning from a question, generating a backward question from the original question, and generating backward reasoning from that backward question. This multi-task approach teaches the model to reason in both directions.
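
To make the three objectives concrete, here is a minimal sketch of how one teacher-augmented record could be turned into the three multi-task training instances. The field names and prompt templates are illustrative assumptions, not the paper's exact format:

```python
# Sketch: building RevThink-style multi-task training instances from one
# teacher-augmented record. Field names and prompt templates are illustrative.

def build_training_instances(record):
    """record holds the four pieces RevThink collects from the teacher model:
    the original question, forward reasoning, a backward question, and
    backward reasoning."""
    q = record["question"]
    r_fwd = record["forward_reasoning"]
    q_bwd = record["backward_question"]
    r_bwd = record["backward_reasoning"]

    return [
        # (a) forward reasoning from the original question
        {"input": f"Question: {q}\nAnswer with reasoning:", "target": r_fwd},
        # (b) backward question from the original question
        {"input": f"Question: {q}\nWrite the reverse question:", "target": q_bwd},
        # (c) backward reasoning from the backward question
        {"input": f"Question: {q_bwd}\nAnswer with reasoning:", "target": r_bwd},
    ]


# Hypothetical augmented record for a simple math word problem
record = {
    "question": "Tom has 3 apples and buys 5 more. How many apples does he have?",
    "forward_reasoning": "3 + 5 = 8, so Tom has 8 apples.",
    "backward_question": "Tom has 8 apples after buying 5 more. How many did he start with?",
    "backward_reasoning": "8 - 5 = 3, so Tom started with 3 apples.",
}

for instance in build_training_instances(record):
    print(instance["input"], "->", instance["target"])
```

Each augmented record thus yields three supervised examples, which are mixed together when fine-tuning the student in a multi-task fashion.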

Why it matters?

This research is important because it significantly enhances the reasoning capabilities of LLMs, making them more effective at solving complex problems. By improving how these models reason, RevThink can lead to better performance in various applications like education, AI assistance, and decision-making tools, ultimately helping users get more accurate and reliable answers.

Abstract

Reverse thinking plays a crucial role in human reasoning. Humans can reason not only from a problem to a solution but also in reverse, i.e., start from the solution and reason towards the problem. This often enhances overall reasoning performance as it enables consistency checks between their forward and backward thinking. To enable Large Language Models (LLMs) to perform reverse thinking, we introduce Reverse-Enhanced Thinking (RevThink), a framework composed of data augmentation and learning objectives. In RevThink, we augment the dataset by collecting structured forward-backward reasoning from a teacher model, consisting of: (1) the original question, (2) forward reasoning, (3) backward question, and (4) backward reasoning. We then employ three objectives to train a smaller student model in a multi-task learning fashion: (a) generate forward reasoning from a question, (b) generate a backward question from a question, and (c) generate backward reasoning from the backward question. Experiments across 12 datasets covering commonsense, math, and logical reasoning show an average 13.53% improvement over the student model's zero-shot performance and a 6.84% improvement over the strongest knowledge distillation baselines. Moreover, our method demonstrates sample efficiency -- using only 10% of the correct forward reasoning from the training data, it outperforms a standard fine-tuning method trained on 10x more forward reasoning. RevThink also exhibits strong generalization to out-of-distribution held-out datasets.
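
Read literally, the three objectives (a)-(c) correspond to three conditional likelihood terms. A plausible unweighted form of the combined training loss, which the abstract does not state explicitly, is:

```latex
% Sketch of the multi-task objective implied by (a)-(c); equal weighting is an assumption.
\mathcal{L}(\theta) =
  \underbrace{-\log p_\theta(r_f \mid q)}_{\text{(a) forward reasoning}}
  \; + \;
  \underbrace{-\log p_\theta(q_b \mid q)}_{\text{(b) backward question}}
  \; + \;
  \underbrace{-\log p_\theta(r_b \mid q_b)}_{\text{(c) backward reasoning}}
```

where q is the original question, r_f the forward reasoning, q_b the backward question, and r_b the backward reasoning generated by the teacher model.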