DeepPrune: Parallel Scaling without Inter-trace Redundancy

Shangqing Tu, Yaxuan Li, Yushi Bai, Lei Hou, Juanzi Li

2025-10-10

Summary

This paper introduces a new method called DeepPrune to make large language models (LLMs) reason more efficiently when using a technique called parallel scaling. Parallel scaling involves the LLM thinking through a problem in multiple ways at the same time, but it often ends up repeating itself.

What's the problem?

When LLMs use parallel scaling to solve complex problems, they generate many different 'lines of thought' to arrive at an answer. However, the researchers found that a huge amount of this work is redundant – over 80% of the time, these different thought processes lead to the *same* final answer. This means a lot of computing power is wasted on doing the same thing over and over again, making the process slow and expensive.
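The redundancy figure above can be made concrete with a small sketch: given the final answers produced by a batch of parallel traces, count every trace whose answer merely repeats one that another trace already produced. The function below is illustrative only, not code from the paper.

```python
from collections import Counter

def redundancy_fraction(final_answers):
    """Fraction of traces whose final answer duplicates another trace's.

    Every copy of an answer beyond the first occurrence is counted
    as redundant computation.
    """
    counts = Counter(final_answers)
    redundant = sum(c - 1 for c in counts.values())
    return redundant / len(final_answers)

# e.g. 8 parallel traces, 7 of which converge on the same answer "42"
answers = ["42", "42", "42", "17", "42", "42", "42", "42"]
print(redundancy_fraction(answers))  # 0.75: 6 of the 8 traces repeat an earlier answer
```

In the paper's analysis, this kind of duplication exceeded 80% of traces, which is the waste DeepPrune targets.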

What's the solution?

DeepPrune tackles this inefficiency by dynamically stopping redundant reasoning paths. It uses a special 'judge' model, trained to predict whether different reasoning paths will end up with the same answer, even before they have finished. This judge is accurate at spotting equivalent answers (0.87 AUROC in the paper's evaluation). An online greedy clustering algorithm then uses these predictions to prune away the paths that are likely duplicates, keeping only the diverse ones. This significantly reduces the amount of computation needed without sacrificing accuracy.
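The pruning step can be sketched as a greedy online clustering pass. Below, `judge(a, b)` is a hypothetical interface standing in for the paper's judge model: it is assumed to return the predicted probability that two partial traces will reach the same final answer. The threshold and toy judge are illustrative assumptions, not details from the paper.

```python
def prune_traces(partial_traces, judge, threshold=0.5):
    """Greedy online clustering sketch: keep one representative per
    predicted answer cluster, prune the rest.

    Each incoming trace is compared against the representatives of
    existing clusters; if any comparison exceeds `threshold`, the trace
    is pruned (its likely answer is already covered), otherwise it
    founds a new cluster and keeps generating.
    """
    representatives = []  # one surviving trace per answer cluster
    for trace in partial_traces:
        if any(judge(rep, trace) >= threshold for rep in representatives):
            continue  # likely duplicate answer: stop this trace early
        representatives.append(trace)
    return representatives

# Toy judge for demonstration: calls two traces equivalent
# when their last lines match.
toy_judge = lambda a, b: 1.0 if a.splitlines()[-1] == b.splitlines()[-1] else 0.0
traces = ["step1\nans=42", "other\nans=42", "alt\nans=17"]
print(prune_traces(traces, toy_judge))  # one "ans=42" trace and the "ans=17" trace survive
```

Because pruning happens online, a trace judged redundant can be cut off mid-generation, which is where the token savings come from.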

Why it matters?

This work is important because it makes powerful reasoning with LLMs much more practical. By dramatically reducing the computational cost – by over 80% in many cases – DeepPrune allows for more complex problems to be solved efficiently, bringing high-performance reasoning within reach for a wider range of applications and resources. It sets a new benchmark for how to do parallel reasoning effectively.

Abstract

Parallel scaling has emerged as a powerful paradigm to enhance reasoning capabilities in large language models (LLMs) by generating multiple Chain-of-Thought (CoT) traces simultaneously. However, this approach introduces significant computational inefficiency due to inter-trace redundancy -- our analysis reveals that over 80% of parallel reasoning traces yield identical final answers, representing substantial wasted computation. To address this critical efficiency bottleneck, we propose DeepPrune, a novel framework that enables efficient parallel scaling through dynamic pruning. Our method features a specialized judge model trained with focal loss and oversampling techniques to accurately predict answer equivalence from partial reasoning traces, achieving 0.87 AUROC on equivalence prediction, combined with an online greedy clustering algorithm that dynamically prunes redundant paths while preserving answer diversity. Comprehensive evaluations across three challenging benchmarks (AIME 2024, AIME 2025, and GPQA) and multiple reasoning models demonstrate that DeepPrune achieves remarkable token reduction of over 80% compared to conventional consensus sampling in most cases, while maintaining competitive accuracy within 3 percentage points. Our work establishes a new standard for efficient parallel reasoning, making high-performance reasoning more efficient. Our code and data are here: https://deepprune.github.io/
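The abstract mentions that the judge is trained with focal loss to cope with the imbalance between equivalent and non-equivalent trace pairs. As a reference point, here is the standard binary focal loss (Lin et al.); the hyperparameter values are the common defaults, not values reported by this paper.

```python
import math

def binary_focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Standard binary focal loss for a single example.

    p: predicted probability of the positive class
       (here: the pair is answer-equivalent).
    y: true label, 1 or 0.
    The (1 - p_t)**gamma factor down-weights easy, well-classified
    examples so training focuses on hard pairs; alpha rebalances
    the two classes.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))

# A confident correct prediction contributes far less loss than an uncertain one:
print(binary_focal_loss(0.9, 1) < binary_focal_loss(0.6, 1))  # True
```

Oversampling the rarer class of pairs complements this by ensuring the judge sees enough of both outcomes during training.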