The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute

Aman Sharma, Paras Chopra

2025-11-06

Summary

This paper investigates the best way to get language models to perform complex reasoning tasks when the compute available at inference time is limited, a setting known as 'test-time scaling'.

What's the problem?

Currently, a popular method called 'self-consistency' involves running the language model multiple times in parallel, generating independent answers, and then picking the most common one. The question this paper addresses is whether it's more effective to spend the same compute budget on fewer, longer chains of thought, where each step builds on the previous one, instead of many independent, shorter attempts.
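The parallel self-consistency baseline amounts to a simple majority vote over the final answers of independent chains. A minimal sketch (the function name and the example answers are illustrative, not from the paper):

```python
from collections import Counter

def self_consistency_vote(answers):
    """Parallel self-consistency: sample N independent reasoning chains,
    extract each chain's final answer, and return the most common one."""
    return Counter(answers).most_common(1)[0][0]

# Five independent chains sampled under the same total token budget.
print(self_consistency_vote(["42", "17", "42", "42", "9"]))  # -> 42
```

Note that ties and the cost of each chain are ignored here; the paper's point is that, at matched compute, this many-short-chains strategy is usually not the best use of the budget.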

What's the solution?

The researchers found that sequential scaling, where each chain explicitly builds upon the previous reasoning, significantly outperformed the parallel self-consistency method in almost all cases, improving accuracy by up to 46.7%. They also developed a new technique called 'inverse-entropy weighted voting', which further improves accuracy by giving more weight to answers produced by more confident, focused reasoning chains. This method essentially prioritizes answers that had a clear line of reasoning.
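The idea behind inverse-entropy weighted voting can be sketched as follows. This is a simplified illustration, assuming each chain is summarized by its final answer and a mean per-token entropy; the paper's exact entropy definition and normalization may differ:

```python
from collections import defaultdict

def inverse_entropy_vote(chains):
    """chains: list of (answer, mean_token_entropy) pairs, one per
    reasoning chain. Each answer's vote is weighted by the inverse of
    its chain's entropy, so confident (low-entropy) chains count more.
    Sketch only; not the paper's reference implementation."""
    scores = defaultdict(float)
    for answer, entropy in chains:
        scores[answer] += 1.0 / (entropy + 1e-9)  # epsilon avoids div by zero
    return max(scores, key=scores.get)

# One confident chain (entropy 0.5) can outweigh two uncertain ones (entropy 2.0).
print(inverse_entropy_vote([("a", 2.0), ("b", 0.5), ("a", 2.0)]))  # -> b
```

Under plain majority voting the answer "a" would win here; weighting by inverse entropy lets the single clear, confident chain override two hesitant ones, which is the behavior the paper credits for the accuracy gains.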

Why it matters?

This research challenges the widely accepted idea that parallel reasoning is the best approach for improving language model performance. It suggests that a sequential, iterative approach is more robust and effective, meaning we should rethink how we optimize language models for reasoning tasks and focus on methods that allow them to refine their thinking step-by-step.

Abstract

We revisit test-time scaling for language model reasoning and ask a fundamental question: at equal token budget and compute, is it better to run multiple independent chains in parallel, or to run fewer chains that iteratively refine through sequential steps? Through comprehensive evaluation across 5 state-of-the-art open source models and 3 challenging reasoning benchmarks, we find that sequential scaling, where chains explicitly build upon previous attempts, consistently outperforms the dominant parallel self-consistency paradigm in 95.6% of configurations, with gains in accuracy up to 46.7%. Further, we introduce inverse-entropy weighted voting, a novel training-free method to further boost the accuracy of sequential scaling. By weighting answers in proportion to the inverse entropy of their reasoning chains, we increase our success rate over parallel majority voting and establish it as the optimal test-time scaling strategy. Our findings fundamentally challenge the parallel reasoning orthodoxy that has dominated test-time scaling since Wang et al.'s self-consistency decoding (Wang et al., 2022), positioning sequential refinement as the robust default for modern LLM reasoning and necessitating a paradigm shift in how we approach inference-time optimization.