Budget-aware Test-time Scaling via Discriminative Verification

Kyle Montgomery, Sijun Tan, Yuqi Chen, Siyuan Zhuang, Tianjun Zhang, Raluca Ada Popa, Chenguang Wang

2025-10-17

Summary

This paper explores ways to improve the accuracy of large language models, specifically when they're tackling difficult problems that require careful reasoning.

What's the problem?

Currently, a popular method for boosting accuracy is to have another AI 'verify' the answers generated by the first AI. However, when this verification is done by a generative AI, it is very computationally expensive, requiring substantial processing power and time, which makes it impractical for many real-world uses.

What's the solution?

The researchers investigated a different type of verification, discriminative verification, which is far less demanding on computing resources. They found that while discriminative verification is not very accurate on its own, combining it with a technique called 'self-consistency' (where the AI generates multiple answers and picks the most common one) creates a surprisingly effective and efficient system. Under the same compute budget, this hybrid approach actually outperformed the more expensive generative verification method on a challenging reasoning benchmark.
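The hybrid idea described above can be sketched as verifier-weighted voting: each sampled answer contributes a vote, but votes are weighted by a discriminative verifier's score rather than counted equally. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, answers, and scores are hypothetical.

```python
from collections import defaultdict

def hybrid_select(candidates, verifier_scores):
    """Combine self-consistency (vote counts) with discriminative
    verification (per-candidate scores) via weighted majority voting."""
    weights = defaultdict(float)
    for answer, score in zip(candidates, verifier_scores):
        # Identical final answers pool their verifier-weighted votes,
        # so a frequent answer with decent scores can beat a rare
        # answer that happens to get one high score.
        weights[answer] += score
    return max(weights, key=weights.get)

# Hypothetical example: five sampled answers with made-up verifier scores.
answers = ["42", "42", "41", "42", "41"]
scores = [0.9, 0.8, 0.99, 0.7, 0.95]
print(hybrid_select(answers, scores))  # "42": 2.4 total weight vs. 1.94 for "41"
```

Note how plain self-consistency and plain verification can disagree here: "42" wins on vote count, while "41" holds the single highest verifier score; the weighted sum arbitrates between the two signals.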

Why it matters?

This research demonstrates that you can significantly improve the performance of large language models without needing massive amounts of computing power. It offers a practical and cost-effective way to make these powerful AI tools more accessible and useful for everyday applications, showing that a 'smarter' approach to verification can be better than simply throwing more resources at the problem.

Abstract

Test-time scaling is a powerful strategy for boosting the performance of large language models on complex reasoning tasks. While state-of-the-art approaches often employ generative verifiers to select the best solution from a pool of candidates, this method incurs prohibitive computational costs, limiting its practicality. In this work, we shift the focus to a more budget-aware paradigm: discriminative verification. We conduct a thorough empirical analysis and demonstrate that while discriminative verifiers may underperform in isolation, combining them with self-consistency in a hybrid approach creates a powerful and efficient test-time scaling mechanism. Notably, under a fixed compute budget, this hybrid approach surpasses state-of-the-art generative verification by a significant margin: achieving up to 15.3% higher accuracy on AIME2025. Our findings establish that for practical, real-world applications, budget-aware scaling with discriminative verifiers is not only a "free" upgrade over self-consistency, but also a more effective and efficient alternative to costly generative techniques. Code is available at https://github.com/wang-research-lab/verification.