
When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning

Nishad Singhi, Hritik Bansal, Arian Hosseini, Aditya Grover, Kai-Wei Chang, Marcus Rohrbach, Anna Rohrbach

2025-04-02

Summary

This paper explores how to make AI models solve problems more efficiently by deciding whether a fixed compute budget is better spent generating many candidate solutions or generating fewer solutions and verifying them more carefully.

What's the problem?

Reasoning with large language models takes a lot of computing power at inference time, and given a fixed budget it is not clear whether that compute is better spent sampling many possible answers or sampling fewer answers and carefully checking each one.

What's the solution?

The researchers compared Self-Consistency (SC), which samples many solutions and takes a majority vote, against Generative Reward Models (GenRM), which sample fewer solutions but score each one with multiple verification chains-of-thought, while holding the total inference compute fixed. They found that for most practical budgets SC is more compute-efficient: GenRM needed up to 8x the compute just to match SC and significantly more to surpass it. Their inference scaling laws further show that, when verification is used, the budget should still be tilted toward generating more solutions rather than more verifications. A minimal sketch of the two strategies is given below.
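To make the comparison concrete, here is a minimal, self-contained sketch of the two strategies under a shared budget. It is not the authors' code: sample_solution and sample_verification are hypothetical stand-ins for LLM calls, and the budget is counted as one unit per generated solution or verification chain.

```python
import random
from collections import Counter

def sample_solution() -> str:
    """Hypothetical stand-in for sampling one LLM solution; the
    correct answer "42" comes up 40% of the time."""
    return "42" if random.random() < 0.4 else random.choice(["41", "43", "45"])

def sample_verification(answer: str) -> float:
    """Hypothetical stand-in for one generative verification
    chain-of-thought that ends in a correctness score in [0, 1]."""
    base = 0.8 if answer == "42" else 0.3  # an imperfect, noisy verifier
    return min(1.0, max(0.0, base + random.gauss(0.0, 0.2)))

def self_consistency(budget: int) -> str:
    """SC: spend the whole budget on solutions, then majority-vote."""
    answers = [sample_solution() for _ in range(budget)]
    return Counter(answers).most_common(1)[0][0]

def genrm_selection(budget: int, verifications_per_solution: int) -> str:
    """GenRM-style selection: fewer solutions, each scored by several
    verification chains; budget ~ num_solutions * (1 + verifications)."""
    num_solutions = max(1, budget // (1 + verifications_per_solution))
    answers = [sample_solution() for _ in range(num_solutions)]
    scores = [sum(sample_verification(a) for _ in range(verifications_per_solution))
              for a in answers]
    return answers[scores.index(max(scores))]

if __name__ == "__main__":
    random.seed(0)
    budget = 32  # total generations allowed (solutions + verifications)
    print("Self-Consistency picks:", self_consistency(budget))
    print("GenRM (V=3) picks:     ", genrm_selection(budget, verifications_per_solution=3))
```

Under the same budget, SC gets 32 solution samples, while the GenRM-style strategy gets only 8 solutions with 3 verifications each, which is the trade-off the paper quantifies.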

Why it matters?

This work matters because it gives practical guidance for spending a limited inference budget: in most settings, sampling more solutions beats investing heavily in verification, which helps practitioners get stronger reasoning performance without wasting computing resources.

Abstract

Scaling test-time compute has emerged as a key strategy for enhancing the reasoning capabilities of large language models (LLMs), particularly in tasks like mathematical problem-solving. A traditional approach, Self-Consistency (SC), generates multiple solutions to a problem and selects the most common answer via majority voting. Another common method involves scoring each solution with a reward model (verifier) and choosing the best one. Recent advancements in Generative Reward Models (GenRM) reframe verification as a next-token prediction task, enabling inference-time scaling along a new axis. Specifically, GenRM generates multiple verification chains-of-thought to score each solution. Under a limited inference budget, this introduces a fundamental trade-off: should you spend the budget on scaling solutions via SC or generate fewer solutions and allocate compute to verification via GenRM? To address this, we evaluate GenRM against SC under a fixed inference budget. Interestingly, we find that SC is more compute-efficient than GenRM for most practical inference budgets across diverse models and datasets. For instance, GenRM first matches SC after consuming up to 8x the inference compute and requires significantly more compute to outperform it. Furthermore, we derive inference scaling laws for the GenRM paradigm, revealing that compute-optimal inference favors scaling solution generation more aggressively than scaling the number of verifications. Our work provides practical guidance on optimizing test-time scaling by balancing solution generation and verification. The code is available at https://github.com/nishadsinghi/sc-genrm-scaling.
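One way to make the trade-off in the abstract explicit is a simple budget-accounting sketch. The cost model below is an assumption for illustration (each verification chain is treated as costing roughly one solution-length generation), not necessarily the paper's exact formulation:

```latex
% S = number of sampled solutions, V = verification chains per solution.
% Assumption: a verification chain costs about one solution-length generation.
C(S, V) \approx S \,(1 + V)
% Self-Consistency is the special case V = 0, so the whole budget goes to solutions (S = C).
% GenRM spends the same budget C on fewer solutions, S = C / (1 + V), with V > 0.
% The paper's finding: for most practical budgets C, accuracy grows faster by
% increasing S than by trading solutions away for additional verifications.
```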