Reasoning with Sampling: Your Base Model is Smarter Than You Think
Aayush Karan, Yilun Du
2025-10-27
Summary
This paper investigates whether powerful reasoning skills can be unlocked from large language models simply by sampling from them more cleverly at inference time, rather than by training them further with reinforcement learning.
What's the problem?
Currently, the best-performing AI systems for complex reasoning tasks use a two-step process: first, a large language model is pretrained, and then it is further trained using reinforcement learning. It's often unclear how much of the reasoning ability comes from the initial model versus the reinforcement learning step. Researchers want to know whether similar reasoning abilities can be obtained *without* the extra training, which is often expensive and complex.
What's the solution?
The researchers developed a method that repeatedly samples answers from the language model, each time favoring answers the model itself assigns high likelihood. It is inspired by Markov chain Monte Carlo (MCMC), a family of statistical techniques for sampling from a target distribution — here, a sharpened version of the model's own output distribution that concentrates probability on its most likely answers. By iteratively resampling and accepting answers in proportion to how probable the model deems them, they significantly improved reasoning performance on tasks like math problems, coding challenges, and question answering.
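One way to make this concrete is independence Metropolis-Hastings targeting a sharpened (power) distribution p(x)^α, using the base distribution p itself as the proposal. This is a minimal toy sketch of the general idea, not the paper's exact algorithm: the candidate set, the value of α, and the helper names (`sharpened_mcmc`, `sample_fn`) are all illustrative assumptions, and a toy categorical distribution stands in for an LLM's likelihoods.

```python
import math
import random

def sharpened_mcmc(logp, sample_fn, alpha=4.0, n_steps=2000, seed=0):
    """Independence Metropolis-Hastings targeting p(x)**alpha.

    With the base distribution p as the proposal, the acceptance
    ratio simplifies to (p(x_new)/p(x))**(alpha - 1): proposals that
    the base model deems more likely are preferentially accepted.
    (Toy sketch -- in the paper's setting p would be an LLM's own
    sequence likelihood, not a small categorical distribution.)
    """
    rng = random.Random(seed)
    x = sample_fn(rng)          # initial state drawn from the base distribution
    samples = []
    for _ in range(n_steps):
        x_new = sample_fn(rng)  # independent proposal from the base distribution
        log_accept = (alpha - 1.0) * (logp(x_new) - logp(x))
        if math.log(rng.random()) < log_accept:
            x = x_new           # accept the proposal
        samples.append(x)
    return samples

# Toy base distribution over candidate "answers". Sharpening with
# alpha=4 shifts most of the probability mass onto the mode:
# 0.5**4 / (0.5**4 + 0.3**4 + 0.2**4) ~= 0.87, versus 0.5 originally.
base = {"correct": 0.5, "plausible": 0.3, "wrong": 0.2}
answers = list(base)
weights = [base[a] for a in answers]

logp = lambda a: math.log(base[a])
sample_fn = lambda rng: rng.choices(answers, weights=weights)[0]

samples = sharpened_mcmc(logp, sample_fn, alpha=4.0)
freq = samples.count("correct") / len(samples)
```

Running the chain, `freq` lands well above the base probability of 0.5, illustrating how sampling from a sharpened distribution concentrates on the answers the model itself rates most likely, without any further training.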
Why it matters?
This work is important because it suggests we might not always need to train models with reinforcement learning to get better reasoning. If we can unlock existing capabilities within the base model through smart sampling techniques, it could make powerful AI more accessible and easier to develop, as it removes the need for complex training procedures and specialized datasets.
Abstract
Frontier reasoning models have exhibited incredible capabilities across a wide array of disciplines, driven by posttraining large language models (LLMs) with reinforcement learning (RL). However, despite the widespread success of this paradigm, much of the literature has been devoted to disentangling truly novel behaviors that emerge during RL but are not present in the base models. In our work, we approach this question from a different angle, instead asking whether comparable reasoning capabilities can be elicited from base models at inference time by pure sampling, without any additional training. Inspired by Markov chain Monte Carlo (MCMC) techniques for sampling from sharpened distributions, we propose a simple iterative sampling algorithm leveraging the base models' own likelihoods. Over different base models, we show that our algorithm offers substantial boosts in reasoning that nearly match and even outperform those from RL on a wide variety of single-shot tasks, including MATH500, HumanEval, and GPQA. Moreover, our sampler avoids the collapse in diversity over multiple samples that is characteristic of RL-posttraining. Crucially, our method does not require training, curated datasets, or a verifier, suggesting broad applicability beyond easily verifiable domains.