Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents
Wenxuan Ding, Nicholas Tomlin, Greg Durrett
2026-02-20
Summary
This paper explores how to make large language models, or LLMs, better at tasks that require them to interact with an environment and gather information as they go, like writing code or searching for information.
What's the problem?
LLMs often struggle to decide when to stop gathering information and commit to an answer. There is a tradeoff between the cost of acquiring more information and the risk of being wrong. For example, when writing code, it's better to test it if you're unsure, but testing takes time and effort. LLMs don't naturally weigh this cost-benefit tradeoff, which leads to inefficient or incorrect solutions.
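To make the tradeoff concrete, here is a minimal sketch of the one-step cost-benefit comparison described above. The function name and the numbers are hypothetical, chosen only for illustration, and the sketch assumes a test reliably catches mistakes.

```python
# Hypothetical costs and probabilities, for illustration only.
def should_test(p_wrong: float, cost_test: float, cost_mistake: float) -> bool:
    """Test first if the expected cost of committing untested code
    exceeds the cost of running a test (assuming the test catches bugs)."""
    return p_wrong * cost_mistake > cost_test

# If the code is 30% likely to be wrong, a mistake costs 10 units, and a
# test costs 1 unit, then testing pays off: 0.3 * 10 = 3 > 1.
print(should_test(p_wrong=0.3, cost_test=1.0, cost_mistake=10.0))  # True
```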
What's the solution?
The researchers developed a method called Calibrate-Then-Act, or CTA. This method gives the LLM extra context about the uncertainty of its knowledge and the cost of acquiring more information. Essentially, it helps the LLM reason about whether further exploration is worth the cost before committing to an answer. They framed tasks like information retrieval and coding as a series of decisions the LLM has to make, and CTA helps it make those decisions more strategically. The improvement held up even when the LLM was further trained with reinforcement learning.
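One plausible way to realize this idea is to spell out the prior and the action costs in the agent's prompt before it acts. The sketch below does exactly that; the prompt wording, field names, and numbers are assumptions made for illustration, not the paper's exact implementation.

```python
# A minimal sketch of the CTA idea: surface the prior over the latent
# environment state and the cost of each action in the agent's context,
# so the model can weigh exploration against committing to an answer.
def build_cta_context(prior: dict, action_costs: dict) -> str:
    prior_lines = "\n".join(f"- P({state}) = {p:.2f}" for state, p in prior.items())
    cost_lines = "\n".join(f"- {action}: {cost}" for action, cost in action_costs.items())
    return (
        "Before acting, weigh the cost of exploring against the risk of a wrong answer.\n"
        f"Prior over the latent environment state:\n{prior_lines}\n"
        f"Cost of each action:\n{cost_lines}\n"
        "Pick the action with the lowest expected cost."
    )

# Example: a coding task where the agent can run a cheap test or submit directly.
context = build_cta_context(
    prior={"code_correct": 0.7, "code_buggy": 0.3},
    action_costs={"run_test": 1.0, "submit_incorrect_code": 10.0},
)
print(context)  # This calibration block would be prepended to the agent's prompt.
```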
Why it matters?
This work is important because it makes LLMs more practical for real-world applications. By teaching LLMs to weigh costs and benefits, they can solve complex problems more efficiently and accurately, ultimately leading to more reliable and helpful AI systems.
Abstract
LLMs are increasingly being used for complex problems which are not necessarily resolved in a single response, but require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs in when to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about the correctness of that code; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs, then perform more optimal environment exploration. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), where we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
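The sequential formalization in the abstract can be illustrated with a toy simulation: an agent holds a belief over a binary latent state, pays a fixed cost per query, receives noisy observations, updates its posterior with Bayes' rule, and commits once answering is cheaper in expectation than one more query. Everything here, from the environment to the numbers and the myopic stopping rule, is a hypothetical sketch of the problem structure, not the paper's setup.

```python
import random

# A toy sequential decision problem under uncertainty, in the spirit of the
# abstract's formalization. The latent state is a binary fact; each query
# costs QUERY_COST and reports the truth with probability OBS_ACCURACY; a
# wrong final answer costs MISTAKE_COST. All values are hypothetical.
QUERY_COST = 1.0
OBS_ACCURACY = 0.8
MISTAKE_COST = 10.0

def posterior_update(belief: float, observation: bool) -> float:
    """Bayes update of P(state = True) after one noisy observation."""
    like_true = OBS_ACCURACY if observation else 1 - OBS_ACCURACY
    like_false = 1 - OBS_ACCURACY if observation else OBS_ACCURACY
    return like_true * belief / (like_true * belief + like_false * (1 - belief))

def run_episode(true_state: bool, prior: float = 0.5) -> float:
    """Query until answering now is cheaper in expectation than one more
    query (a simple myopic stopping rule), then commit to the likelier state."""
    belief, total_cost = prior, 0.0
    while min(belief, 1 - belief) * MISTAKE_COST > QUERY_COST:
        total_cost += QUERY_COST
        observation = true_state if random.random() < OBS_ACCURACY else not true_state
        belief = posterior_update(belief, observation)
    answer = belief >= 0.5  # commit to the more likely state
    return total_cost + (0.0 if answer == true_state else MISTAKE_COST)

random.seed(0)
avg_cost = sum(run_episode(True) for _ in range(1000)) / 1000
print(f"average cost per episode: {avg_cost:.2f}")
```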