The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs
Akshit Sinha, Arvindh Arun, Shashwat Goel, Steffen Staab, Jonas Geiping
2025-09-15
Summary
This paper investigates whether making large language models (LLMs) bigger continues to improve their performance, specifically on tasks that require many steps to complete.
What's the problem?
LLMs are surprisingly bad at completing even simple tasks if those tasks require a long sequence of steps. It's not that they can't *think* through the problem, but rather they make mistakes as they try to *do* the steps. The issue isn't just about the models getting confused with long inputs; they actually seem to get worse when they see their own previous errors, a phenomenon called 'self-conditioning'. Simply making the model larger doesn't fix this self-conditioning problem.
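The self-conditioning effect can be illustrated with a toy simulation. This is our own sketch, not the paper's setup: we assume a fixed per-step error rate that jumps by a penalty once any earlier step has failed, mimicking a model that becomes less reliable when its context contains its own mistakes.

```python
import random

def simulate(n_steps, trials, base_err, penalty, seed=0):
    """Average number of mistakes over `trials` runs of an n-step task.

    Toy model (our assumption, not the paper's): each step fails with
    probability base_err, plus an extra `penalty` once any earlier step
    has already failed -- a crude stand-in for 'self-conditioning'.
    """
    rng = random.Random(seed)
    total_errors = 0
    for _ in range(trials):
        failed_before = False
        for _ in range(n_steps):
            p = base_err + (penalty if failed_before else 0.0)
            if rng.random() < p:
                total_errors += 1
                failed_before = True
    return total_errors / trials

# Same base error rate; only the self-conditioning penalty differs.
no_sc = simulate(100, 2000, base_err=0.02, penalty=0.0)
with_sc = simulate(100, 2000, base_err=0.02, penalty=0.05)
print(no_sc, with_sc)
```

With the penalty switched on, the average error count per run rises well above the baseline, even though the first mistake is exactly as likely in both conditions: errors beget errors.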
What's the solution?
The researchers focused on the model's ability to *execute* a plan, rather than its ability to come up with the plan itself. They gave the models the knowledge and the steps needed to solve a problem, and then tested how many steps the model could correctly follow. They found that larger models were much better at consistently executing longer sequences of steps, even when smaller models were perfect at single steps. They also compared this to newer 'thinking' models that can solve problems in one go, and found that those don't suffer from the same self-conditioning issues.
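The measurement behind "how many steps the model could correctly follow" amounts to finding the longest correct prefix of the model's outputs against the plan's expected results. A minimal sketch (function name and toy data are ours, not the paper's):

```python
def steps_before_first_error(outputs, expected):
    """Length of the longest correct prefix: how many plan steps were
    executed correctly before the first mistake."""
    n = 0
    for got, want in zip(outputs, expected):
        if got != want:
            break
        n += 1
    return n

# Toy example: a model that slips on step 4 of a 5-step plan.
print(steps_before_first_error([1, 2, 3, 9, 5], [1, 2, 3, 4, 5]))  # 3
```

Scoring the prefix rather than the total number of correct steps matters here: in a long-horizon task, one mistake mid-way can derail everything that follows, so the first error is what bounds the usable horizon.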
Why it matters?
This work helps explain why LLMs can sometimes seem brilliant at complex reasoning but then fail at surprisingly simple tasks when those tasks are extended. It highlights that scaling up model size is still incredibly valuable, especially for tasks that require many steps, and points to the importance of improving a model’s ability to reliably execute plans without getting tripped up by its own mistakes.
Abstract
Does continued scaling of large language models (LLMs) yield diminishing returns? Real-world value often stems from the length of task an agent can complete. We start this work by observing the simple but counterintuitive fact that marginal gains in single-step accuracy can compound into exponential improvements in the length of a task a model can successfully complete. Then, we argue that failures of LLMs when simple tasks are made longer arise from mistakes in execution, rather than an inability to reason. We propose isolating execution capability by explicitly providing the knowledge and plan needed to solve a long-horizon task. We find that larger models can correctly execute significantly more turns even when small models have 100% single-turn accuracy. We observe that the per-step accuracy of models degrades as the number of steps increases. This is not just due to long-context limitations -- curiously, we observe a self-conditioning effect: models become more likely to make mistakes when the context contains their errors from prior turns. Self-conditioning is not reduced by simply scaling model size. In contrast, recent thinking models do not self-condition, and can also execute much longer tasks in a single turn. We conclude by benchmarking frontier thinking models on the length of task they can execute in a single turn. Overall, by focusing on the ability to execute, we hope to reconcile debates on how LLMs can solve complex reasoning problems yet fail at simple tasks when made longer, and highlight the massive benefits of scaling model size and sequential test-time compute for long-horizon tasks.
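The abstract's compounding claim has a simple back-of-the-envelope form. If each step succeeds independently with probability p, an n-step task succeeds with probability p^n, so the longest task completable at a success threshold s is n = log(s)/log(p). The numbers below are our illustration, not figures from the paper:

```python
import math

def horizon(p, s=0.5):
    """Longest task length n (in steps) completable with probability
    at least s, assuming independent per-step accuracy p:
    p**n >= s  implies  n <= log(s) / log(p)."""
    return math.floor(math.log(s) / math.log(p))

# Small per-step gains, large horizon gains:
for p in (0.99, 0.999, 0.9999):
    print(f"per-step accuracy {p}: horizon {horizon(p)} steps")
```

Going from 99% to 99.9% per-step accuracy is a gain of under one percentage point, yet it stretches the 50%-success horizon from roughly 68 steps to roughly 692, which is why marginal single-step improvements can look like diminishing returns on short benchmarks while transforming what an agent can finish end to end.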