Solving a Million-Step LLM Task with Zero Errors
Elliot Meyerson, Giuseppe Paolo, Roberto Dailey, Hormoz Shahrzad, Olivier Francon, Conor F. Hayes, Xin Qiu, Babak Hodjat, Risto Miikkulainen
2025-11-14
Summary
This paper introduces a new system called MAKER that uses large language models (LLMs) to complete extremely long tasks, demonstrated on a task requiring over one million dependent steps, without making a single error.
What's the problem?
While LLMs are getting better at reasoning and using tools, they struggle when asked to perform a long series of steps to solve a problem. Small mistakes accumulate over time, making it impossible to reliably complete tasks that require many dependent actions, the kind a person might carry out over a complex job or project. Think of it like building a very tall tower of blocks: even a small wobble at each level means the tower eventually falls over.
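To make the compounding concrete, here is a small illustrative calculation (the per-step error rates are hypothetical examples, not figures from the paper): if each step succeeds independently with probability 1 - e, the chance of finishing N steps cleanly is (1 - e)^N, which collapses toward zero as N grows.

```python
# Illustrative arithmetic only: the per-step error rates below are
# hypothetical examples, not figures reported in the paper.

def chance_of_finishing(per_step_error: float, num_steps: int) -> float:
    """Probability of completing num_steps in a row with no error,
    assuming each step fails independently with probability per_step_error."""
    return (1.0 - per_step_error) ** num_steps

for per_step_error in (0.01, 0.001, 0.0001):
    p = chance_of_finishing(per_step_error, 1_000_000)
    print(f"per-step error {per_step_error:.2%} -> "
          f"P(one million clean steps) = {p:.2e}")
```

Even an error rate of 0.01% per step leaves essentially no chance of getting through a million steps without correction.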
What's the solution?
The researchers address this by breaking the big task down into a huge number of very small, manageable subtasks, each handled by a focused 'microagent'. Because every step is so narrow, errors can be caught and corrected locally: multiple agents attempt the same step, and a voting scheme decides which answer to keep before the process moves on. This combination of extreme decomposition and per-step error correction lets the system scale to a massive number of steps.
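The sketch below illustrates the general shape of this idea. It is not the authors' implementation: the function names are placeholders, `call_microagent` stands in for a single focused LLM call, and the simple majority vote is an assumption for illustration; the paper describes its own, more efficient voting scheme.

```python
from collections import Counter
from typing import Callable, Sequence

def call_microagent(state: str, subtask: str) -> str:
    """Placeholder for one focused LLM call that solves a single tiny subtask."""
    raise NotImplementedError("plug in an LLM client here")

def voted_step(state: str, subtask: str, num_agents: int = 5) -> str:
    """Run several independent microagents on the same subtask and keep
    the majority answer (illustrative stand-in for the paper's voting scheme)."""
    answers = [call_microagent(state, subtask) for _ in range(num_agents)]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

def run_decomposed_task(initial_state: str, subtasks: Sequence[str],
                        apply_result: Callable[[str, str], str]) -> str:
    """Execute a long task as a chain of tiny, error-checked steps."""
    state = initial_state
    for subtask in subtasks:
        result = voted_step(state, subtask)
        state = apply_result(state, result)  # corrected result becomes the new state
    return state
```

The key design choice is that correction happens at every step, before an error can propagate into the rest of the chain, rather than trying to detect failures after many steps have already gone wrong.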
Why it matters?
This research suggests that instead of just trying to make LLMs individually smarter, we can achieve much more by creating systems where many simpler LLM-powered agents work together in a highly organized way. This could eventually allow us to tackle problems that are currently too complex for even the best LLMs, potentially at the scale of entire organizations or even society.
Abstract
LLMs have achieved remarkable breakthroughs in reasoning, insights, and tool use, but chaining these abilities into extended processes at the scale of those routinely executed by humans, organizations, and societies has remained out of reach. The models have a persistent error rate that prevents scale-up: for instance, recent experiments in the Towers of Hanoi benchmark domain showed that the process inevitably becomes derailed after at most a few hundred steps. Thus, although LLM research is often still benchmarked on tasks with relatively few dependent logical steps, there is increasing attention on the ability (or inability) of LLMs to perform long range tasks. This paper describes MAKER, the first system that successfully solves a task with over one million LLM steps with zero errors, and, in principle, scales far beyond this level. The approach relies on an extreme decomposition of a task into subtasks, each of which can be tackled by focused microagents. The high level of modularity resulting from the decomposition allows error correction to be applied at each step through an efficient multi-agent voting scheme. This combination of extreme decomposition and error correction makes scaling possible. Thus, the results suggest that instead of relying on continual improvement of current LLMs, massively decomposed agentic processes (MDAPs) may provide a way to efficiently solve problems at the level of organizations and societies.
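One way to see why per-step error correction changes the scaling picture, again with hypothetical numbers rather than the paper's, and using a simple majority vote rather than the paper's own scheme: if k independent agents each err with probability e, the voted step errs only when a majority of them err, which can push the effective per-step error low enough for a million dependent steps to complete reliably.

```python
from math import comb

# Hypothetical numbers for illustration; the paper's voting scheme and
# measured error rates may differ.

def majority_vote_error(per_agent_error: float, num_agents: int) -> float:
    """Probability that a majority of num_agents (odd) independent agents err."""
    need = num_agents // 2 + 1
    return sum(comb(num_agents, k)
               * per_agent_error ** k
               * (1 - per_agent_error) ** (num_agents - k)
               for k in range(need, num_agents + 1))

per_agent_error = 0.01  # assumed 1% error for a single LLM call
for num_agents in (1, 3, 5, 7, 9):
    step_error = majority_vote_error(per_agent_error, num_agents)
    total_success = (1 - step_error) ** 1_000_000
    print(f"{num_agents} agents: per-step error {step_error:.2e}, "
          f"P(10^6 clean steps) = {total_success:.3f}")
```

With a single agent the million-step success probability is effectively zero, while under these assumed numbers a handful of voters per step is enough to make it likely that the whole chain completes cleanly.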