DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems
Ming Ma, Jue Zhang, Fangkai Yang, Yu Kang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang
2025-12-09
Summary
This paper introduces a new way to find and fix problems in systems made up of multiple 'agents' powered by large language models (the technology behind chatbots like ChatGPT). These systems can be complex, and figuring out *why* they fail is hard.
What's the problem?
Currently, when these multi-agent systems fail, people use the language models themselves to analyze the logs (a record of everything the agents said and did) and pinpoint the exact agent and step that caused the error. This method has two big issues. First, it only *guesses* what went wrong; the guess is never tested against the running system. Second, it tries to pin the blame on a single agent or step, when in reality several different changes could each fix the problem, so single-point blame is often misleading.
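To make that baseline concrete, here is a minimal sketch (not the paper's code) of log-only failure localization: the full interaction log is handed to an LLM, which is asked to name the agent and step it believes caused the failure. The `call_llm` helper and the prompt wording are hypothetical placeholders.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper that sends a prompt to some LLM and returns its reply."""
    raise NotImplementedError("wire this to your model provider")

def localize_failure(log: list[dict]) -> dict:
    """Log-only attribution: ask an LLM to guess the faulty agent and step.

    The guess is never tested against the running system, which is the
    first limitation described above.
    """
    prompt = (
        "The following multi-agent interaction log ended in a failed task.\n"
        "Identify the single agent and step most responsible for the failure.\n"
        'Reply as JSON: {"agent": ..., "step": ..., "reason": ...}\n\n'
        + json.dumps(log, indent=2)
    )
    return json.loads(call_llm(prompt))
```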
What's the solution?
The researchers developed a framework called DoVer that actively *tests* potential fixes instead of just suggesting what's wrong. DoVer makes small, targeted changes (for example, rewriting a message an agent sent or adjusting its plan), re-runs the system, and checks whether the change resolves the failure. When evaluating DoVer, the researchers also moved away from asking whether it correctly *identified* the cause of a failure and instead measured whether it actually *fixed* the failure or made measurable progress toward completing the task. They tested DoVer on several different agent-system setups and found it could turn a significant share of failed attempts into successes.
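Below is a minimal sketch of this intervene-and-verify loop, assuming a hypothetical `rerun_from` function that replays the multi-agent system from the edited step and a hypothetical `task_succeeded` check on the outcome; it illustrates the idea rather than DoVer's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Intervention:
    """One candidate fix: replace an agent's message or plan at a given step."""
    step: int
    agent: str
    edited_content: str

def verify_by_intervention(
    interventions: list[Intervention],
    rerun_from: Callable[[Intervention], dict],   # hypothetical: replays the system from the edited step
    task_succeeded: Callable[[dict], bool],       # hypothetical: checks the final task outcome
) -> list[tuple[Intervention, bool]]:
    """Apply each candidate fix, re-run the system, and record whether it
    turned the failed trial into a success. A failure hypothesis is supported
    only if its corresponding intervention actually repairs the task."""
    results = []
    for iv in interventions:
        outcome = rerun_from(iv)
        results.append((iv, task_succeeded(outcome)))
    return results
```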
Why it matters?
This work is important because it shows a practical way to make these complex, AI-powered systems more reliable. By actively testing fixes instead of just guessing, and by focusing on outcomes rather than pinpointing blame, DoVer offers a more effective approach to debugging and opens the door for building more robust and scalable multi-agent systems.
Abstract
Large language model (LLM)-based multi-agent systems are challenging to debug because failures often arise from long, branching interaction traces. The prevailing practice is to leverage LLMs for log-based failure localization, attributing errors to a specific agent and step. However, this paradigm has two key limitations: (i) log-only debugging lacks validation, producing untested hypotheses, and (ii) single-step or single-agent attribution is often ill-posed, as we find that multiple distinct interventions can independently repair the failed task. To address the first limitation, we introduce DoVer, an intervention-driven debugging framework, which augments hypothesis generation with active verification through targeted interventions (e.g., editing messages, altering plans). For the second limitation, rather than evaluating on attribution accuracy, we focus on measuring whether the system resolves the failure or makes quantifiable progress toward task success, reflecting a more outcome-oriented view of debugging. Within the Magentic-One agent framework, on datasets derived from GAIA and AssistantBench, DoVer flips 18-28% of failed trials into successes, achieves up to 16% milestone progress, and validates or refutes 30-60% of failure hypotheses. DoVer also performs effectively on a different dataset (GSMPlus) and agent framework (AG2), where it recovers 49% of failed trials. These results highlight intervention as a practical mechanism for improving reliability in agentic systems and open opportunities for more robust, scalable debugging methods for LLM-based multi-agent systems. Project website and code will be available at https://aka.ms/DoVer.
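As a back-of-the-envelope illustration of the outcome-oriented metrics named in the abstract (flip rate and milestone progress), the sketch below computes both over a set of re-run failed trials; the record fields and example numbers are assumptions for illustration, not the paper's evaluation code or data.

```python
def flip_rate(trials: list[dict]) -> float:
    """Fraction of originally failed trials that an intervention turned into successes."""
    flipped = sum(1 for t in trials if t["succeeded_after_intervention"])
    return flipped / len(trials)

def milestone_progress(trials: list[dict]) -> float:
    """Average gain in the fraction of task milestones completed after intervention."""
    gains = [
        (t["milestones_after"] - t["milestones_before"]) / t["milestones_total"]
        for t in trials
    ]
    return sum(gains) / len(gains)

# Illustrative example: 4 failed trials; one flips to success, milestone coverage improves.
trials = [
    {"succeeded_after_intervention": True,  "milestones_before": 2, "milestones_after": 5, "milestones_total": 5},
    {"succeeded_after_intervention": False, "milestones_before": 1, "milestones_after": 2, "milestones_total": 5},
    {"succeeded_after_intervention": False, "milestones_before": 0, "milestones_after": 0, "milestones_total": 5},
    {"succeeded_after_intervention": False, "milestones_before": 3, "milestones_after": 3, "milestones_total": 5},
]
print(flip_rate(trials))           # 0.25 -> 25% of failed trials flipped to success
print(milestone_progress(trials))  # 0.20 -> 20% average milestone gain
```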