AgentDropoutV2: Optimizing Information Flow in Multi-Agent Systems via Test-Time Rectify-or-Reject Pruning
Yutong Wang, Siyuan Xiong, Xuebo Liu, Wenkang Zhou, Liang Ding, Miao Zhang, Min Zhang
2026-02-27
Summary
This paper introduces AgentDropoutV2, a method for improving the reliability of systems where multiple 'agents' work together to solve problems, specifically by preventing errors made by one agent from ruining the whole team's performance.
What's the problem?
When you have a team of computer programs (agents) trying to figure something out, a mistake by just one program can quickly spread and cause the entire system to fail. Current ways to fix this either require a lot of upfront planning and engineering, or constant retraining of the system, which is expensive and doesn't hold up when the problems change. Basically, it's hard to make these multi-agent systems robust and adaptable without a lot of effort.
What's the solution?
AgentDropoutV2 works like an active firewall during problem-solving. It checks each agent's answer and tries to correct it using information retrieved from past mistakes. It learns to recognize common error patterns and uses this knowledge to fix outputs. If an answer can't be fixed, it's discarded to prevent it from affecting the others, but the system has a backup plan to keep things running smoothly. Importantly, this all happens *while* the system is working, without needing to retrain the agents.
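The rectify-or-reject loop described above can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: `verify`, `rectify`, the `IndicatorPool` class, and the substring-matching retrieval are all hypothetical stand-ins for the LLM-based checker, the retrieval-augmented rectifier, and the failure-driven indicator pool.

```python
# Hypothetical sketch of a test-time rectify-or-reject firewall.
# All names and the fallback strategy are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class IndicatorPool:
    """Distilled failure patterns stored as (pattern, hint) pairs."""
    patterns: list = field(default_factory=list)

    def retrieve(self, output: str) -> list:
        # Toy retrieval: return hints whose failure pattern occurs
        # in the output (the real system would use learned indicators).
        return [hint for pat, hint in self.patterns if pat in output]

def rectify_or_reject(output, verify, rectify, pool, max_rounds=3):
    """Try to correct one agent's output; prune it if irreparable.

    verify(output) -> bool and rectify(output, hints) -> str stand in
    for the checker and the retrieval-augmented rectifier.
    """
    for _ in range(max_rounds):
        if verify(output):
            return output          # accept (possibly corrected) output
        hints = pool.retrieve(output)
        output = rectify(output, hints)
    return None                    # reject: drop from the information flow

def firewall(outputs, verify, rectify, pool):
    """Screen all agent outputs, with a fallback if everything is pruned."""
    kept = [o for o in (rectify_or_reject(o, verify, rectify, pool)
                        for o in outputs) if o is not None]
    # Fallback (one plausible integrity-preserving choice): if every
    # output was pruned, pass the raw outputs through so the MAS can
    # still continue instead of stalling.
    return kept if kept else list(outputs)
```

Because the loop only intercepts outputs at test time, the agents themselves are never retrained; only the indicator pool and the rectifier's retrieved hints change between tasks.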
Why it matters?
This research is important because it offers a way to build more dependable and flexible multi-agent systems. The tests show a significant improvement in accuracy on challenging math problems, and the system can adjust to different levels of difficulty. This means we can create more reliable AI teams that can handle complex tasks and adapt to new situations without constant human intervention.
Abstract
While Multi-Agent Systems (MAS) excel in complex reasoning, they suffer from the cascading impact of erroneous information generated by individual participants. Current solutions often resort to rigid structural engineering or expensive fine-tuning, limiting their deployability and adaptability. We propose AgentDropoutV2, a test-time rectify-or-reject pruning framework designed to dynamically optimize MAS information flow without retraining. Our approach acts as an active firewall, intercepting agent outputs and employing a retrieval-augmented rectifier to iteratively correct errors based on a failure-driven indicator pool. This mechanism allows for the precise identification of potential errors using distilled failure patterns as prior knowledge. Irreparable outputs are subsequently pruned to prevent error propagation, while a fallback strategy preserves system integrity. Empirical results on extensive math benchmarks show that AgentDropoutV2 significantly boosts the MAS's task performance, achieving an average accuracy gain of 6.3 percentage points. Furthermore, the system exhibits robust generalization and adaptivity, dynamically modulating rectification effort based on task difficulty while leveraging context-aware indicators to resolve a wide spectrum of error patterns. Our code and dataset are released at https://github.com/TonySY2/AgentDropoutV2.