Towards a Science of Scaling Agent Systems

Yubin Kim, Ken Gu, Chanwoo Park, Chunjong Park, Samuel Schmidgall, A. Ali Heydari, Yao Yan, Zhihan Zhang, Yuchen Zhuang, Mark Malhotra, Paul Pu Liang, Hae Won Park, Yuzhe Yang, Xuhai Xu, Yilun Du, Shwetak Patel, Tim Althoff, Daniel McDuff, Xin Liu

2025-12-11

Summary

This paper investigates how well AI agents, built using large language models, perform as you add more of them to work on a problem, and what factors influence that performance.

What's the problem?

Currently, building AI agents involves a lot of trial and error because we don't have a solid understanding of *why* some agent setups work better than others. People rely on guesswork instead of a clear set of rules for designing these systems, making it hard to predict how adding more agents will affect the outcome.

What's the solution?

The researchers ran a large set of controlled experiments with different agent setups, varying how the agents coordinate (working independently, through a central controller, or in a more distributed way) and the types of tasks they tackled, such as financial analysis, web browsing, and planning. They then analyzed the results to find patterns and built a model that predicts how well an agent system will perform based on factors like how efficiently the agents coordinate, how much errors get amplified as they propagate, and how much the agents' work overlaps. They found that simply adding more agents doesn't always help, and that the best way to organize them depends on the task.
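To make the idea concrete, here is a minimal sketch of such a predictive model. The feature names mirror the coordination metrics the paper describes (efficiency, error amplification, redundancy, and a saturation term tied to the single-agent baseline), but the functional form and all weights are illustrative assumptions, not the paper's fitted values.

```python
def predict_gain(coordination_efficiency: float,
                 error_amplification: float,
                 redundancy: float,
                 single_agent_score: float) -> float:
    """Toy linear model of multi-agent gain over a single agent.

    Higher coordination efficiency helps; error amplification and
    redundant (overlapping) work hurt; and gains shrink once the
    single-agent baseline passes a saturation threshold (~45% in
    the paper). Weights below are made up for illustration.
    """
    w_eff, w_err, w_red, w_sat = 0.6, -0.05, -0.3, -0.4  # illustrative weights
    saturation = max(0.0, single_agent_score - 0.45)
    return (w_eff * coordination_efficiency
            + w_err * error_amplification
            + w_red * redundancy
            + w_sat * saturation)
```

Under this toy model, a topology that amplifies errors 17.2x (independent agents) predicts a much smaller gain than one that contains amplification to 4.4x (centralized), matching the qualitative pattern reported in the paper.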

Why it matters?

This work is important because it provides a set of guidelines for building better AI agent systems. Instead of just guessing, developers can use these findings to choose the right agent architecture and coordination strategy for a specific task, potentially leading to much more effective and reliable AI applications. It gives us a way to understand the trade-offs involved in scaling up AI agents and helps predict when adding more agents will actually be beneficial.

Abstract

Agents, language model (LM)-based systems capable of reasoning, planning, and acting, are becoming the dominant paradigm for real-world AI applications. Despite this widespread adoption, the principles that determine their performance remain underexplored, leaving practitioners to rely on heuristics rather than principled design choices. We address this gap by deriving quantitative scaling principles for agent systems. We evaluate this across four diverse benchmarks: Finance-Agent, BrowseComp-Plus, PlanCraft, and Workbench. Using five canonical architectures (Single, Independent, Centralized, Decentralized, Hybrid) instantiated across three LLM families, we perform a controlled evaluation spanning 180 configurations with standardized tools and token budgets. We derive a predictive model using empirical coordination metrics, including efficiency, overhead, error amplification, and redundancy, that achieves cross-validated R^2=0.513. We identify three dominant effects: (1) a tool-coordination trade-off: under fixed computational budgets, tool-heavy tasks suffer disproportionately from multi-agent overhead; (2) capability saturation: coordination yields diminishing or negative returns (beta=-0.408, p<0.001) once single-agent baselines exceed ~45%; (3) topology-dependent error amplification: independent agents amplify errors 17.2x through unchecked propagation, while centralized coordination contains this to 4.4x. Centralized coordination improves performance by 80.9% on parallelizable tasks like financial reasoning, while decentralized coordination excels on dynamic web navigation (+9.2% vs. +0.2%). Yet for sequential reasoning tasks, all multi-agent variants degraded performance by 39-70%. The framework predicts the optimal coordination strategy for 87% of held-out configurations, providing a predictive principle of agentic scaling based on measurable task properties.
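The abstract's findings can be distilled into a simple decision rule: stay single-agent past the saturation threshold or on sequential tasks, centralize on parallelizable tasks, decentralize on dynamic ones. The sketch below is an illustrative reading of those results, not the paper's actual selection procedure; the task-type labels and the 0.45 threshold are assumptions taken from the reported numbers.

```python
def choose_architecture(single_agent_score: float, task_type: str) -> str:
    """Toy decision rule distilled from the reported findings.

    single_agent_score: accuracy of the single-agent baseline, in [0, 1].
    task_type: one of "sequential", "parallelizable", "dynamic"
               (illustrative labels, not the paper's taxonomy).
    """
    if single_agent_score > 0.45:
        return "single"          # capability saturation: coordination stops paying off
    if task_type == "sequential":
        return "single"          # all multi-agent variants degraded these tasks
    if task_type == "parallelizable":
        return "centralized"     # e.g. financial reasoning (+80.9%)
    if task_type == "dynamic":
        return "decentralized"   # e.g. web navigation (+9.2%)
    return "single"              # conservative default
```

For example, a weak baseline on a parallelizable task would route to centralized coordination, while the same task with a strong baseline would stay single-agent.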