Explore to Evolve: Scaling Evolved Aggregation Logic via Proactive Online Exploration for Deep Research Agents
Rui Wang, Ce Zhang, Jun-Yu Ma, Jianshu Zhang, Hongru Wang, Yi Chen, Boyang Xue, Tianqing Fang, Zhisong Zhang, Hongming Zhang, Haitao Mi, Dong Yu, Kam-Fai Wong
2025-10-20
Summary
This paper focuses on building better AI agents that can do real research on the internet, going beyond just finding information to actually understanding and combining it to answer questions.
What's the problem?
Current AI agents are good at *finding* information online, like searching the web for specific facts. However, they struggle with the more complex task of *understanding* multiple sources, putting the information together, and drawing conclusions – essentially, they can't really do in-depth research because they lack the ability to aggregate information effectively.
What's the solution?
The researchers developed a new method called 'Explore to Evolve'. The agent first explores the web to gather information. Then, it automatically builds a program to combine this information and create question-answer pairs that can be verified. They created a large dataset, WebAggregatorQA, using this method, and used it to train new AI models, called WebAggregator, which are built on an existing open-source framework. These models learn to not just find information, but to synthesize it.
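The pipeline described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's implementation: the operation names, the `explore` stand-in, and the URL are hypothetical assumptions, and the real method evolves programs over 12 high-level logical operation types rather than a fixed table.

```python
import random

# Illustrative sketch of the Explore-to-Evolve pipeline (hypothetical names;
# the paper composes/refines programs over 12 high-level logical types).
AGGREGATION_OPS = {
    "count": lambda facts: len(facts),    # how many facts were gathered
    "maximum": lambda facts: max(facts),  # largest reported figure
    "total": lambda facts: sum(facts),    # sum across sources
}

def explore(seed_url):
    """Stand-in for proactive online exploration. A real agent would browse
    pages reachable from seed_url and extract grounded evidence."""
    return [3, 7, 5]  # e.g. numeric figures scraped from three pages

def synthesize_qa(seed_url, rng=random):
    """Select an aggregation operation, apply it to the gathered evidence,
    and emit a QA pair whose answer is verifiable by construction."""
    facts = explore(seed_url)
    op_name = rng.choice(sorted(AGGREGATION_OPS))
    answer = AGGREGATION_OPS[op_name](facts)
    question = (f"Across the pages explored from {seed_url}, "
                f"what is the {op_name} of the reported figures?")
    return {"question": question, "answer": answer, "evidence": facts}

qa = synthesize_qa("https://example.com")
print(qa["question"], "->", qa["answer"])
```

Because the answer is computed by executing the aggregation program over the collected evidence, each generated QA pair is verifiable by construction, which is what lets the dataset scale without manual answer checking.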
Why it matters?
This work is important because it shows that AI agents can be taught to do more than just search. The new WebAggregator models perform as well as, or even better than, leading models like GPT-4.1 and Claude-3.7-sonnet at tasks requiring information aggregation. It also shows that even the best current AI models still struggle with this crucial skill, meaning improving information aggregation is key to building truly capable research assistants.
Abstract
Deep research web agents not only retrieve information from diverse sources such as web environments, files, and multimodal inputs, but more importantly, they need to rigorously analyze and aggregate knowledge for insightful research. However, existing open-source deep research agents predominantly focus on enhancing the information-seeking capabilities of web agents to locate specific information, while overlooking the essential need for information aggregation, which limits their ability to support in-depth research. We propose an Explore to Evolve paradigm to scalably construct verifiable training data for web agents. The paradigm begins with proactive online exploration: an agent sources grounded information by exploring the real web. Using the collected evidence, the agent then self-evolves an aggregation program by selecting, composing, and refining operations from 12 high-level logical types to synthesize a verifiable QA pair. This evolution from high-level guidance to concrete operations allowed us to scalably produce WebAggregatorQA, a dataset of 10K samples across 50K websites and 11 domains. Based on an open-source agent framework, SmolAgents, we collect supervised fine-tuning trajectories to develop a series of foundation models, WebAggregator. WebAggregator-8B matches the performance of GPT-4.1, while the 32B variant surpasses GPT-4.1 by more than 10% on GAIA-text and closely approaches Claude-3.7-sonnet. Moreover, given the limited availability of benchmarks that evaluate web agents' information aggregation abilities, we construct a human-annotated evaluation split of WebAggregatorQA as a challenging test set. On this benchmark, Claude-3.7-sonnet achieves only 28%, and GPT-4.1 scores 25.8%. Even when agents manage to retrieve all references, they still struggle on WebAggregatorQA, highlighting the need to strengthen the information aggregation capabilities of web agent foundations.