
Reinforcement Learning Foundations for Deep Research Systems: A Survey

Wenjun Li, Zhi Chen, Jingru Lin, Hannan Cao, Wei Han, Sheng Liang, Zhi Zhang, Kuicai Dong, Dexun Li, Chen Zhang, Yong Liu

2025-09-09


Summary

This paper is a comprehensive look at how to build and train complex AI systems, often called 'deep research systems,' that can perform tasks requiring multiple steps, like researching a topic online and writing a report. These systems are built from several parts: a planner that decides what to do, a coordinator that manages the workflow, and executors that actually carry out the tasks.

What's the problem?

Currently, building these AI systems is difficult. Training all the parts together is too complicated, so researchers usually focus on training just the 'planner' part. However, simply copying how humans solve these tasks (through supervised fine-tuning, or SFT) has limitations. It can lead to the AI merely mimicking the training data rather than truly learning, and it struggles with complex goals that require balancing different priorities. Also, current methods rely heavily on humans to define exactly *how* the AI should break down a problem, which limits the AI's ability to learn on its own.

What's the solution?

The paper argues that using reinforcement learning (RL) is a better approach. RL allows the AI to learn through trial and error, interacting with its environment (like the internet and files) and receiving rewards for good outcomes. This helps the AI explore different strategies, recover from mistakes, and learn without needing as much human guidance. The paper then organizes and analyzes recent research using RL to build these deep research systems, looking at how to create good training data, improve the RL methods themselves, and build the overall training systems.
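The trial-and-error loop described above can be sketched in miniature. This is a hypothetical toy example, not the survey's actual training setup: the environment, actions ("search", "browse", "answer"), reward rule, and REINFORCE-style update are all simplified stand-ins, assumed purely for illustration.

```python
import random

# Toy sketch of trajectory-level RL for a research agent.
# The agent picks tools step by step; a reward arrives only at the
# end of the trajectory (hypothetical environment and reward).

ACTIONS = ["search", "browse", "answer"]

def run_episode(policy, max_steps=5):
    """Roll out one trajectory: gather evidence with tools, then answer."""
    trajectory = []
    evidence = 0
    for _ in range(max_steps):
        action = random.choices(ACTIONS,
                                weights=[policy[a] for a in ACTIONS])[0]
        trajectory.append(action)
        if action in ("search", "browse"):
            evidence += 1  # tool use accumulates evidence
        if action == "answer":
            break
    # Outcome-level reward: answering after gathering enough evidence.
    reward = 1.0 if (trajectory[-1] == "answer" and evidence >= 2) else 0.0
    return trajectory, reward

def update(policy, trajectory, reward, lr=0.1):
    """REINFORCE-style update: every action in a rewarded trajectory
    gets credit, then the policy is renormalized to a distribution."""
    for action in trajectory:
        policy[action] += lr * reward
    total = sum(policy.values())
    for a in policy:
        policy[a] /= total

random.seed(0)
policy = {a: 1.0 / len(ACTIONS) for a in ACTIONS}
for _ in range(500):
    traj, r = run_episode(policy)
    update(policy, traj, r)

print(policy)  # probabilities shift toward rewarded behavior
```

The key contrast with SFT: no human ever labels *which* actions to take; the policy discovers evidence-gathering before answering purely from the trajectory-level reward signal.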

Why it matters?

This research is important because it provides a roadmap for building more capable and independent AI systems. By focusing on RL, we can move away from relying on human-defined rules and biases, and create AI that can truly learn and adapt to solve complex problems in a more robust and transparent way. It identifies key challenges and offers practical advice for anyone trying to build these kinds of AI agents.

Abstract

Deep research systems, agentic AI that solve complex, multi-step tasks by coordinating reasoning, search across the open web and user files, and tool use, are moving toward hierarchical deployments with a Planner, Coordinator, and Executors. In practice, training entire stacks end-to-end remains impractical, so most work trains a single planner connected to core tools such as search, browsing, and code. While SFT imparts protocol fidelity, it suffers from imitation and exposure biases and underuses environment feedback. Preference alignment methods such as DPO are schema- and proxy-dependent, off-policy, and weak for long-horizon credit assignment and multi-objective trade-offs. A further limitation of SFT and DPO is their reliance on human-defined decision points and subskills through schema design and labeled comparisons. Reinforcement learning aligns with closed-loop, tool-interaction research by optimizing trajectory-level policies, enabling exploration, recovery behaviors, and principled credit assignment, and it reduces dependence on such human priors and rater biases. This survey is, to our knowledge, the first dedicated to the RL foundations of deep research systems. It systematizes work after DeepSeek-R1 along three axes: (i) data synthesis and curation; (ii) RL methods for agentic research covering stability, sample efficiency, long-context handling, reward and credit design, multi-objective optimization, and multimodal integration; and (iii) agentic RL training systems and frameworks. We also cover agent architecture and coordination, as well as evaluation and benchmarks, including recent QA, VQA, long-form synthesis, and domain-grounded, tool-interaction tasks. We distill recurring patterns, surface infrastructure bottlenecks, and offer practical guidance for training robust, transparent deep research agents with RL.