Beyond Pipelines: A Survey of the Paradigm Shift toward Model-Native Agentic AI
Jitao Sang, Jinlin Xiao, Jiarun Han, Jilin Chen, Xiaoyi Chen, Shuyu Wei, Yongjie Sun, Yuhang Wang
2025-10-21
Summary
This paper surveys the emerging field of 'agentic AI' — AI that doesn't just *respond* to instructions, but actually *acts* and figures things out on its own. It traces how the field is moving from building AI agents by wiring separate components together toward training AI models that have these abilities built right into them.
What's the problem?
Traditionally, creating AI agents meant piecing together separate components for planning what to do, using tools, and remembering past experiences. This pipeline approach was complicated and didn't allow the AI to truly learn and adapt on its own. The challenge is to create AI that can independently reason, act, and improve through experience, rather than just following pre-programmed steps.
What's the solution?
The paper highlights how Reinforcement Learning (RL) is key to this shift. RL allows the AI to learn by trying things and getting feedback on the outcome, similar to how humans learn from experience. The authors show how RL, combined with powerful language models, enables AI to internalize the crucial abilities — planning, tool use, and memory — directly within the model itself. They also examine concrete applications, such as Deep Research agents that conduct long-horizon investigations and GUI agents that interact with computer interfaces, showing how these capabilities are evolving.
Why it matters?
This research matters because it points toward a future where AI isn't just a tool we *use*, but a system that can *grow* its intelligence over time. It's a step toward more capable and adaptable AI that can tackle complex problems and interact with the world meaningfully, moving beyond responding to commands toward proactively achieving goals.
Abstract
The rapid evolution of agentic AI marks a new phase in artificial intelligence, where Large Language Models (LLMs) no longer merely respond but act, reason, and adapt. This survey traces the paradigm shift in building agentic AI: from Pipeline-based systems, where planning, tool use, and memory are orchestrated by external logic, to the emerging Model-native paradigm, where these capabilities are internalized within the model's parameters. We first position Reinforcement Learning (RL) as the algorithmic engine enabling this paradigm shift. By reframing learning from imitating static data to outcome-driven exploration, RL underpins a unified solution of LLM + RL + Task across language, vision and embodied domains. Building on this, the survey systematically reviews how each capability -- Planning, Tool use, and Memory -- has evolved from externally scripted modules to end-to-end learned behaviors. Furthermore, it examines how this paradigm shift has reshaped major agent applications, specifically the Deep Research agent emphasizing long-horizon reasoning and the GUI agent emphasizing embodied interaction. We conclude by discussing the continued internalization of agentic capabilities like Multi-agent collaboration and Reflection, alongside the evolving roles of the system and model layers in future agentic AI. Together, these developments outline a coherent trajectory toward model-native agentic AI as an integrated learning and interaction framework, marking the transition from constructing systems that apply intelligence to developing models that grow intelligence through experience.