Externalization in LLM Agents: A Unified Review of Memory, Skills, Protocols and Harness Engineering
Chenyu Zhou, Huacan Chai, Wenteng Chen, Zihan Guo, Rong Shan, Yuanyi Song, Tianyi Xu, Yingxuan Yang, Aofan Yu, Weiming Zhang, Congming Zheng, Jiachen Zhu, Zeyu Zheng, Zhuosheng Zhang, Xingyu Lou, Changwang Zhang, Zhihui Fu, Jun Wang, Weiwen Liu, Jianghao Lin, Weinan Zhang
2026-04-10
Summary
This paper discusses how we're building smarter AI systems, not just by making the core AI 'brain' bigger, but by giving it tools and a better environment to work in.
What's the problem?
Traditionally, making AI better meant changing the model itself, retraining it to adjust its internal parameters (its 'weights'). But this is becoming harder and more expensive. Early AI systems also tried to handle everything internally, which meant they struggled to remember information over long periods, carry out multi-step tasks, or interact smoothly with people. Essentially, asking the model to do too much at once makes it unreliable.
What's the solution?
The paper argues that we're now 'externalizing' capabilities. Instead of the AI trying to do everything itself, we give it external supports: memory banks to store information, pre-built 'skills' to handle specific tasks, and clear protocols for how to interact. Think of it like giving a student notes, a calculator, and a study guide instead of expecting them to memorize everything. The paper breaks down how these external supports, memory, skills, and interaction protocols, work together, and how a 'harness' coordinates them to keep the whole system running reliably. It also traces how this approach has evolved over time, moving from a focus solely on the AI model to building a complete system around it.
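To make the division of labor concrete, here is a minimal, hypothetical sketch of the idea: the model stays fixed while memory, skills, and a simple request protocol live outside it, coordinated by a harness. All class and function names below are illustrative inventions for this summary, not an API from the paper.

```python
class Memory:
    """Externalized state: facts persist across turns instead of
    living inside the model's limited context window."""
    def __init__(self):
        self.notes = []

    def store(self, fact):
        self.notes.append(fact)

    def recall(self, query):
        # Naive keyword match stands in for a real retrieval system.
        return [n for n in self.notes if query in n]


def skill_add(args):
    """Externalized procedure: a reusable 'calculator' skill the
    model can invoke rather than doing arithmetic internally."""
    return sum(args)


class Harness:
    """Coordination layer: routes a structured request (the
    'protocol') to memory or a skill, so the model never has to
    perform these steps itself."""
    def __init__(self):
        self.memory = Memory()
        self.skills = {"add": skill_add}

    def handle(self, request):
        # Here the protocol is just a dict with an 'action' field.
        action = request["action"]
        if action == "remember":
            self.memory.store(request["fact"])
            return "stored"
        if action == "recall":
            return self.memory.recall(request["query"])
        if action in self.skills:
            return self.skills[action](request["args"])
        raise ValueError(f"unknown action: {action}")


harness = Harness()
harness.handle({"action": "remember", "fact": "deadline is Friday"})
print(harness.handle({"action": "recall", "query": "deadline"}))
print(harness.handle({"action": "add", "args": [2, 3, 4]}))
```

The point of the sketch is the shape, not the details: each capability the model would otherwise have to "remember" or "compute" internally is a separate, inspectable module, and the harness is the only place where they meet.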
Why it matters?
This shift is important because it means we can build more capable and reliable AI agents even without constantly needing to create bigger and more complex AI models. It provides a framework for understanding how to best build these systems, and points to future directions like AI systems that can automatically improve their own tools and share resources with other AI agents. Ultimately, it suggests that the future of AI isn't just about better models, but about better systems built *around* those models.
Abstract
Large language model (LLM) agents are increasingly built less by changing model weights than by reorganizing the runtime around them. Capabilities that earlier systems expected the model to recover internally are now externalized into memory stores, reusable skills, interaction protocols, and the surrounding harness that makes these modules reliable in practice. This paper reviews that shift through the lens of externalization. Drawing on the idea of cognitive artifacts, we argue that agent infrastructure matters not merely because it adds auxiliary components, but because it transforms hard cognitive burdens into forms that the model can solve more reliably. Under this view, memory externalizes state across time, skills externalize procedural expertise, protocols externalize interaction structure, and harness engineering serves as the unification layer that coordinates them into governed execution. We trace a historical progression from weights to context to harness, analyze memory, skills, and protocols as three distinct but coupled forms of externalization, and examine how they interact inside a larger agent system. We further discuss the trade-off between parametric and externalized capability, identify emerging directions such as self-evolving harnesses and shared agent infrastructure, and discuss open challenges in evaluation, governance, and the long-term co-evolution of models and external infrastructure. The result is a systems-level framework for explaining why practical agent progress increasingly depends not only on stronger models, but on better external cognitive infrastructure.