Symbolic Learning Enables Self-Evolving Agents
Wangchunshu Zhou, Yixin Ou, Shengwei Ding, Long Li, Jialong Wu, Tiannan Wang, Jiamin Chen, Shuai Wang, Xiaohua Xu, Ningyu Zhang, Huajun Chen, Yuchen Eleanor Jiang
2024-06-27

Summary
This paper introduces agent symbolic learning, a framework that allows language agents (AI systems that understand and generate text) to improve themselves automatically after they are created and deployed. The approach helps these agents learn from their real-world experiences, which the authors frame as a step toward artificial general intelligence (AGI).
What's the problem?
Currently, most language agents rely heavily on human experts to manually tune their prompts, tools, and pipelines. This model-centric (or engineering-centric) approach limits the agents' ability to learn and adapt on their own, making it difficult for them to handle complex tasks or evolve after deployment. Without the ability to learn from data autonomously, these agents cannot reach their full potential.
What's the solution?
The authors propose a systematic framework called agent symbolic learning that treats a language agent as a symbolic network whose components (prompts, tools, and the way they are stacked together) can be optimized jointly. The framework mimics the two fundamental algorithms used to train neural networks, back-propagation and gradient descent, but instead of numeric weights it operates on natural-language counterparts of weights, losses, and gradients: the "loss" is a textual critique of the agent's output, and the "gradient" is feedback on how each component should change. Because agents update their own components based on their interactions and outcomes, they become "self-evolving" and can adapt to new situations over time; a sketch of this training loop follows below.
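To make the analogy concrete, here is a minimal sketch of such a loop in Python. It assumes a generic `llm(prompt) -> str` completion call, and every name in it (`SymbolicNode`, `language_loss`, `textual_gradient`, `apply_update`, `train_step`) is hypothetical, illustrating the general idea rather than the authors' released implementation.

```python
def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (e.g., an API or local model)."""
    raise NotImplementedError

class SymbolicNode:
    """One step of the agent pipeline; its 'weight' is a natural-language prompt."""
    def __init__(self, name: str, prompt: str):
        self.name = name
        self.prompt = prompt

    def forward(self, x: str) -> str:
        return llm(f"{self.prompt}\n\nInput:\n{x}")

def language_loss(output: str, task: str) -> str:
    """'Loss': a textual critique of the final output, produced by an LLM judge."""
    return llm(
        f"Task: {task}\nAgent output: {output}\n"
        "Critique this output: what is wrong or missing, and why?"
    )

def textual_gradient(node: SymbolicNode, downstream_feedback: str) -> str:
    """'Gradient': downstream feedback attributed to this specific node."""
    return llm(
        f"A pipeline step uses this prompt:\n{node.prompt}\n"
        f"Downstream feedback:\n{downstream_feedback}\n"
        "What should change in THIS prompt to address the feedback?"
    )

def apply_update(node: SymbolicNode, gradient: str) -> None:
    """'Gradient descent': rewrite the prompt according to the feedback."""
    node.prompt = llm(
        f"Current prompt:\n{node.prompt}\nSuggested change:\n{gradient}\n"
        "Return an improved prompt that incorporates the change."
    )

def train_step(pipeline: list[SymbolicNode], task: str) -> None:
    # Forward pass: run the task through every node in order.
    x = task
    for node in pipeline:
        x = node.forward(x)
    # 'Loss': a textual critique of the final output.
    feedback = language_loss(x, task)
    # Backward pass: turn the critique into per-node textual 'gradients', last
    # node first, and 'descend' by rewriting each node's prompt accordingly.
    for node in reversed(pipeline):
        grad = textual_gradient(node, feedback)
        apply_update(node, grad)
        feedback = grad  # this node's critique becomes upstream feedback
```

A real implementation would also cache each node's input and output during the forward pass so the backward pass can attribute feedback more precisely; the sketch keeps only the final critique for brevity.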
Why it matters?
This research matters because it represents a shift from constant human intervention toward an autonomous learning process for AI systems. By enabling language agents to learn from their experiences and improve themselves, the framework could lead to more capable agents that perform a wider range of tasks effectively. The authors argue this shift is a crucial step toward AGI, where machines can think and learn like humans.
Abstract
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents", which are complex large language model (LLM) pipelines involving both prompting techniques and tool usage methods. While language agents have demonstrated impressive capabilities for many real-world tasks, a fundamental limitation of current language agent research is that it is model-centric, or engineering-centric. That is to say, progress on the prompts, tools, and pipelines of language agents requires substantial manual engineering effort from human experts rather than automatic learning from data. We believe the transition from model-centric, or engineering-centric, to data-centric, i.e., the ability of language agents to autonomously learn and evolve in environments, is the key for them to possibly achieve AGI. In this work, we introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own in a data-centric way using symbolic optimizers. Specifically, we consider agents as symbolic networks where learnable weights are defined by prompts, tools, and the way they are stacked together. Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning: back-propagation and gradient descent. Instead of dealing with numeric weights, agent symbolic learning works with natural-language simulacrums of weights, losses, and gradients. We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks and show that agent symbolic learning enables language agents to update themselves after being created and deployed in the wild, resulting in "self-evolving agents".
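The abstract notes that the learnable "weights" include not just prompts and tools but also the way they are stacked together. Below is a hypothetical sketch, reusing the `llm` and `SymbolicNode` helpers from the earlier sketch, of how the pipeline topology itself could be edited from feedback; `edit_pipeline` and its JSON reply format are illustrative assumptions, not the paper's API.

```python
import json

def edit_pipeline(pipeline: list[SymbolicNode], feedback: str) -> list[SymbolicNode]:
    """Ask an LLM whether to insert or delete a step, then apply the edit.

    Assumes the model replies with valid JSON; a real implementation would
    validate the reply and retry on malformed output.
    """
    plan = llm(
        "Agent pipeline steps, in order: "
        + json.dumps([node.name for node in pipeline]) + "\n"
        + f"Overall feedback on the agent:\n{feedback}\n"
        + 'Reply with JSON only: {"op": "keep" | "insert" | "delete", '
        + '"index": int, "name": str, "prompt": str}'
    )
    edit = json.loads(plan)
    if edit["op"] == "insert":
        pipeline.insert(edit["index"], SymbolicNode(edit["name"], edit["prompt"]))
    elif edit["op"] == "delete":
        pipeline.pop(edit["index"])
    return pipeline
```

Treating structure edits as just another symbolic update is what distinguishes this framing from prompt optimization alone: the same feedback signal can rewrite a node's prompt or restructure the network around it.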