XSkill: Continual Learning from Experience and Skills in Multimodal Agents
Guanyu Jiang, Zhaochen Su, Xiaoye Qu, Yi R. Fung
2026-03-13
Summary
This paper introduces a new way to help AI agents that use multiple tools, such as web search or calculators, get better at solving complex problems over time without needing to be constantly retrained.
What's the problem?
Current AI agents that use tools often aren't very efficient: they might pick the wrong tools or struggle to figure out the best order to use them in. A bigger issue is that these agents usually need to be retrained every time they encounter a new situation, instead of learning from what they've already done. They need a way to remember and reuse helpful strategies.
What's the solution?
The researchers developed a system called XSkill. It works by creating two types of 'knowledge': 'experiences' which are like quick tips for using specific tools, and 'skills' which are more like step-by-step plans for tackling bigger tasks. XSkill learns these by watching the agent try different approaches to a problem, summarizing what worked well, and even critiquing its own past attempts. Importantly, it uses what the agent *sees* (visual information) to understand and remember these strategies, and then uses that knowledge to improve its performance on new, similar problems. It's a continuous cycle of learning and improvement.
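To make the dual-stream idea concrete, here is a minimal sketch of a knowledge store that accumulates experiences and skills from rollouts and retrieves them by context overlap. All class names, fields, and the scoring heuristic are illustrative assumptions, not the paper's actual implementation (which grounds extraction and retrieval in visual observations rather than word overlap):

```python
from dataclasses import dataclass

# Hypothetical sketch of a dual-stream knowledge store in the spirit of
# XSkill. Names and logic are illustrative, not from the paper.

@dataclass
class KnowledgeItem:
    context: str   # task/visual context the item was distilled from
    guidance: str  # reusable tip (experience) or plan (skill)
    uses: int = 0  # usage history, fed back into accumulation

class DualStreamStore:
    def __init__(self):
        self.experiences = []  # action-level tips for tool selection
        self.skills = []       # task-level plans for orchestration

    def accumulate(self, rollouts):
        """Distill knowledge from multi-path rollouts (keep successes)."""
        for r in rollouts:
            if r["success"]:
                self.experiences.append(
                    KnowledgeItem(r["context"], f"prefer tool {r['tool']}"))
                self.skills.append(
                    KnowledgeItem(r["context"], " -> ".join(r["steps"])))

    def retrieve(self, context, top_k=1):
        """Return the best-matching items and record their use."""
        def score(item):
            a, b = set(item.context.split()), set(context.split())
            return len(a & b) / max(len(a | b), 1)  # Jaccard overlap
        ranked = sorted(self.experiences + self.skills,
                        key=score, reverse=True)
        hits = ranked[:top_k]
        for h in hits:
            h.uses += 1  # close the continual-learning loop
        return hits

store = DualStreamStore()
store.accumulate([{"success": True,
                   "context": "chart with sales figures",
                   "tool": "calculator",
                   "steps": ["read chart", "sum values"]}])
best = store.retrieve("chart showing sales")[0]
print(best.guidance)  # prefer tool calculator
```

The key design point mirrored here is that retrieval itself generates usage history, so the store can later re-weight or prune knowledge based on what actually got reused.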
Why it matters?
This research is important because it moves AI agents closer to being truly helpful and adaptable. By allowing agents to learn from their past experiences without constant retraining, we can create systems that are more efficient, flexible, and capable of handling a wider range of real-world challenges. The ability to generalize to new situations without specific training is a key step towards more intelligent AI.
Abstract
Multimodal agents can now tackle complex reasoning tasks with diverse tools, yet they still suffer from inefficient tool use and inflexible orchestration in open-ended settings. A central challenge is enabling such agents to continually improve without parameter updates by learning from past trajectories. We identify two complementary forms of reusable knowledge essential for this goal: experiences, providing concise action-level guidance for tool selection and decision making, and skills, providing structured task-level guidance for planning and tool use. To this end, we propose XSkill, a dual-stream framework for continual learning from experience and skills in multimodal agents. XSkill grounds both knowledge extraction and retrieval in visual observations. During accumulation, XSkill distills and consolidates experiences and skills from multi-path rollouts via visually grounded summarization and cross-rollout critique. During inference, it retrieves and adapts this knowledge to the current visual context and feeds usage history back into accumulation to form a continual learning loop. Evaluated on five benchmarks across diverse domains with four backbone models, XSkill consistently and substantially outperforms both tool-only and learning-based baselines. Further analysis reveals that the two knowledge streams play complementary roles in influencing the reasoning behaviors of agents and show superior zero-shot generalization.