SkillClaw: Let Skills Evolve Collectively with Agentic Evolver

Ziyu Ma, Shidong Yang, Yuxiang Ji, Xucong Wang, Yong Wang, Yiming Hu, Tongwen Huang, Xiangxiang Chu

2026-04-10

Summary

This paper introduces SkillClaw, a system designed to help AI agents, specifically those built using large language models, get better at tasks over time by learning from the experiences of *all* their users, not just one.

What's the problem?

Currently, AI agents like OpenClaw have a set of pre-defined skills that don't really change much after they're released. This means that if many users find the same better way to do something, or keep running into the same problem, the agent doesn't learn from it. Each user essentially starts from scratch, rediscovering solutions and failures. The system struggles to combine the knowledge gained from different users' interactions to improve its skills.

What's the solution?

SkillClaw tackles this by constantly watching how users interact with the agent. It records these interactions, identifies patterns in successful and unsuccessful attempts, and then automatically updates the agent's skills. It can either refine existing skills or create entirely new ones based on what it learns. These improved skills are then shared with all users, so everyone benefits from the collective experience.
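The loop described above — aggregate trajectories across users, find recurring patterns, then refine or extend skills in a shared repository — can be sketched in miniature. This is an illustrative toy, not the paper's implementation; all names (`Trajectory`, `SkillRepository`, `evolve`, `min_support`) are invented for this example.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Trajectory:
    """One recorded user interaction: skill invoked, observed pattern, outcome."""
    skill: str
    pattern: str
    success: bool

@dataclass
class SkillRepository:
    """Shared skill store, synchronized across all users."""
    skills: dict = field(default_factory=dict)  # skill name -> description

    def refine(self, name: str, note: str) -> None:
        self.skills[name] = self.skills.get(name, "") + f" [refined: {note}]"

    def extend(self, name: str, note: str) -> None:
        self.skills[name] = f"[new skill: {note}]"

def evolve(repo: SkillRepository, trajectories: list, min_support: int = 2) -> SkillRepository:
    """Toy evolver: count how often each (skill, pattern, outcome) recurs
    across users; recurring failures refine the existing skill, while
    recurring successes with no matching skill become a new one."""
    counts = Counter((t.skill, t.pattern, t.success) for t in trajectories)
    for (skill, pattern, success), n in counts.items():
        if n < min_support:  # ignore one-off signals
            continue
        if not success and skill in repo.skills:
            repo.refine(skill, f"avoid '{pattern}'")
        elif success and skill not in repo.skills:
            repo.extend(skill, f"use '{pattern}'")
    return repo
```

For example, if three users hit the same failure with an existing `web_search` skill and two users succeed with an unregistered `pdf_extract` workflow, the evolver refines the former and adds the latter, so every user's next session starts from the updated repository.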

Why it matters?

This is important because it allows AI agents to continuously improve and become more effective. Instead of being limited by their initial programming, they can learn and adapt based on real-world use. This leads to better performance, less frustration for users, and a more powerful AI system overall, as demonstrated by improvements to the Qwen3-Max model on a challenging benchmark.

Abstract

Large language model (LLM) agents such as OpenClaw rely on reusable skills to perform complex tasks, yet these skills remain largely static after deployment. As a result, similar workflows, tool usage patterns, and failure modes are repeatedly rediscovered across users, preventing the system from improving with experience. While interactions from different users provide complementary signals about when a skill works or fails, existing systems lack a mechanism to convert such heterogeneous experiences into reliable skill updates. To address these issues, we present SkillClaw, a framework for collective skill evolution in multi-user agent ecosystems, which treats cross-user and over-time interactions as the primary signal for improving skills. SkillClaw continuously aggregates trajectories generated during use and processes them with an autonomous evolver, which identifies recurring behavioral patterns and translates them into updates to the skill set by refining existing skills or extending them with new capabilities. The resulting skills are maintained in a shared repository and synchronized across users, allowing improvements discovered in one context to propagate system-wide while requiring no additional effort from users. By integrating multi-user experience into ongoing skill updates, SkillClaw enables cross-user knowledge transfer and cumulative capability improvement, and experiments on WildClawBench show that, with limited interaction and feedback, it significantly improves the performance of Qwen3-Max in real-world agent scenarios.