Evolving Programmatic Skill Networks

Haochen Shi, Xingdi Yuan, Bang Liu

2026-01-08

Summary

This research focuses on teaching an AI agent to learn and improve skills over time in a complex, open-ended environment, like a video game. The goal is for the agent to not just learn individual tasks, but to build up a reusable library of skills it can combine to solve new problems.

What's the problem?

Typically, AI agents struggle to continually learn new skills without forgetting or messing up previously learned ones. When faced with a constantly changing environment and new tasks, it's hard for them to build a robust and adaptable skillset. Existing methods often require retraining from scratch or have trouble effectively reusing what they've already learned, leading to inefficiency and limited performance in dynamic situations.

What's the solution?

The researchers developed a system called the Programmatic Skill Network, or PSN. This system uses large language models to create skills that are essentially small computer programs. These programs can be combined and modified. PSN has three main parts: first, it can pinpoint exactly *where* a skill is failing when it doesn't work as expected. Second, it carefully updates skills, protecting those that are reliable while still allowing uncertain skills to improve. Finally, it reorganizes the network of skills to keep it efficient and prevent it from becoming overly complex, always with a safety check to ensure changes don't break things. The way PSN learns even mirrors how neural networks are trained, suggesting a fundamental connection between these approaches.
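To make the second and third mechanisms concrete, here is a minimal sketch of maturity-aware update gating and rollback-validated refactoring. All class names, thresholds, and data structures below are illustrative assumptions, not the paper's actual implementation: a skill's "maturity" is approximated as its empirical success rate, mature skills reject edits, and a refactoring is undone if a validation check fails.

```python
# Hypothetical sketch of PSN-style update gating and rollback-validated
# refactoring. Names and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    program: str        # the skill's executable program (source text)
    successes: int = 0
    attempts: int = 0

    @property
    def maturity(self) -> float:
        # Empirical success rate; untested skills count as fully uncertain.
        return self.successes / self.attempts if self.attempts else 0.0

class SkillNetwork:
    def __init__(self, gate_threshold: float = 0.8):
        self.skills: dict[str, Skill] = {}
        self.gate_threshold = gate_threshold

    def propose_update(self, name: str, new_program: str) -> bool:
        """Maturity-aware gating: reliable skills resist edits,
        uncertain skills stay plastic."""
        skill = self.skills[name]
        if skill.maturity >= self.gate_threshold:
            return False                 # stable skill: reject the update
        skill.program = new_program      # uncertain skill: accept the update
        skill.successes = skill.attempts = 0  # reset the evidence counters
        return True

    def refactor(self, transform, validate) -> bool:
        """Apply a structural refactoring; roll back if validation fails."""
        snapshot = {k: Skill(s.name, s.program, s.successes, s.attempts)
                    for k, s in self.skills.items()}
        transform(self.skills)
        if not validate(self.skills):
            self.skills = snapshot       # safety check failed: restore network
            return False
        return True
```

The key design idea this sketch captures is that stability and plasticity are decided per skill from accumulated evidence, and that network-level reorganization is always reversible.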

Why it matters?

This work is important because it represents a step towards creating AI agents that can truly learn and adapt like humans. By allowing agents to build and refine a library of reusable skills, we can create systems that are more efficient, robust, and capable of handling unpredictable real-world scenarios. This has implications for robotics, game playing, and potentially many other fields where adaptable AI is needed.

Abstract

We study continual skill acquisition in open-ended embodied environments where an agent must construct, refine, and reuse an expanding library of executable skills. We introduce the Programmatic Skill Network (PSN), a framework in which skills are executable symbolic programs forming a compositional network that evolves through experience. PSN defines three core mechanisms instantiated via large language models: (1) REFLECT for structured fault localization over skill compositions, (2) progressive optimization with maturity-aware update gating that stabilizes reliable skills while maintaining plasticity for uncertain ones, and (3) canonical structural refactoring under rollback validation that maintains network compactness. We further show that PSN's learning dynamics exhibit structural parallels to neural network training. Experiments on MineDojo and Crafter demonstrate robust skill reuse, rapid adaptation, and strong generalization across open-ended task distributions. (We plan to open-source the code.)
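The first mechanism, structured fault localization over skill compositions, can be sketched as follows. This is a hedged illustration, not the paper's REFLECT implementation: a composed skill is modeled as an ordered list of sub-skills, each with a hypothetical `run` function and a `check` postcondition, and localization means reporting the first sub-skill whose postcondition fails.

```python
# Hypothetical sketch of fault localization over a skill composition.
# The dict keys ("name", "run", "check") are assumptions for illustration.
def localize_fault(composition, state):
    """Execute sub-skills in order; return (index, name) of the first
    sub-skill whose postcondition fails, or None if all succeed."""
    for i, skill in enumerate(composition):
        state = skill["run"](state)       # execute the sub-skill
        if not skill["check"](state):     # postcondition check
            return i, skill["name"]       # pinpoint where the failure occurred
    return None
```

Localizing the failure to a specific sub-skill is what lets the system repair one program in the network instead of regenerating the whole composition.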