ClawGym: A Scalable Framework for Building Effective Claw Agents

Fei Bai, Huatong Song, Shuang Sun, Daixuan Cheng, Yike Yang, Chuan Hao, Renyuan Li, Feng Chang, Yuan Wei, Ran Tao, Bryan Dai, Jian Yang, Wayne Xin Zhao

2026-04-30

Summary

This paper introduces ClawGym, a new system designed to make it easier to build and test 'personal agents' – AI programs that help people with tasks on their computers using everyday tools and files.

What's the problem?

Currently, creating these kinds of helpful AI agents is difficult to scale up because there's no good way to automatically create lots of realistic practice scenarios for them to learn from, and no standard way to reliably test how well they're doing. It's hard to get enough good data to train them and ensure they work correctly in different situations.

What's the solution?

The researchers built ClawGym, which has three main parts. First, they created ClawGym-SynData, a dataset of 13,500 filtered simulated tasks, each pairing a 'persona' (a simulated user and their goals) with a realistic mock workspace of files to work with. Next, they trained AI models, called ClawGym-Agents, using this data. Finally, they created ClawGym-Bench, a benchmark of 200 tasks that evaluates agents using a mix of automated checks and human review to ensure the tests are fair.
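
The task structure described above can be pictured as a small record plus a hybrid verifier. This is a minimal Python sketch under stated assumptions: the names (SynTask, check_file_contains, verify_task) and the record layout are illustrative guesses, not ClawGym's actual data format or API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a persona-driven task with a mock workspace
# and hybrid verification (rule-based checks plus an optional judge).
# All names here are illustrative, not from the ClawGym release.

@dataclass
class SynTask:
    persona: str            # simulated user and their goal
    instruction: str        # natural-language task intent
    workspace: dict         # mock files: path -> contents
    checks: list            # programmatic verifier functions

def check_file_contains(path, needle):
    """Rule-based check: a workspace file must contain a substring."""
    def _check(workspace):
        return needle in workspace.get(path, "")
    return _check

def verify_task(task, final_workspace, llm_judge=None):
    """Hybrid verification: every rule-based check must pass, and an
    optional LLM judge can additionally veto borderline cases."""
    if not all(check(final_workspace) for check in task.checks):
        return False
    if llm_judge is not None:
        return llm_judge(task.instruction, final_workspace)
    return True

task = SynTask(
    persona="freelance accountant organizing invoices",
    instruction="Rename report.txt to report_2024.txt and add a header",
    workspace={"report.txt": "Q1 totals..."},
    checks=[check_file_contains("report_2024.txt", "Q1 totals")],
)

# Final workspace state produced by a (hypothetical) agent run.
after = {"report_2024.txt": "# Annual Report\nQ1 totals..."}
print(verify_task(task, after))  # True
```

Keeping the checks as plain functions over the final workspace state is what makes verification automatable at dataset scale; the LLM/human judge only needs to cover what rules cannot express.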

Why it matters?

This work is important because it provides a foundation for building more powerful and reliable personal AI assistants. By automating the creation of training data and providing a standardized way to evaluate agents, ClawGym makes it easier for researchers and developers to create AI that can genuinely help people with their everyday computer tasks.

Abstract

Claw-style environments support multi-step workflows over local files, tools, and persistent workspace states. However, scalable development around these environments remains constrained by the absence of a systematic framework, especially one for synthesizing verifiable training data and integrating it with agent training and diagnostic evaluation. To address this challenge, we present ClawGym, a scalable framework that supports the full lifecycle of Claw-style personal agent development. Concretely, we construct ClawGym-SynData, a diverse dataset of 13.5K filtered tasks synthesized from persona-driven intents and skill-grounded operations, paired with realistic mock workspaces and hybrid verification mechanisms. We then train a family of capable Claw-style models, termed ClawGym-Agents, through supervised fine-tuning on black-box rollout trajectories, and further explore reinforcement learning via a lightweight pipeline that parallelizes rollouts across per-task sandboxes. To support reliable evaluation, we further construct ClawGym-Bench, a benchmark of 200 instances calibrated through automated filtering and human-LLM review. Relevant resources will soon be released at https://github.com/ClawGym.
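
The per-task sandbox idea in the RL pipeline can be sketched with the standard library: each rollout gets an isolated temporary workspace so concurrent runs cannot interfere, and the reward is computed from that workspace's final state. This is a minimal sketch under stated assumptions; run_rollout is a hypothetical stand-in, not ClawGym's pipeline, and a real rollout would drive a policy model rather than write a file directly.

```python
import concurrent.futures
import os
import tempfile

# Hypothetical sketch: parallel rollouts, one isolated sandbox per task.

def run_rollout(task_id):
    """Run one rollout in its own scratch workspace and score it."""
    with tempfile.TemporaryDirectory(prefix=f"task{task_id}_") as sandbox:
        out_path = os.path.join(sandbox, "result.txt")
        # Stand-in for the agent acting on its workspace.
        with open(out_path, "w") as f:
            f.write(f"rollout for task {task_id}")
        # Reward derived from the sandbox's final state.
        reward = 1.0 if os.path.exists(out_path) else 0.0
    return task_id, reward  # sandbox is cleaned up on exit

# Rollouts are independent, so they parallelize trivially.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_rollout, range(8)))

print(results)
```

Because each sandbox is created and destroyed per task, failed or partial rollouts leave no state behind, which keeps large batches of RL rollouts reproducible.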