
EvoCUA: Evolving Computer Use Agents via Learning from Scalable Synthetic Experience

Taofeng Xue, Chong Peng, Mianqiu Huang, Linsen Guo, Tiancheng Han, Haozhe Wang, Jianing Wang, Xiaocheng Zhang, Xin Yang, Dengchang Zhao, Jinrui Ding, Xiandi Ma, Yuchen Xie, Peng Pei, Xunliang Cai, Xipeng Qiu

2026-01-23


Summary

This paper introduces a new AI agent, EvoCUA, designed to use computers like a human. It's a big step forward in creating AI that can perform complex tasks on a computer without needing constant human guidance.

What's the problem?

Current AI agents that try to use computers learn by watching examples, like a student copying notes. This works okay for simple tasks, but it falls apart when things get complicated and require many steps. The problem is that it's hard to collect enough examples to cover everything the AI might encounter, and these agents don't really *understand* why things work; they just imitate.

What's the solution?

The researchers created EvoCUA, which doesn't just copy examples. Instead, it learns by *doing*. It creates its own tasks, checks whether it succeeded, and then uses its mistakes to improve. They built a system that lets EvoCUA practice these tasks tens of thousands of times at once in sandboxed computer environments, along with a way for it to analyze its failures and figure out how to do better next time. This creates a cycle of learning and improvement, a bit like evolution.
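To make that cycle concrete, here is a minimal Python sketch of a generate-attempt-verify-learn loop of this kind. Every name and the toy task are hypothetical stand-ins invented for illustration; they are not the paper's actual components, which train a large model inside real desktop sandboxes.

```python
import random

# A minimal, hypothetical sketch of a generate -> attempt -> verify -> learn loop.
# Every name here (synthesize_task, run_in_sandbox, ...) is a toy stand-in for
# illustration, not the paper's actual components.

def synthesize_task():
    """Stand-in for task synthesis: a task plus an executable validator."""
    target = random.randint(0, 9)
    task = {"goal": f"produce the number {target}", "target": target}
    validator = lambda trajectory: trajectory["result"] == target
    return task, validator

def run_in_sandbox(policy, task):
    """Stand-in for a sandboxed rollout of the current policy on one task."""
    return {"task": task, "result": policy(task)}

def analyze_and_correct(task, trajectory):
    """Stand-in for error analysis: turn a failed attempt into a corrected example."""
    return {"task": task, "corrected_result": task["target"]}

def update_policy(policy, successes, corrections):
    """Stand-in for policy optimization; a real system would fine-tune the model here."""
    return policy

def evolution_cycle(policy, rounds=3, tasks_per_round=100):
    for _ in range(rounds):
        successes, corrections = [], []
        for _ in range(tasks_per_round):
            task, validator = synthesize_task()        # 1. create a task + checker
            trajectory = run_in_sandbox(policy, task)  # 2. attempt it
            if validator(trajectory):                  # 3. verify automatically
                successes.append(trajectory)
            else:                                      # 4. learn from the mistake
                corrections.append(analyze_and_correct(task, trajectory))
        policy = update_policy(policy, successes, corrections)  # 5. improve, repeat
    return policy

if __name__ == "__main__":
    toy_policy = lambda task: random.randint(0, 9)     # a policy that guesses at random
    evolution_cycle(toy_policy)
```

In the real system, the "sandbox" is a full desktop environment and the policy update is a training step on the collected trajectories rather than a no-op, but the overall shape of the loop is the same.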

Why it matters?

EvoCUA is a significant improvement over previous AI agents, reaching a 56.7% success rate on the OSWorld benchmark of real computer tasks, well ahead of the previous open-source best. Importantly, the same training recipe works with different types of AI 'brains' (foundation models) of varying sizes, meaning it's a scalable solution. This research shows a promising path towards creating AI that can truly master computer use, which could automate many tasks and unlock new possibilities.

Abstract

The development of native computer-use agents (CUA) represents a significant leap in multimodal AI. However, their potential is currently bottlenecked by the constraints of static data scaling. Existing paradigms relying primarily on passive imitation of static datasets struggle to capture the intricate causal dynamics inherent in long-horizon computer tasks. In this work, we introduce EvoCUA, a native computer use agentic model. Unlike static imitation, EvoCUA integrates data generation and policy optimization into a self-sustaining evolutionary cycle. To mitigate data scarcity, we develop a verifiable synthesis engine that autonomously generates diverse tasks coupled with executable validators. To enable large-scale experience acquisition, we design a scalable infrastructure orchestrating tens of thousands of asynchronous sandbox rollouts. Building on these massive trajectories, we propose an iterative evolving learning strategy to efficiently internalize this experience. This mechanism dynamically regulates policy updates by identifying capability boundaries -- reinforcing successful routines while transforming failure trajectories into rich supervision through error analysis and self-correction. Empirical evaluations on the OSWorld benchmark demonstrate that EvoCUA achieves a success rate of 56.7%, establishing a new open-source state-of-the-art. Notably, EvoCUA significantly outperforms the previous best open-source model, OpenCUA-72B (45.0%), and surpasses leading closed-weights models such as UI-TARS-2 (53.1%). Crucially, our results underscore the generalizability of this approach: the evolving paradigm driven by learning from experience yields consistent performance gains across foundation models of varying scales, establishing a robust and scalable path for advancing native agent capabilities.
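On the infrastructure side, the sketch below shows one common way to schedule a very large batch of asynchronous rollouts while capping concurrency with a semaphore, in the spirit of the sandbox orchestration the abstract describes. The episode function and the concurrency limit are assumptions made up for illustration; the paper's system manages actual sandboxed desktop environments.

```python
import asyncio
import random

# A hypothetical sketch of scheduling many rollouts concurrently while capping
# how many run at the same time. run_episode is a toy stand-in, not the paper's
# sandbox interface.

MAX_CONCURRENT = 64  # assumed cap on simultaneously running sandboxes

async def run_episode(task):
    """Toy stand-in: pretend the agent works on the task for a short while."""
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return {"task": task, "success": random.random() < 0.5}

async def rollout(task, semaphore):
    async with semaphore:  # throttle how many "sandboxes" are active at once
        return await run_episode(task)

async def collect_experience(tasks):
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)
    # Every rollout is scheduled immediately; the semaphore bounds real
    # concurrency, so a huge task list does not exhaust machine resources.
    return await asyncio.gather(*(rollout(t, semaphore) for t in tasks))

if __name__ == "__main__":
    results = asyncio.run(collect_experience([f"task-{i}" for i in range(1000)]))
    print(sum(r["success"] for r in results), "of", len(results), "rollouts succeeded")
```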