Synthetic Computers at Scale for Long-Horizon Productivity Simulation

Tao Ge, Baolin Peng, Hao Cheng, Jianfeng Gao

2026-05-01

Summary

This paper introduces a scalable way to create realistic simulated computer environments, and then uses these environments to train AI agents on complex, long-horizon tasks such as office work.

What's the problem?

Training AI to be truly productive, like a human worker, is hard because it requires understanding a lot of context. Real people work within specific computer setups – with folders, documents, and programs – that shape how they do their jobs. It’s difficult to recreate this level of realistic work context for AI training, and existing methods don't scale well to create many different work environments.

What's the solution?

The researchers developed a method called 'Synthetic Computers at Scale'. They automatically generate many different simulated computer environments, complete with realistic folder structures and content-rich files like documents, spreadsheets, and presentations. Then, one AI agent creates work goals for a 'user' of that computer, and another AI agent acts as that user, working through the environment to complete those goals. In preliminary experiments, the researchers created 1,000 such synthetic computers; each simulation ran for over 8 hours of agent time and spanned more than 2,000 turns on average, mimicking about a month of human work, and produced rich training data for the AI to learn from.
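To make the pipeline concrete, here is a minimal sketch of the two-stage idea: populate a fake filesystem with stub artifacts, then run a loop in which one agent proposes objectives and another "works" until they are done. All names here (`make_synthetic_computer`, `run_simulation`, the folder layout) are illustrative placeholders, not the paper's actual API, and real objective/user agents would be LLM-driven rather than the trivial stand-ins below.

```python
import os
import tempfile

def make_synthetic_computer(persona, root):
    """Populate a fake filesystem with folders and stub artifacts
    (a stand-in for the paper's generated directory hierarchies)."""
    layout = {
        "Documents/reports": ["q1_review.docx"],
        "Documents/spreadsheets": ["budget.xlsx"],
        "Projects/launch": ["plan.pptx"],
    }
    for folder, files in layout.items():
        path = os.path.join(root, folder)
        os.makedirs(path, exist_ok=True)
        for name in files:
            with open(os.path.join(path, name), "w") as f:
                f.write(f"[{persona}] stub content for {name}\n")
    return root

def run_simulation(persona, objectives, max_turns=2000):
    """One agent proposes objectives; another completes them turn by turn.
    Returns the trajectory of (turn, goal) pairs as the learning signal."""
    with tempfile.TemporaryDirectory() as root:
        make_synthetic_computer(persona, root)
        trajectory = []
        remaining = list(objectives)
        turn = 0
        while remaining and turn < max_turns:
            goal = remaining.pop(0)          # objective agent's next deliverable
            trajectory.append((turn, goal))  # user agent "completes" it here
            turn += 1
        return trajectory

traj = run_simulation("marketing analyst", ["draft report", "build budget"])
```

In the actual system, each "turn" would involve the user agent navigating the filesystem, coordinating with simulated collaborators, and producing real artifacts, which is why a single run can take over 8 hours.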

Why it matters?

This work is important because it offers a way to create massive amounts of training data for AI agents in realistic work settings. By scaling up the creation of these 'synthetic computers', we can train AI to handle a wider variety of jobs and situations, ultimately leading to more capable and helpful AI assistants. It’s a step towards AI that can truly understand and participate in complex, real-world productivity tasks.

Abstract

Realistic long-horizon productivity work is strongly conditioned on user-specific computer environments, where much of the work context is stored and organized through directory structures and content-rich artifacts. To scale synthetic data creation for such productivity scenarios, we introduce Synthetic Computers at Scale, a scalable methodology for creating such environments with realistic folder hierarchies and content-rich artifacts (e.g., documents, spreadsheets, and presentations). Conditioned on each synthetic computer, we run long-horizon simulations: one agent creates productivity objectives that are specific to the computer's user and require multiple professional deliverables and about a month of human work; another agent then acts as that user and keeps working across the computer -- for example, navigating the filesystem for grounding, coordinating with simulated collaborators, and producing professional artifacts -- until these objectives are completed. In preliminary experiments, we create 1,000 synthetic computers and run long-horizon simulations on them; each run requires over 8 hours of agent runtime and spans more than 2,000 turns on average. These simulations produce rich experiential learning signals, whose effectiveness is validated by significant improvements in agent performance on both in-domain and out-of-domain productivity evaluations. Given that personas are abundant at billion scale, this methodology can in principle scale to millions or even billions of synthetic user worlds with sufficient compute, enabling broader coverage of diverse professions, roles, contexts, environments, and productivity needs. We argue that scalable synthetic computer creation, together with at-scale simulations, is highly promising as a foundational substrate for agent self-improvement and agentic reinforcement learning in long-horizon productivity scenarios.