Watch and Learn: Learning to Use Computers from Online Videos

Chan Hee Song, Yiwen Song, Palash Goyal, Yu Su, Oriana Riva, Hamid Palangi, Tomas Pfister

2025-10-07

Summary

This paper introduces a new way to teach computer programs, called Computer Use Agents (CUAs), how to perform tasks on computers by learning from videos of people using software.

What's the problem?

Teaching CUAs is hard because it requires large amounts of example data showing them how to perform tasks. Creating this data by hand is expensive and time-consuming, and existing datasets cover only specific programs and don't change over time. Automatically generating synthetic example data, meanwhile, often produces unrealistic or incorrect demonstrations.

What's the solution?

The researchers developed a system called Watch & Learn that automatically turns readily available online videos of people using computers into step-by-step instructions a CUA can follow. Instead of trying to generate these instructions directly, the system infers what action a person likely took just by looking at how the screen changed from one moment to the next (the paper calls this an inverse dynamics objective). It's like working out what someone did by observing the result of their actions, and it turns out to be an easier and more reliable learning signal.
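To make the inverse dynamics idea concrete, here is a minimal toy sketch: given two consecutive screen states, infer the action that must have happened between them. The state representation (dicts of UI element properties) and the tiny action vocabulary are illustrative assumptions; the paper's actual pipeline works on raw video frames with a learned labeling model.

```python
# Toy illustration of an inverse dynamics objective: rather than generating
# an action directly, infer it from the difference between two consecutive
# screen states. States here are simplified dicts standing in for screenshots.

def infer_action(state_before, state_after):
    """Infer the user action that transformed state_before into state_after."""
    for elem, props in state_after.items():
        before = state_before.get(elem, {})
        # A newly focused element suggests the user clicked it.
        if props.get("focused") and not before.get("focused"):
            return {"action": "click", "target": elem}
        # Changed text content suggests the user typed the new suffix.
        if props.get("text", "") != before.get("text", ""):
            typed = props["text"][len(before.get("text", "")):]
            return {"action": "type", "target": elem, "text": typed}
    return {"action": "noop"}

# Label a pair of consecutive "frames" from a demonstration video:
before = {"search_box": {"focused": False, "text": ""}}
after = {"search_box": {"focused": True, "text": ""}}
print(infer_action(before, after))  # -> {'action': 'click', 'target': 'search_box'}
```

Labeling each adjacent frame pair this way turns an unlabeled video into a sequence of (state, action) steps, i.e. an executable trajectory.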

Why it matters?

This work is important because it unlocks a huge source of training data for CUAs – the vast number of videos already on the internet. By learning from these videos, CUAs can become much better at performing tasks in a variety of real-world applications, bringing us closer to having helpful computer assistants that can actually understand and interact with our software.

Abstract

Computer use agents (CUAs) need to plan task workflows grounded in diverse, ever-changing applications and environments, but learning is hindered by the scarcity of large-scale, high-quality training data in the target application. Existing datasets are domain-specific, static, and costly to annotate, while current synthetic data generation methods often yield simplistic or misaligned task demonstrations. To address these limitations, we introduce Watch & Learn (W&L), a framework that converts human demonstration videos readily available on the Internet into executable UI trajectories at scale. Instead of directly generating trajectories or relying on ad hoc reasoning heuristics, we cast the problem as an inverse dynamics objective: predicting the user's action from consecutive screen states. This formulation reduces manual engineering, is easier to learn, and generalizes more robustly across applications. Concretely, we develop an inverse dynamics labeling pipeline with task-aware video retrieval, generate over 53k high-quality trajectories from raw web videos, and demonstrate that these trajectories improve CUAs both as in-context demonstrations and as supervised training data. On the challenging OSWorld benchmark, UI trajectories extracted with W&L consistently enhance both general-purpose and state-of-the-art frameworks in-context, and deliver stronger gains for open-source models under supervised training. These results highlight web-scale human demonstration videos as a practical and scalable foundation for advancing CUAs towards real-world deployment.
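The abstract notes that the extracted trajectories help in two ways: as in-context demonstrations and as supervised training data. A minimal sketch of the first use is rendering a labeled trajectory as a few-shot demonstration block for an agent's prompt; the record format below is an assumption for illustration, not the paper's actual schema.

```python
# Sketch: turn one extracted UI trajectory into an in-context demonstration.
# The same records could instead serve as supervised fine-tuning examples.

def trajectory_to_demo(task, steps):
    """Render a labeled trajectory as a few-shot demonstration block."""
    lines = [f"Task: {task}"]
    for i, step in enumerate(steps, 1):
        lines.append(f"  Step {i}: {step['action']}({step.get('target', '')})")
    return "\n".join(lines)

trajectory = {
    "task": "Search for a file in the file manager",
    "steps": [
        {"action": "click", "target": "search_box"},
        {"action": "type", "target": "search_box", "text": "report.pdf"},
    ],
}
demo = trajectory_to_demo(trajectory["task"], trajectory["steps"])
print(demo)
```

With task-aware retrieval, an agent would prepend a handful of such blocks, chosen for similarity to the current task, before attempting it.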