GUI-360: A Comprehensive Dataset and Benchmark for Computer-Using Agents
Jian Mu, Chaoyun Zhang, Chiming Ni, Lu Wang, Bo Qiao, Kartik Mathur, Qianhui Wu, Yuhang Xie, Xiaojun Ma, Mengyu Zhou, Si Qin, Liqun Li, Yu Kang, Minghua Ma, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang
2025-11-07
Summary
This paper introduces GUI-360, a large collection of data and tests designed to help build computer agents that can effectively use software like humans do, specifically within Windows office applications.
What's the problem?
Creating computer agents that can reliably use graphical user interfaces (GUIs) is hard for three reasons: there aren't enough real-world tasks for them to learn from; it's difficult to automatically record and label how these agents interact with screens; and there hasn't been a single standard benchmark that tests all the skills these agents need, namely understanding what's on the screen, figuring out what to do, and then actually doing it.
What's the solution?
The researchers built a system that uses a large language model (like the one powering chatbots) to automatically create tasks, set up the software environment, run the tasks, and then check the quality of the results. This process generated over 1.2 million steps of interaction with programs like Word and Excel, including screenshots and information about what the agent was trying to achieve. They also created a set of tests to evaluate how well agents can understand the screen, parse information, and predict the correct actions.
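To make the flow of that pipeline concrete, here is a minimal, self-contained sketch of the stages it describes (query sourcing, task instantiation, execution, and LLM-based quality filtering). Every function, field, and example string below is a hypothetical placeholder used only to illustrate the flow; it is not the authors' implementation.

```python
# Illustrative sketch of the collection pipeline described above.
# All names here are hypothetical placeholders, not the authors' code.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    goal: str                                   # instantiated natural-language goal
    steps: list = field(default_factory=list)   # (screenshot, action) pairs
    passed_filter: bool = False                 # verdict from the LLM quality check

def source_queries():
    # In the paper, queries come from real-world sources; here, a fixed list.
    return ["Insert a 3x3 table", "Bold the first paragraph"]

def instantiate_task(query, app):
    # An LLM would turn the query into a concrete, executable goal for `app`.
    return f"In {app}: {query}"

def execute(goal):
    # A CUA agent would run here, producing screenshots, actions, and reasoning.
    return [("screenshot_0.png", "click(ribbon)"), ("screenshot_1.png", "type(text)")]

def llm_quality_filter(goal, steps):
    # An LLM judge would decide whether the trajectory achieved the goal.
    return len(steps) > 0

def build_corpus(app="Word"):
    corpus = []
    for query in source_queries():
        goal = instantiate_task(query, app)
        steps = execute(goal)
        corpus.append(Trajectory(goal=goal, steps=steps,
                                 passed_filter=llm_quality_filter(goal, steps)))
    return corpus  # both successful and failed trajectories are kept

if __name__ == "__main__":
    for traj in build_corpus():
        print(traj.goal, "->", len(traj.steps), "steps, kept:", traj.passed_filter)
```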
Why it matters?
This work is important because it provides a much-needed resource for researchers working on computer agents. It shows that current AI models still struggle with these tasks and highlights areas where improvement is needed. By making the dataset publicly available, the researchers hope to speed up the development of more reliable and helpful desktop computer agents.
Abstract
We introduce GUI-360°, a large-scale, comprehensive dataset and benchmark suite designed to advance computer-using agents (CUAs). CUAs present unique challenges, and their development is constrained by three persistent gaps: a scarcity of real-world CUA tasks, the lack of automated collection-and-annotation pipelines for multi-modal trajectories, and the absence of a unified benchmark that jointly evaluates GUI grounding, screen parsing, and action prediction. GUI-360° addresses these gaps with an LLM-augmented, largely automated pipeline for query sourcing, environment-template construction, task instantiation, batched execution, and LLM-driven quality filtering. The released corpus contains over 1.2M executed action steps across thousands of trajectories in popular Windows office applications, and includes full-resolution screenshots, accessibility metadata when available, instantiated goals, intermediate reasoning traces, and both successful and failed action trajectories. The dataset supports three canonical tasks (GUI grounding, screen parsing, and action prediction) and a hybrid GUI+API action space that reflects modern agent designs. Benchmarking state-of-the-art vision-language models on GUI-360° reveals substantial out-of-the-box shortcomings in grounding and action prediction; supervised fine-tuning and reinforcement learning yield significant gains but do not close the gap to human-level reliability. We release GUI-360° and accompanying code to facilitate reproducible research and accelerate progress on robust desktop CUAs. The full dataset is publicly available at https://huggingface.co/datasets/vyokky/GUI-360.
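Since the dataset is released on the Hugging Face Hub, it can presumably be loaded with the standard `datasets` library. The snippet below is a minimal sketch using the repository id from the paper; the split names and record fields are not specified here, so the code only inspects whatever the dataset exposes, and a configuration name may need to be passed if the dataset defines several.

```python
# Minimal sketch of loading GUI-360 from the Hugging Face Hub.
# The repository id comes from the paper; splits and fields should be
# checked against the dataset card before relying on them.
from datasets import load_dataset

ds = load_dataset("vyokky/GUI-360")     # downloads all available splits

print(ds)                               # show split names and sizes
first_split = next(iter(ds.values()))   # pick whichever split comes first
print(first_split[0].keys())            # inspect the fields of one record
```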