
Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents

Boyu Gou, Ruohan Wang, Boyuan Zheng, Yanan Xie, Cheng Chang, Yiheng Shu, Huan Sun, Yu Su

2024-10-08


Summary

This paper introduces UGround, a visual grounding model that helps graphical user interface (GUI) agents understand and interact with digital environments using only visual information, instead of relying on text-based representations of the screen.

What's the problem?

Current GUI agents mostly rely on text-based representations of the screen, such as HTML or accessibility trees, to understand the elements they can interact with. While useful, these representations often introduce noise, leave out information, and add computational overhead. This makes it harder for agents to navigate accurately and complete tasks in real-world applications, where interfaces are complex and dynamic.

What's the solution?

To address these issues, the authors propose UGround, a model that lets GUI agents perceive their environment purely through screenshots, much as humans do. UGround uses visual grounding: it maps natural-language descriptions (referring expressions) of GUI elements to their pixel coordinates on the screen. To train it, the researchers assembled the largest GUI visual grounding dataset to date, containing 10 million GUI elements and their referring expressions across 1.3 million screenshots, and made slight adaptations to the LLaVA architecture. In evaluations, UGround outperforms existing visual grounding models by up to 20% absolute, and agents built on it surpass state-of-the-art agents even though it relies on visual perception alone, with no additional text-based input.
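To make the idea concrete, here is a minimal sketch of what a visual grounding interface in the spirit of UGround could look like: given a screenshot and a natural-language description of an element, the grounder returns pixel coordinates that an agent can act on. The `VisualGrounder` protocol and `grounded_click` helper are illustrative assumptions, not the authors' actual code or API.

```python
# Minimal sketch of visual grounding for GUI agents (illustrative; not the paper's code).
from dataclasses import dataclass
from typing import Protocol, Tuple

from PIL import Image


@dataclass
class ClickAction:
    x: int  # pixel column on the screenshot
    y: int  # pixel row on the screenshot


class VisualGrounder(Protocol):
    """Assumed interface: map a referring expression to on-screen pixel coordinates."""

    def ground(self, screenshot: Image.Image, referring_expression: str) -> Tuple[int, int]:
        ...


def grounded_click(grounder: VisualGrounder, screenshot: Image.Image, description: str) -> ClickAction:
    # A planner model decides *what* to interact with (e.g. "the blue 'Sign in' button");
    # the grounding model decides *where* that element is, in pixels.
    x, y = grounder.ground(screenshot, description)
    return ClickAction(x=x, y=y)
```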

Why it matters?

This research is important because it demonstrates a more efficient way for GUI agents to interact with digital environments by using visual perception. By improving how these agents understand and navigate interfaces, UGround could lead to better automation tools for various applications, making technology more accessible and user-friendly for everyone.

Abstract

Multimodal large language models (MLLMs) are transforming the capabilities of graphical user interface (GUI) agents, facilitating their transition from controlled simulations to complex, real-world applications across various platforms. However, the effectiveness of these agents hinges on the robustness of their grounding capability. Current GUI agents predominantly utilize text-based representations such as HTML or accessibility trees, which, despite their utility, often introduce noise, incompleteness, and increased computational overhead. In this paper, we advocate a human-like embodiment for GUI agents that perceive the environment entirely visually and directly take pixel-level operations on the GUI. The key is visual grounding models that can accurately map diverse referring expressions of GUI elements to their coordinates on the GUI across different platforms. We show that a simple recipe, which includes web-based synthetic data and slight adaptation of the LLaVA architecture, is surprisingly effective for training such visual grounding models. We collect the largest dataset for GUI visual grounding so far, containing 10M GUI elements and their referring expressions over 1.3M screenshots, and use it to train UGround, a strong universal visual grounding model for GUI agents. Empirical results on six benchmarks spanning three categories (grounding, offline agent, and online agent) show that 1) UGround substantially outperforms existing visual grounding models for GUI agents, by up to 20% absolute, and 2) agents with UGround outperform state-of-the-art agents, despite the fact that existing agents use additional text-based input while ours only uses visual perception. These results provide strong support for the feasibility and promises of GUI agents that navigate the digital world as humans do.
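To illustrate the pixel-only embodiment the abstract argues for, the sketch below shows a simple agent loop that observes the screen purely through screenshots and acts purely through mouse and keyboard events. The `planner` and `grounder` objects are assumed interfaces (the paper separates planning from grounding, but the exact API here is invented), and the pyautogui-based executor is just one possible way to issue pixel-level actions, not the authors' implementation.

```python
# Illustrative vision-only GUI agent loop (a sketch, not the paper's implementation).
# Assumes: planner.next_step(task, screenshot) returns a dict such as
#   {"action": "click", "target": "the search box"} or {"action": "stop"},
# and grounder.ground(screenshot, text) returns (x, y) pixel coordinates,
# as in the VisualGrounder sketch above. Both interfaces are assumptions.
import pyautogui  # screenshots plus mouse/keyboard control


def run_episode(planner, grounder, task: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()                 # purely visual observation
        step = planner.next_step(task, screenshot)
        if step["action"] == "stop":
            break
        x, y = grounder.ground(screenshot, step["target"])  # referring expression -> pixels
        if step["action"] == "click":
            pyautogui.click(x, y)                           # pixel-level action; no HTML or a11y tree
        elif step["action"] == "type":
            pyautogui.click(x, y)
            pyautogui.write(step["text"])                   # type into the focused element
```

The point of the sketch is that the only observation the agent ever consumes is the raw screenshot, and the only actions it takes are pixel-level operations, which is exactly the human-like embodiment the paper advocates.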