Large Language Model-Brained GUI Agents: A Survey
Chaoyun Zhang, Shilin He, Jiaxu Qian, Bowen Li, Liqun Li, Si Qin, Yu Kang, Minghua Ma, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
2024-11-28

Summary
This survey discusses Large Language Model (LLM)-brained GUI agents: intelligent agents that use large language models to operate graphical user interfaces on a user's behalf, helping people interact with software more easily and efficiently.
What's the problem?
Completing or automating tasks in Graphical User Interfaces (GUIs) has traditionally required either many manual steps or hand-written scripts and rule-based tools that users must learn. This makes it difficult for many people to carry out complex, multi-step tasks quickly or efficiently, especially in feature-rich applications.
What's the solution?
The authors explore how LLMs can be used to build GUI agents that understand natural language. Such an agent interprets a user's request, perceives the current state of the interface, and then carries out the corresponding actions in the software without requiring scripts or detailed instructions. The paper surveys the history of these agents, their key components, and the techniques used to develop them. It also addresses how to train these agents effectively and how to evaluate their performance.
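To make the perception-reasoning-action cycle described above concrete, here is a minimal sketch of such an agent loop. It is illustrative only: the function names, the JSON action schema, and the stubbed GUI state and LLM responses are assumptions for this example, not an API defined by the survey; a real agent would feed screenshots or accessibility trees to a multimodal LLM and dispatch actions through OS automation interfaces.

```python
import json

def capture_gui_state() -> str:
    """Perception stub: a real agent would return a screenshot or an
    accessibility/DOM tree describing the visible GUI elements."""
    return json.dumps({"window": "Settings",
                       "elements": ["Wi-Fi toggle", "Bluetooth toggle"]})

def query_llm(instruction: str, gui_state: str) -> str:
    """Planner stub: a real agent would prompt a (multimodal) LLM with the
    user instruction, the current GUI state, and the action history."""
    return json.dumps({"action": "click", "target": "Wi-Fi toggle"})

def execute_action(action: dict) -> None:
    """Actuator stub: a real agent would send mouse/keyboard or API events."""
    print(f"Executing {action['action']} on {action['target']}")

def run_agent(instruction: str, max_steps: int = 5) -> None:
    """Observe the GUI, ask the LLM for the next action, execute it, repeat."""
    for _ in range(max_steps):
        state = capture_gui_state()
        action = json.loads(query_llm(instruction, state))
        if action.get("action") == "done":
            break
        execute_action(action)

run_agent("Turn on Wi-Fi")
```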
Why it matters?
This research is important because it represents a significant advancement in how we interact with technology. By enabling users to control software through simple natural-language commands, LLM-brained GUI agents can make technology more accessible and user-friendly. This could lead to more efficient workflows across platforms, from web browsers to mobile and desktop applications.
Abstract
GUIs have long been central to human-computer interaction, providing an intuitive and visually driven way to access and interact with digital systems. The advent of LLMs, particularly multimodal models, has ushered in a new era of GUI automation. They have demonstrated exceptional capabilities in natural language understanding, code generation, and visual processing. This has paved the way for a new generation of LLM-brained GUI agents capable of interpreting complex GUI elements and autonomously executing actions based on natural language instructions. These agents represent a paradigm shift, enabling users to perform intricate, multi-step tasks through simple conversational commands. Their applications span web navigation, mobile app interactions, and desktop automation, offering a transformative user experience that revolutionizes how individuals interact with software. This emerging field is rapidly advancing, with significant progress in both research and industry. To provide a structured understanding of this trend, this paper presents a comprehensive survey of LLM-brained GUI agents, exploring their historical evolution, core components, and advanced techniques. We address research questions such as the design of existing GUI agent frameworks, the collection and utilization of data for training specialized GUI agents, the development of large action models tailored for GUI tasks, and the evaluation metrics and benchmarks necessary to assess their effectiveness. Additionally, we examine emerging applications powered by these agents. Through a detailed analysis, this survey identifies key research gaps and outlines a roadmap for future advancements in the field. By consolidating foundational knowledge and state-of-the-art developments, this work aims to guide both researchers and practitioners in overcoming challenges and unlocking the full potential of LLM-brained GUI agents.