Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks
Zhenhailong Wang, Haiyang Xu, Junyang Wang, Xi Zhang, Ming Yan, Ji Zhang, Fei Huang, Heng Ji
2025-01-22

Summary
This paper introduces a new AI system called Mobile-Agent-E, designed to help people use their smartphones for complex tasks more easily. It's like having a smart assistant that learns from experience and gets better at helping you over time.
What's the problem?
Even though smartphones are a big part of our lives, doing complicated tasks on them can be frustrating and take a lot of time. Current AI assistants for phones aren't great at handling real-world problems, especially ones that need a lot of thinking or many steps. They also can't learn from their past experiences to get better.
What's the solution?
The researchers created Mobile-Agent-E, which works like a team of AI helpers. A Manager agent makes the big-picture plans, while four specialist agents handle specific jobs: a Perceptor that reads the screen, an Operator that taps buttons, an Action Reflector that checks for mistakes, and a Notetaker that takes notes. What's really cool is that Mobile-Agent-E can remember useful tips and shortcuts from past tasks, just like how we learn from experience; a rough sketch of this loop appears below. They also built a new benchmark, called Mobile-Eval-E, to test how well it works on tricky tasks that span multiple apps.
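To make the division of labor concrete, here is a minimal, hypothetical sketch of how such a Manager-plus-specialists loop could be wired together in Python. All class and method names (Manager, Perceptor, Operator, ActionReflector, Notetaker, run_task, FakeDevice) are illustrative assumptions, not the paper's published code; in the real system each agent is backed by a large multimodal model rather than the trivial stubs shown here.

```python
# Hypothetical sketch of Mobile-Agent-E's division of labor.
# All names are illustrative assumptions; the paper does not publish this API.

class Manager:
    def plan(self, task, memory):
        # Break the complex task into ordered subgoals, consulting stored Tips.
        return [f"subgoal for: {task}"]  # stub: a single trivial subgoal

class Perceptor:
    def perceive(self, screenshot):
        # Extract fine-grained visual state (text, icons, layout) from the screen.
        return {"screen": screenshot}

class Operator:
    def act(self, subgoal, state, memory):
        # Choose the next atomic operation (tap, type, swipe), reusing a
        # stored Shortcut when one matches the current subroutine.
        return "tap(100, 200)"  # stub action

class ActionReflector:
    def verify(self, before, after, action):
        # Compare screen state before/after the action to detect errors.
        return before != after

class Notetaker:
    def record(self, state, notes):
        # Aggregate task-relevant information for later steps.
        notes.append(str(state))

class FakeDevice:
    # Stand-in for a real phone controller, just for this sketch.
    def __init__(self):
        self.frame = 0
    def screenshot(self):
        self.frame += 1
        return f"frame-{self.frame}"
    def execute(self, action):
        print("executing:", action)

def run_task(task, device, memory):
    manager, perceptor, operator = Manager(), Perceptor(), Operator()
    reflector, notetaker, notes = ActionReflector(), Notetaker(), []
    for subgoal in manager.plan(task, memory):
        before = perceptor.perceive(device.screenshot())
        action = operator.act(subgoal, before, memory)
        device.execute(action)
        after = perceptor.perceive(device.screenshot())
        if reflector.verify(before, after, action):  # action changed the screen
            notetaker.record(after, notes)
    return notes

memory = {"tips": [], "shortcuts": []}
print(run_task("find a highly rated ramen spot nearby", FakeDevice(), memory))
```

The key design point this sketch tries to capture is the explicit separation of roles: the Manager only plans, while perception, action, error checking, and note taking are delegated to dedicated agents that all read from the same shared memory of Tips and Shortcuts.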
Why does it matter?
This matters because it could make using smartphones much easier, especially for complicated tasks that usually give people headaches. Imagine having a phone assistant that gets smarter every time you use it, learning your habits and figuring out faster ways to do things. This could save people a lot of time and frustration. The researchers found that their system achieves a 22% absolute improvement over previous state-of-the-art assistants, which is a big jump. If this technology becomes widely available, it could change how we interact with our phones, making complex tasks as easy as asking a friend for help.
Abstract
Smartphones have become indispensable in modern life, yet navigating complex tasks on mobile devices often remains frustrating. Recent advancements in large multimodal model (LMM)-based mobile agents have demonstrated the ability to perceive and act in mobile environments. However, current approaches face significant limitations: they fall short in addressing real-world human needs, struggle with reasoning-intensive and long-horizon tasks, and lack mechanisms to learn and improve from prior experiences. To overcome these challenges, we introduce Mobile-Agent-E, a hierarchical multi-agent framework capable of self-evolution through past experience. By hierarchical, we mean an explicit separation of high-level planning and low-level action execution. The framework comprises a Manager, responsible for devising overall plans by breaking down complex tasks into subgoals, and four subordinate agents--Perceptor, Operator, Action Reflector, and Notetaker--which handle fine-grained visual perception, immediate action execution, error verification, and information aggregation, respectively. Mobile-Agent-E also features a novel self-evolution module which maintains a persistent long-term memory comprising Tips and Shortcuts. Tips are general guidance and lessons learned from prior tasks on how to effectively interact with the environment. Shortcuts are reusable, executable sequences of atomic operations tailored for specific subroutines. The inclusion of Tips and Shortcuts facilitates continuous refinement in performance and efficiency. Alongside this framework, we introduce Mobile-Eval-E, a new benchmark featuring complex mobile tasks requiring long-horizon, multi-app interactions. Empirical results show that Mobile-Agent-E achieves a 22% absolute improvement over previous state-of-the-art approaches across three foundation model backbones. Project page: https://x-plug.github.io/MobileAgent.
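To illustrate the self-evolution idea from the abstract, the sketch below models the persistent long-term memory of Tips and Shortcuts as two lists backed by a simple JSON store. This is an assumption for illustration only: the field names, the Shortcut/LongTermMemory classes, and the evolve() update logic are hypothetical, and in the actual framework the distillation of new Tips and Shortcuts after each task is performed by the self-evolution module rather than the stub shown here.

```python
# Illustrative sketch of a Tips/Shortcuts long-term memory with a JSON store.
# Field names and update logic are assumptions, not the paper's implementation.

import json
from dataclasses import dataclass, asdict, field

@dataclass
class Shortcut:
    name: str          # e.g. "search_in_app" (hypothetical example)
    precondition: str  # when this subroutine is applicable
    operations: list   # reusable, executable sequence of atomic operations

@dataclass
class LongTermMemory:
    tips: list = field(default_factory=list)       # general lessons, as text
    shortcuts: list = field(default_factory=list)  # reusable Shortcut entries

    def evolve(self, task_trace):
        # After each task, distill new Tips and Shortcuts from the trace.
        # In the framework this is done by the self-evolution module;
        # here it is reduced to a trivial stub.
        if task_trace.get("lesson"):
            self.tips.append(task_trace["lesson"])
        if task_trace.get("subroutine"):
            self.shortcuts.append(Shortcut(**task_trace["subroutine"]))

    def save(self, path):
        # Persist the memory so later tasks can start from prior experience.
        with open(path, "w") as f:
            json.dump({"tips": self.tips,
                       "shortcuts": [asdict(s) for s in self.shortcuts]}, f)

memory = LongTermMemory()
memory.evolve({
    "lesson": "Close pop-up dialogs before searching.",
    "subroutine": {"name": "search_in_app",
                   "precondition": "app is open on its home screen",
                   "operations": ["tap(search_bar)", "type(query)", "press(enter)"]},
})
memory.save("memory.json")
```

Under this reading, Tips bias the Manager's planning and the Operator's decisions, while Shortcuts let the Operator replace a whole verified subroutine with one stored macro, which is where the abstract's claimed gains in both performance and efficiency would come from.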