DaMo: Data Mixing Optimizer in Fine-tuning Multimodal LLMs for Mobile Phone Agents
Kai Shi, Jun Yang, Ni Yang, Binqiang Pan, Qingsong Xie, Chao Zhang, Zhenyu Yang, Tianhuang Su, Haonan Lu
2025-10-23
Summary
This paper introduces DaMo, a new method for improving how well AI models designed to act as mobile phone assistants handle multiple tasks. It also presents PhoneAgentBench, a new benchmark built specifically for evaluating these kinds of AI assistants.
What's the problem?
Current mobile phone assistants, built on multimodal large language models, aren't great at handling many different phone tasks at once. A common way to train these models is to fine-tune them on examples drawn from many tasks, but figuring out the *best* mix of examples to use for training is really hard. Simply throwing everything at the model doesn't guarantee the best results, and finding the right balance is a challenge.
What's the solution?
The researchers created DaMo, a trainable network that *predicts* the best combination of training data. Instead of exhaustively trying different mixes, DaMo learns from small-scale pilot experiments to forecast how well the main AI model will perform on downstream tasks for any given dataset ratio, then uses those forecasts to pick the optimal mixture. They tested DaMo using a new benchmark called PhoneAgentBench, which includes over a thousand questions and tasks drawn from real-world phone use. DaMo consistently outperformed other methods in predicting the optimal data mix and improving overall performance.
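The idea of "predict performance from the mixture, then search for the best mixture" can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: it assumes three task datasets, made-up pilot results, and a simple quadratic ridge regressor standing in for DaMo's trainable network.

```python
import numpy as np
from itertools import product

# Hypothetical pilot data: each row is a mixture ratio over 3 task datasets
# (rows sum to 1), paired with the benchmark score observed after fine-tuning
# on that mixture. All numbers are illustrative, not from the paper.
pilot_mixtures = np.array([
    [0.60, 0.20, 0.20],
    [0.20, 0.60, 0.20],
    [0.20, 0.20, 0.60],
    [0.40, 0.40, 0.20],
    [0.33, 0.33, 0.34],
])
pilot_scores = np.array([0.61, 0.58, 0.55, 0.66, 0.64])

def features(x):
    # Quadratic feature map: bias, linear terms, and pairwise products,
    # so the predictor can model interactions between datasets.
    x = np.atleast_2d(x)
    quad = np.einsum("ni,nj->nij", x, x).reshape(len(x), -1)
    return np.hstack([np.ones((len(x), 1)), x, quad])

# Fit a ridge regressor as the performance predictor (a crude stand-in
# for DaMo's trainable network).
X = features(pilot_mixtures)
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ pilot_scores)

def predict(mix):
    return float(features(mix) @ w)

# Grid-search the probability simplex for the mixture with the highest
# predicted downstream score.
step = 0.05
best_mix, best_score = None, -np.inf
for a, b in product(np.arange(0.0, 1.0 + 1e-9, step), repeat=2):
    c = 1.0 - a - b
    if c < -1e-9:
        continue  # outside the simplex
    mix = np.array([a, b, max(c, 0.0)])
    s = predict(mix)
    if s > best_score:
        best_mix, best_score = mix, s

print("predicted-best mixture:", best_mix, "score:", round(best_score, 3))
```

In the actual paper the predictor is fitted on small-scale pilot fine-tuning runs and then extrapolated to unseen ratios; the only new fine-tuning run needed afterward is the one at the predicted-optimal mixture.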
Why it matters?
This work is important because it makes mobile phone AI assistants more capable and efficient. By automatically finding the best way to train these models, DaMo can lead to assistants that are better at handling a wider range of tasks, like answering questions from screenshots, controlling apps, and generally making your phone experience smoother. The new PhoneAgentBench also provides a standardized way to measure progress in this field, allowing researchers to compare different approaches more effectively.
Abstract
Mobile Phone Agents (MPAs) have emerged as a promising research direction due to their broad applicability across diverse scenarios. While Multimodal Large Language Models (MLLMs) serve as the foundation for MPAs, their effectiveness in handling multiple mobile phone tasks simultaneously remains limited. Although multitask supervised fine-tuning (SFT) is widely adopted for multitask learning, existing approaches struggle to determine optimal training data compositions for peak performance. To address this challenge, we propose DaMo (Data Mixture Optimizer) - a novel solution employing a trainable network that predicts optimal data mixtures by forecasting downstream task performance for any given dataset ratio. To support comprehensive evaluation, we introduce PhoneAgentBench, the first specialized benchmark to evaluate MLLMs on multimodal mobile phone tasks, comprising 1235 QA pairs spanning diverse real-world industrial mobile application scenarios. Demonstrating strong predictive capability (R^2=0.81) in small-scale pilot experiments, DaMo efficiently extrapolates optimal data mixing configurations. Our results show DaMo achieves a 3.38% performance improvement on PhoneAgentBench compared to alternative methods. Furthermore, extensive experiments across established benchmarks including BFCL-v3, MME-Reasoning, MME-Perception, and OCRBench reveal DaMo's superior generalization, outperforming other approaches by 2.57% in terms of average score. When used solely for MLLM optimization on the BFCL-v3 task, DaMo improves the metrics by 12.47% compared to other methods. Notably, DaMo maintains robust scalability, preserving its effectiveness when applied to other model architectures. The code and dataset are available at https://github.com/OPPO-Mente-Lab/DaMo.git.