OSWorld-MCP: Benchmarking MCP Tool Invocation In Computer-Use Agents

Hongrui Jia, Jitong Liao, Xi Zhang, Haiyang Xu, Tianbao Xie, Chaoya Jiang, Ming Yan, Si Liu, Wei Ye, Fei Huang

2025-10-29

Summary

This paper introduces a new way to test how well AI agents can use computers, going beyond just clicking buttons and focusing on their ability to use software tools like a human would.

What's the problem?

Currently, testing AI agents mostly looks at how well they can interact with a computer's graphical user interface (GUI) – things like clicking buttons and filling out forms. However, real computer use often involves using different software tools to get things done. Existing tests don't fairly evaluate an agent's ability to *use* these tools, making it hard to know if an agent is truly capable or just good at mimicking mouse clicks.

What's the solution?

The researchers created a testing environment called OSWorld-MCP. This environment includes a collection of 158 different software tools covering 7 common applications, along with tasks that require using both the GUI *and* these tools. They then tested several state-of-the-art AI agents on these tasks to see how well they could utilize the tools to complete them. They also built an automated pipeline that generates new tools for testing.
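To make the idea concrete, here is a minimal sketch of how a computer-use agent might mix GUI actions with MCP-style tool calls of the kind OSWorld-MCP evaluates. All names here (`ToolRegistry`, `Action`, the `rename_file` example) are hypothetical illustrations, not the benchmark's actual API.

```python
# Hypothetical sketch: an agent step can either emit a GUI primitive
# (click, type, ...) or invoke a registered MCP-style tool that replaces
# what would otherwise be many GUI steps.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class Action:
    kind: str                 # "gui" or "mcp_tool"
    name: str                 # GUI primitive or registered tool name
    args: Dict[str, Any] = field(default_factory=dict)


class ToolRegistry:
    """Holds MCP-style tools the agent may invoke instead of GUI steps."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def invoke(self, name: str, **kwargs: Any) -> Any:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)


def execute(action: Action, registry: ToolRegistry) -> Any:
    # A single tool call can stand in for a long sequence of GUI clicks.
    if action.kind == "mcp_tool":
        return registry.invoke(action.name, **action.args)
    return f"gui:{action.name}"  # placeholder for a real GUI backend


registry = ToolRegistry()
# Hypothetical tool: rename a file in one call instead of navigating
# a file manager through several clicks.
registry.register("rename_file", lambda src, dst: f"renamed {src} -> {dst}")

result = execute(
    Action("mcp_tool", "rename_file", {"src": "draft.txt", "dst": "final.txt"}),
    registry,
)
print(result)  # renamed draft.txt -> final.txt
```

The benchmark's central question is when an agent chooses the tool path over the GUI path, and whether it supplies the right arguments when it does.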

Why it matters?

This work is important because it provides a more realistic and challenging way to evaluate AI agents. The results show that giving agents access to tools can significantly improve their performance, but even the best agents still struggle to use them effectively. This highlights the need for further research into how to build AI agents that can seamlessly integrate and utilize software tools, bringing us closer to truly helpful AI assistants.

Abstract

With advances in decision-making and reasoning capabilities, multimodal agents show strong potential in computer application scenarios. Past evaluations have mainly assessed GUI interaction skills, while tool invocation abilities, such as those enabled by the Model Context Protocol (MCP), have been largely overlooked. Comparing agents with integrated tool invocation to those evaluated only on GUI interaction is inherently unfair. We present OSWorld-MCP, the first comprehensive and fair benchmark for assessing computer-use agents' tool invocation, GUI operation, and decision-making abilities in a real-world environment. We design a novel automated code-generation pipeline to create tools and combine them with a curated selection from existing tools. Rigorous manual validation yields 158 high-quality tools (covering 7 common applications), each verified for correct functionality, practical applicability, and versatility. Extensive evaluations of state-of-the-art multimodal agents on OSWorld-MCP show that MCP tools generally improve task success rates (e.g., from 8.3% to 20.4% for OpenAI o3 at 15 steps, from 40.1% to 43.3% for Claude 4 Sonnet at 50 steps), underscoring the importance of assessing tool invocation capabilities. However, even the strongest models have relatively low tool invocation rates (only 36.3%), indicating room for improvement and highlighting the benchmark's challenge. By explicitly measuring MCP tool usage skills, OSWorld-MCP deepens understanding of multimodal agents and sets a new standard for evaluating performance in complex, tool-assisted environments. Our code, environment, and data are publicly available at https://osworld-mcp.github.io.