Graph2Eval: Automatic Multimodal Task Generation for Agents via Knowledge Graphs
Yurun Chen, Xavier Hu, Yuhan Liu, Ziqi Wang, Zeyi Liao, Lin Chen, Feng Wei, Yuxi Qian, Bo Zheng, Keting Yin, Shengyu Zhang
2025-10-07
Summary
This paper introduces a new way to test how well autonomous AI agents (systems that can reason and act on their own) perform complex tasks that involve understanding information from multiple sources and interacting with websites.
What's the problem?
Testing these AI agents is currently difficult because existing benchmarks use fixed datasets that don't reflect the constantly changing conditions of real environments. In addition, methods for creating test data with AI were designed for training and evaluating language models, not for agents that need to *do* things, like use tools or navigate websites. Existing automatic task generation focuses on simple text or image analysis, not the multi-step interactions that web-based tasks require.
What's the solution?
The researchers developed a system called Graph2Eval. It uses knowledge graphs—networks of connected facts—to automatically create challenging tasks. These tasks require an AI agent to understand documents, collaborate with other agents (or with itself), and interact with websites. The system filters the generated tasks through multiple stages to make sure they are solvable and sensible, and it can evaluate different types of agents. Using this system, the researchers also created Graph2Eval-Bench, a dataset of over 1,300 tasks.
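The core idea—walking relations in a knowledge graph along a meta-path and filling a question template from the result—can be sketched roughly as follows. This is a minimal illustration under assumed data structures, not the paper's actual implementation; the triples, relation names, and template are invented for the example.

```python
# Minimal sketch of knowledge-graph-based task generation.
# The graph, relations, and template below are hypothetical examples,
# not taken from Graph2Eval itself.

# A tiny knowledge graph as (subject, relation, object) triples.
triples = [
    ("Graph2Eval", "generates", "multimodal tasks"),
    ("multimodal tasks", "evaluated_by", "Web Agent"),
    ("Graph2Eval", "built_on", "knowledge graphs"),
]

def sample_meta_path(triples, start, relations):
    """Follow a fixed sequence of relations (a meta-path) from a start node."""
    path = [start]
    node = start
    for rel in relations:
        nxt = next((o for s, r, o in triples if s == node and r == rel), None)
        if nxt is None:
            return None  # meta-path cannot be instantiated from this node
        path.append(nxt)
        node = nxt
    return path

def to_task(path):
    """Fill a simple question template from a sampled path."""
    return f"What does {path[0]} generate, and which agent evaluates it?"

path = sample_meta_path(triples, "Graph2Eval", ["generates", "evaluated_by"])
if path is not None:
    print(to_task(path))
```

A real pipeline would sample many such subgraphs and meta-paths, use richer templates (including multimodal content), and pass the results through the quality filters described below.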
Why does it matter?
This work is important because it provides a more realistic and thorough way to evaluate AI agents. By identifying where agents struggle with reasoning, teamwork, and web interaction, it helps researchers improve these systems and build more capable and reliable AI.
Abstract
As multimodal LLM-driven agents continue to advance in autonomy and generalization, evaluation based on static datasets can no longer adequately assess their true capabilities in dynamic environments and diverse tasks. Existing LLM-based synthetic data methods are largely designed for LLM training and evaluation, and thus cannot be directly applied to agent tasks that require tool use and interactive capabilities. While recent studies have explored automatic agent task generation with LLMs, most efforts remain limited to text or image analysis, without systematically modeling multi-step interactions in web environments. To address these challenges, we propose Graph2Eval, a knowledge graph-based framework that automatically generates both multimodal document comprehension tasks and web interaction tasks, enabling comprehensive evaluation of agents' reasoning, collaboration, and interactive capabilities. In our approach, knowledge graphs constructed from multi-source external data serve as the task space, where we translate semantic relations into structured multimodal tasks using subgraph sampling, task templates, and meta-paths. A multi-stage filtering pipeline based on node reachability, LLM scoring, and similarity analysis is applied to guarantee the quality and executability of the generated tasks. Furthermore, Graph2Eval supports end-to-end evaluation of multiple agent types (Single-Agent, Multi-Agent, Web Agent) and measures reasoning, collaboration, and interaction capabilities. We instantiate the framework with Graph2Eval-Bench, a curated dataset of 1,319 tasks spanning document comprehension and web interaction scenarios. Experiments show that Graph2Eval efficiently generates tasks that differentiate agent and model performance, revealing gaps in reasoning, collaboration, and web interaction across different settings and offering a new perspective for agent evaluation.
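The abstract's multi-stage filter (node reachability, LLM scoring, and similarity analysis) could be approximated as the sketch below. This is a hypothetical stand-in: the quality scorer is a stub in place of an LLM judge, the similarity check is simple token-set Jaccard rather than whatever the paper actually uses, and all field names are invented.

```python
# Hypothetical sketch of a multi-stage task filter: reachability,
# quality scoring (stubbed in place of an LLM judge), and near-duplicate
# removal via token-set Jaccard similarity. Names are illustrative only.

def reachable(task):
    # Stage 1: keep only tasks whose answer could be resolved in the graph.
    return task.get("answer") is not None

def quality_score(task):
    # Stage 2: stand-in for an LLM-based quality score in [0, 1].
    return 1.0 if len(task["question"].split()) >= 5 else 0.0

def jaccard(a, b):
    # Token-set similarity between two questions.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def filter_tasks(tasks, score_min=0.5, sim_max=0.8):
    kept = []
    for t in tasks:
        if not reachable(t) or quality_score(t) < score_min:
            continue
        if any(jaccard(t["question"], k["question"]) > sim_max for k in kept):
            continue  # near-duplicate of an already kept task
        kept.append(t)
    return kept

tasks = [
    {"question": "Which agent evaluates the generated multimodal tasks?",
     "answer": "Web Agent"},
    {"question": "Which agent evaluates the generated multimodal tasks?",
     "answer": "Web Agent"},  # exact duplicate, dropped by similarity stage
    {"question": "Too short?", "answer": "x"},  # dropped by quality stage
]
print(len(filter_tasks(tasks)))  # prints 1
```

Ordering the cheap checks first (reachability before scoring, scoring before pairwise similarity) keeps the expensive LLM-judged stage from running on tasks that would be discarded anyway.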