Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents

Bowen Ye, Rang Li, Qibin Yang, Yuanxin Liu, Linli Yao, Hanglong Lv, Zhihui Xie, Chenxin An, Lei Li, Lingpeng Kong, Qi Liu, Zhifang Sui, Tong Yang

2026-04-08

Summary

This paper introduces a new way to test how well artificial intelligence agents, specifically those powered by large language models, can perform complex tasks in real-world situations.

What's the problem?

Currently, testing these AI agents has some big flaws. Most tests only look at the final result, ignoring *how* the agent got there. They also don't thoroughly check if the agent is safe and reliable, and they usually focus on only one type of input, like text, instead of things like images or videos. This makes it hard to know if an agent is truly capable or just got lucky, and if it will behave predictably and safely when used in the real world.

What's the solution?

The researchers created a comprehensive evaluation suite called Claw-Eval. It includes 300 different tasks that cover a wide range of abilities, like managing software, understanding images and videos, and having conversations. Importantly, Claw-Eval doesn't just check the final answer; it records every step the agent takes, allowing for a much more detailed assessment of its performance, safety, and consistency. They tested 14 different AI models using this new system.

Why it matters?

This work is important because it provides a more reliable and thorough way to evaluate AI agents. The tests revealed that existing methods often miss safety issues and inconsistencies. By highlighting these weaknesses, Claw-Eval can guide developers in building AI agents that are not only powerful but also dependable and safe for real-world use. It shows where current models struggle, particularly with video understanding, and points the way towards improvements.

Abstract

Large language models are increasingly deployed as autonomous agents executing multi-step workflows in real-world software environments. However, existing agent benchmarks suffer from three critical limitations: (1) trajectory-opaque grading that checks only final outputs, (2) underspecified safety and robustness evaluation, and (3) narrow modality coverage and interaction paradigms. We introduce Claw-Eval, an end-to-end evaluation suite addressing all three gaps. It comprises 300 human-verified tasks spanning 9 categories across three groups (general service orchestration, multimodal perception and generation, and multi-turn professional dialogue). Every agent action is recorded through three independent evidence channels (execution traces, audit logs, and environment snapshots), enabling trajectory-aware grading over 2,159 fine-grained rubric items. The scoring protocol evaluates Completion, Safety, and Robustness, reporting Average Score, Pass@k, and Pass^k across three trials to distinguish genuine capability from lucky outcomes. Experiments on 14 frontier models reveal that: (1) trajectory-opaque evaluation is systematically unreliable, missing 44% of safety violations and 13% of robustness failures that our hybrid pipeline catches; (2) controlled error injection primarily degrades consistency rather than peak capability, with Pass^3 dropping up to 24% while Pass@3 remains stable; (3) multimodal performance varies sharply, with most models performing worse on video than on documents or images, and no single model dominating across all modalities. Beyond benchmarking, Claw-Eval highlights actionable directions for agent development, shedding light on what it takes to build agents that are not only capable but reliably deployable.
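To make the Pass@k versus Pass^k distinction concrete, here is a minimal sketch (not the paper's actual code; the task results are hypothetical) of how the two metrics diverge when each task is run for k=3 independent trials:

```python
# Illustrative sketch: aggregating k=3 trials per task.
# Pass@k counts a task as passed if ANY trial succeeds (peak capability).
# Pass^k counts a task as passed only if ALL trials succeed (consistency).

def pass_at_k(trials):
    """True if at least one trial passed."""
    return any(trials)

def pass_pow_k(trials):
    """True if every trial passed."""
    return all(trials)

# Hypothetical outcomes for 4 tasks, 3 trials each (True = pass).
results = [
    [True, True, True],     # consistent success
    [True, False, True],    # flaky
    [False, False, True],   # lucky once
    [False, False, False],  # consistent failure
]

pass_at_3 = sum(pass_at_k(t) for t in results) / len(results)
pass_pow_3 = sum(pass_pow_k(t) for t in results) / len(results)

print(f"Pass@3 = {pass_at_3:.2f}")   # 0.75: three tasks succeed at least once
print(f"Pass^3 = {pass_pow_3:.2f}")  # 0.25: only one task succeeds every time
```

The gap between the two numbers is exactly what the paper's error-injection finding exploits: a model can keep a high Pass@3 (it can still solve the task sometimes) while its Pass^3 collapses (it no longer solves it reliably).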