ShieldAgent: Shielding Agents via Verifiable Safety Policy Reasoning

Zhaorun Chen, Mintong Kang, Bo Li

2025-04-07

Summary

This paper introduces ShieldAgent, a safety guard for AI agents that uses logical reasoning and verification tools to check whether their actions follow the rules, like a digital police officer for robots.

What's the problem?

AI helpers can be tricked into doing harmful things (like sharing private info) because current safety checks don’t work well for their complex, real-world tasks.

What's the solution?

ShieldAgent turns written safety rulebooks into easy-to-check logic circuits, uses dedicated verification tools to check each action step by step, and is evaluated with a new safety benchmark (ShieldAgent-Bench).

Why does it matter?

It stops AI helpers from causing harm in apps like banking or healthcare, making them safer and faster at catching risks without slowing down their work.

Abstract

Autonomous agents powered by foundation models have seen widespread adoption across various real-world applications. However, they remain highly vulnerable to malicious instructions and attacks, which can result in severe consequences such as privacy breaches and financial losses. More critically, existing guardrails for LLMs are not applicable due to the complex and dynamic nature of agents. To tackle these challenges, we propose ShieldAgent, the first guardrail agent designed to enforce explicit safety policy compliance for the action trajectory of other protected agents through logical reasoning. Specifically, ShieldAgent first constructs a safety policy model by extracting verifiable rules from policy documents and structuring them into a set of action-based probabilistic rule circuits. Given the action trajectory of the protected agent, ShieldAgent retrieves relevant rule circuits and generates a shielding plan, leveraging its comprehensive tool library and executable code for formal verification. In addition, given the lack of guardrail benchmarks for agents, we introduce ShieldAgent-Bench, a dataset with 3K safety-related pairs of agent instructions and action trajectories, collected via SOTA attacks across 6 web environments and 7 risk categories. Experiments show that ShieldAgent achieves SOTA on ShieldAgent-Bench and three existing benchmarks, outperforming prior methods by 11.3% on average with a high recall of 90.1%. Additionally, ShieldAgent reduces API queries by 64.7% and inference time by 58.2%, demonstrating its high precision and efficiency in safeguarding agents.
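To make the abstract's pipeline concrete, here is a minimal sketch of the core idea of checking an agent's action against weighted, verifiable rules. All names (`Rule`, `shield`, the example rules, the 0.5 threshold) are hypothetical illustrations, not the paper's actual implementation, which uses probabilistic rule circuits and formal verification tools rather than simple Python predicates.

```python
# Hypothetical sketch of rule-based action shielding (not the paper's code).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    description: str
    weight: float                    # how strongly this rule counts toward risk
    check: Callable[[dict], bool]    # predicate: does the action satisfy the rule?

def shield(action: dict, rules: list[Rule], threshold: float = 0.5) -> bool:
    """Allow the action only if the weighted share of violated rules stays below threshold."""
    violated = [r for r in rules if not r.check(action)]
    total = sum(r.weight for r in rules)
    risk = sum(r.weight for r in violated)
    return (risk / total) < threshold if total else True

# Example policy rules for a banking web agent (illustrative only).
rules = [
    Rule("never include an account number in outgoing data", 1.0,
         lambda a: "account_number" not in a.get("payload", {})),
    Rule("only visit allow-listed domains", 0.8,
         lambda a: a.get("domain", "") in {"bank.example.com"}),
]

# A benign action passes; an action leaking data to an unknown domain is blocked.
print(shield({"domain": "bank.example.com", "payload": {}}, rules))                      # True
print(shield({"domain": "evil.com", "payload": {"account_number": "123"}}, rules))       # False
```

The real system goes further: it retrieves only the rule circuits relevant to the current action, composes them probabilistically, and verifies compliance with executable code, which is what enables the reported gains in precision and efficiency.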