DynaGuard: A Dynamic Guardrail Model With User-Defined Policies

Monte Hoover, Vatsal Baherwani, Neel Jain, Khalid Saifullah, Joseph Vincent, Chirag Jain, Melissa Kazemi Rad, C. Bayan Bruss, Ashwinee Panda, Tom Goldstein

2025-09-03

Summary

This paper introduces a new type of 'guardian' model for chatbots: a system that supervises a chatbot's outputs to make sure it doesn't say harmful or inappropriate things.

What's the problem?

Current guardian models, like LlamaGuard, are pretty rigid: they can only detect a fixed list of bad behaviors that someone has already programmed them to look for. This is a problem because different applications, like a chatbot for a specific company or a game, often have their own unique rules about what's acceptable, and these standard models can't adapt to those needs. They simply aren't flexible enough to handle new or custom policies.

What's the solution?

The researchers created 'dynamic guardian models' that can be given a set of rules, a policy, and then decide whether a chatbot's response breaks those rules. These models can run quickly to flag obvious violations, or they can 'think through' their reasoning with chain-of-thought and explain *why* a response is problematic. Importantly, they match standard guardian models at detecting the usual predefined harm categories, and on custom, free-form policies they reach accuracy comparable to frontier reasoning models in a fraction of the time. A rough sketch of how such a model might be used is shown below.
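To make the idea concrete, here is a minimal sketch of how a policy-conditioned check like this might be invoked with the Hugging Face transformers library. The model ID, policy text, and prompt format are illustrative assumptions, not the paper's exact interface or released checkpoint.

```python
# Sketch: asking a dynamic guardian model whether a dialogue violates a user-defined policy.
# The model ID below is a placeholder; substitute the actual released checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/dynamic-guardian-checkpoint"  # placeholder, not the official ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A custom, application-specific policy -- the kind of rule a static harm taxonomy wouldn't cover.
policy = (
    "1. The assistant must not offer refunds over $100.\n"
    "2. The assistant must not reveal internal ticket IDs."
)

dialogue = (
    "User: My order arrived broken, what can you do?\n"
    "Assistant: Sorry about that! I've issued a $250 refund under ticket INT-4821."
)

# The guardian sees the policy plus the dialogue and returns a verdict,
# optionally with a chain-of-thought explanation of which rule was broken.
messages = [
    {"role": "system", "content": f"You are a guardrail model. Enforce this policy:\n{policy}"},
    {"role": "user", "content": (
        f"Dialogue:\n{dialogue}\n\n"
        "Does the assistant's reply violate the policy? Answer PASS or FAIL, then explain."
    )},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In this hypothetical usage, swapping in a different `policy` string is all it takes to adapt the guardrail to a new application, which is the core flexibility the paper is after.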

Why it matters?

This is important because it allows for much more control over chatbots. Instead of being limited to pre-defined safety measures, developers can tailor the guardian model to the specific context of their application, ensuring the chatbot behaves appropriately and follows the rules they set. This makes chatbots safer and more useful in a wider range of situations.

Abstract

Guardian models are used to supervise and moderate the outputs of user-facing chatbots, enforcing guardrails and detecting bad behaviors. Standard guardian models like LlamaGuard detect predefined, static categories of harms. We propose dynamic guardian models that evaluate text based on user-defined policies, making them useful for different application domains that are not addressed by standard guardian models. Our dynamic guardian models can be used for fast detection of policy violations or with chain-of-thought reasoning that articulates and justifies the model outputs. Our dynamic guardian models match static models in detection accuracy for static harm categories while identifying violations of free-form policies with accuracy comparable to frontier reasoning models in a fraction of the time.