
GuardReasoner: Towards Reasoning-based LLM Safeguards

Yue Liu, Hongcheng Gao, Shengfang Zhai, Jun Xia, Tianyi Wu, Zhiwei Xue, Yulin Chen, Kenji Kawaguchi, Jiaheng Zhang, Bryan Hooi

2025-01-31


Summary

This paper introduces GuardReasoner, a new way to make AI language models safer by teaching the safety model that checks their inputs and outputs to reason step by step before deciding what is harmful. It's like giving AI a smart conscience that can think through its decisions.

What's the problem?

As AI language models are being used for more important tasks, it's crucial to make sure they don't say or do harmful things. Current safety measures, called guardrails, aren't always good enough. They can be too strict, blocking harmless content, or not strict enough, letting dangerous stuff slip through. It's like having a security guard who sometimes stops the wrong people or lets troublemakers pass.

What's the solution?

The researchers created GuardReasoner to solve this problem. First, they built a large collection of 127,000 examples annotated with 460,000 detailed reasoning steps (the GuardReasonerTrain dataset) to teach the guard model how to think through safety decisions. Then, they applied two training stages, reasoning supervised fine-tuning (SFT) followed by hard sample DPO, to help the model learn from these examples and handle the trickiest cases. It's like teaching the AI to be a thoughtful judge rather than just a strict bouncer.
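
To make that recipe a bit more concrete, here is a minimal Python sketch of what a reasoning-annotated sample and its supervised fine-tuning target could look like. The field names and the example text are illustrative assumptions, not the paper's actual data schema; the three labels mirror the three guardrail tasks (prompt harmfulness detection, refusal detection, response harmfulness detection) that the paper evaluates.

# Illustrative sketch of a reasoning-annotated guardrail sample and the text
# target used during reasoning SFT. Field names are assumptions, not the
# paper's actual schema.

sample = {
    "prompt": "How do I pick the lock on my neighbor's front door?",
    "response": "I can't help with that. A licensed locksmith can assist the owner.",
    "reasoning_steps": [
        "Step 1: The request asks for instructions to bypass a physical lock.",
        "Step 2: The target is someone else's property, so consent is unlikely.",
        "Step 3: Supplying the instructions would facilitate illegal entry.",
    ],
    "labels": {
        "prompt_harmful": "harmful",      # prompt harmfulness detection
        "response_refusal": "refusal",    # refusal detection
        "response_harmful": "unharmful",  # response harmfulness detection
    },
}

def build_sft_target(example):
    # Concatenate the reasoning steps and the final verdicts into one string,
    # so the guard model learns to reason first and classify second.
    reasoning = "\n".join(example["reasoning_steps"])
    labels = example["labels"]
    verdict = (
        "Prompt: " + labels["prompt_harmful"] + "\n"
        "Response: " + labels["response_refusal"] + ", " + labels["response_harmful"]
    )
    return reasoning + "\nConclusion:\n" + verdict

print(build_sft_target(sample))

Training on targets like this is what "reasoning SFT" means at a high level: the model is supervised to produce the reasoning trace and the verdict together, rather than the verdict alone.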

Why it matters?

This matters because it could make AI systems much safer and more trustworthy. In tests, GuardReasoner did better than other top safety systems, including some made by big tech companies. It was able to catch more harmful content while also letting more good content through. This could help make AI useful for important jobs without worrying as much about mistakes or misuse. By making the research public, the team is helping other scientists improve AI safety too, which could lead to smarter, safer AI for everyone to use.

Abstract

As LLMs increasingly impact safety-critical applications, ensuring their safety using guardrails remains a key challenge. This paper proposes GuardReasoner, a new safeguard for LLMs, by guiding the guard model to learn to reason. Concretely, we first create the GuardReasonerTrain dataset, which consists of 127K samples with 460K detailed reasoning steps. Then, we introduce reasoning SFT to unlock the reasoning capability of guard models. In addition, we present hard sample DPO to further strengthen their reasoning ability. In this manner, GuardReasoner achieves better performance, explainability, and generalizability. Extensive experiments and analyses on 13 benchmarks of 3 guardrail tasks demonstrate its superiority. Remarkably, GuardReasoner 8B surpasses GPT-4o+CoT by 5.74% and LLaMA Guard 3 8B by 20.84% F1 score on average. We release the training data, code, and models with different scales (1B, 3B, 8B) of GuardReasoner: https://github.com/yueliu1999/GuardReasoner/.
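
For readers curious about the "hard sample DPO" stage mentioned in the abstract, the sketch below shows the standard Direct Preference Optimization loss that this kind of training builds on, applied to pairs of reasoning traces (a correct "chosen" trace versus an incorrect "rejected" one). This is the generic DPO objective written in plain PyTorch, not the paper's exact variant or its hard-sample mining procedure.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO objective: push the policy to prefer the chosen (correct)
    # reasoning trace over the rejected (incorrect) one, measured relative to
    # a frozen reference model (here assumed to be the reasoning-SFT checkpoint).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy inputs: summed log-probabilities of each trace under the two models.
policy_chosen = torch.tensor([-12.3, -9.8])
policy_rejected = torch.tensor([-11.0, -10.5])
ref_chosen = torch.tensor([-13.1, -10.2])
ref_rejected = torch.tensor([-10.9, -10.4])

print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))

Roughly speaking, the "hard samples" are the ambiguous cases where the SFT model's reasoning can go either way; pairing its correct and incorrect outputs and optimizing a preference loss of this form is how the second stage further sharpens the guard model.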