WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs
Seungju Han, Kavel Rao, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, Nouha Dziri
2024-06-27

Summary
This paper introduces WildGuard, a new tool designed to help monitor and ensure the safe use of large language models (LLMs). It focuses on identifying harmful user prompts, detecting risks in the model's responses, and evaluating how well the model refuses inappropriate requests.
What's the problem?
As AI language models become more popular, there are growing concerns about their potential misuse. These models can sometimes generate harmful or misleading content if not properly moderated. Existing moderation tools often struggle to identify complex issues like 'jailbreaks' (ways to trick the model into ignoring its safety guidelines) and may not effectively evaluate how well the model can refuse unsafe requests. This can lead to serious safety risks in applications where these models are used.
What's the solution?
To tackle these challenges, the authors developed WildGuard, which includes a new dataset called WildGuardMix. This dataset contains 92,000 examples that cover a wide range of scenarios, including both straightforward and tricky prompts that could lead to harmful outputs. WildGuard is designed to automatically detect malicious intent in user prompts, assess the safety of the model's responses, and measure how often the model refuses inappropriate requests. The results show that WildGuard performs better than other existing tools and even matches or exceeds the performance of advanced models like GPT-4 in various tests.
Why it matters?
This research is important because it provides a comprehensive solution for moderating AI language models, ensuring they are used safely and responsibly. By improving how these models detect harmful content and refuse inappropriate requests, WildGuard can help maintain trust in AI technologies and make them safer for use in sensitive areas like healthcare, finance, and social media.
Abstract
We introduce WildGuard -- an open, light-weight moderation tool for LLM safety that achieves three goals: (1) identifying malicious intent in user prompts, (2) detecting safety risks of model responses, and (3) determining model refusal rate. Together, WildGuard serves the increasing needs for automatic safety moderation and evaluation of LLM interactions, providing a one-stop tool with enhanced accuracy and broad coverage across 13 risk categories. While existing open moderation tools such as Llama-Guard2 score reasonably well in classifying straightforward model interactions, they lag far behind a prompted GPT-4, especially in identifying adversarial jailbreaks and in evaluating models' refusals, a key measure for evaluating safety behaviors in model responses. To address these challenges, we construct WildGuardMix, a large-scale and carefully balanced multi-task safety moderation dataset with 92K labeled examples that cover vanilla (direct) prompts and adversarial jailbreaks, paired with various refusal and compliance responses. WildGuardMix is a combination of WildGuardTrain, the training data of WildGuard, and WildGuardTest, a high-quality human-annotated moderation test set with 5K labeled items covering broad risk scenarios. Through extensive evaluations on WildGuardTest and ten existing public benchmarks, we show that WildGuard establishes state-of-the-art performance in open-source safety moderation across all the three tasks compared to ten strong existing open-source moderation models (e.g., up to 26.4% improvement on refusal detection). Importantly, WildGuard matches and sometimes exceeds GPT-4 performance (e.g., up to 3.9% improvement on prompt harmfulness identification). WildGuard serves as a highly effective safety moderator in an LLM interface, reducing the success rate of jailbreak attacks from 79.8% to 2.4%.