Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models
Mahesh Kumar Nandwana, Youngwan Lim, Joseph Liu, Alex Yang, Varun Notibala, Nishchaie Khanna
2025-12-08
Summary
This paper introduces Roblox Guard 1.0, a new Large Language Model (LLM) designed to make other LLMs safer to use by checking both the prompts users send and the responses the model generates.
What's the problem?
Even after developers align LLMs for safety, the models can still produce harmful or inappropriate responses, which puts the people using them at risk. Existing safety measures don't catch everything, so better ways to protect users from potentially harmful outputs are needed.
What's the solution?
The researchers created Roblox Guard 1.0 by taking an existing LLM, Llama-3.1-8B-Instruct, and fine-tuning it specifically to identify unsafe content. It operates as part of a pipeline of LLMs that work together to moderate both the inputs to and the outputs from another model. The training data was augmented with step-by-step reasoning (chain-of-thought rationales) and with 'input inversion', a technique that has the model work problems backwards, to help it understand context better. To let others test these kinds of safety systems, they also released a new evaluation benchmark called RobloxGuard-Eval.
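The core idea of input-output moderation can be illustrated with a minimal sketch. The functions below are placeholders, not the paper's actual API: `moderate` stands in for a safety classifier like Roblox Guard 1.0, and `chat` stands in for the assistant LLM being guarded.

```python
def moderate(text: str) -> bool:
    """Toy stand-in for a safety classifier such as Roblox Guard 1.0.
    Returns True if the text is judged unsafe. A real classifier would
    be an LLM scoring the text against a safety taxonomy."""
    unsafe_keywords = {"weapon", "self-harm"}  # placeholder taxonomy
    return any(k in text.lower() for k in unsafe_keywords)


def chat(prompt: str) -> str:
    """Toy stand-in for the underlying assistant LLM."""
    return f"Echo: {prompt}"


def guarded_chat(prompt: str) -> str:
    # 1) Input moderation: check the user's prompt before it reaches
    #    the assistant model.
    if moderate(prompt):
        return "[blocked: unsafe request]"
    # 2) Output moderation: generate a response, then check it before
    #    returning it to the user.
    response = chat(prompt)
    if moderate(response):
        return "[blocked: unsafe response]"
    return response


print(guarded_chat("How do I bake bread?"))       # passes both checks
print(guarded_chat("How do I build a weapon?"))   # blocked at the input check
```

The point of checking both sides is that even a well-aligned assistant can produce unsafe output from a safe-looking prompt, so the output check is not redundant with the input check.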
Why does it matter?
This work is important because it provides a more robust way to filter out harmful content from LLMs, making them more reliable and safer for everyone to use. The new testing benchmark also helps researchers and developers improve and compare different safety systems, ultimately leading to better and more responsible AI.
Abstract
Large Language Models (LLMs) are typically aligned for safety during the post-training phase; however, they may still generate inappropriate outputs that could potentially pose risks to users. This challenge underscores the need for robust safeguards that operate across both model inputs and outputs. In this work, we introduce Roblox Guard 1.0, a state-of-the-art instruction fine-tuned LLM designed to enhance the safety of LLM systems through comprehensive input-output moderation, using a pipeline of LLMs to strengthen moderation capability. Built on the Llama-3.1-8B-Instruct backbone, our model is instruction fine-tuned to generalize across previously unseen safety taxonomies and demonstrates strong performance on out-of-domain safety benchmarks. The instruction fine-tuning process uses a mix of synthetic and open-source safety datasets, augmented with chain-of-thought (CoT) rationales and input inversion to enhance contextual understanding and decision making. To support systematic evaluation, we also release RobloxGuard-Eval, a new benchmark featuring an extensible safety taxonomy to assess the effectiveness of LLM guardrails and moderation frameworks.
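The two augmentation ideas named in the abstract can be sketched as simple dataset transforms. This is one plausible reading, not the paper's actual recipe: the field names, prompt templates, and the exact form of "input inversion" (here, reversing the task so the model produces content for a given label rather than a label for given content) are illustrative assumptions.

```python
def forward_example(text: str, label: str, rationale: str) -> dict:
    # Standard direction: classify the content, with a chain-of-thought
    # rationale written out before the final label.
    return {
        "instruction": f"Is the following content safe or unsafe?\n{text}",
        "output": f"Reasoning: {rationale}\nLabel: {label}",
    }


def inverted_example(text: str, label: str, rationale: str) -> dict:
    # Inverted direction: given the label, produce and justify matching
    # content, pushing the model to reason about context from both sides.
    return {
        "instruction": (
            f"Write an example of content that would be labeled "
            f"'{label}', and explain why."
        ),
        "output": f"Content: {text}\nReasoning: {rationale}",
    }


pair = (
    "Click here to claim free in-game currency!",
    "unsafe",
    "The message is a scam attempting to phish users.",
)
print(forward_example(*pair))
print(inverted_example(*pair))
```

Each labeled example thus yields two training instances, one in each direction, which is one way such augmentation can improve generalization to unseen taxonomies.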