DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails
Yihe Deng, Yu Yang, Junkai Zhang, Wei Wang, Bo Li
2025-02-10
Summary
This paper introduces DuoGuard, a new system that helps make AI language models safer and more responsible across different languages. It uses a two-player game approach to build better safety classifiers (guardrails) for these models.
What's the problem?
As AI language models get more advanced, there's a growing need to make sure they don't produce harmful or illegal content. While there's a lot of safety data for English, there isn't much for other languages. This makes it hard to create good safety measures for AI models in multiple languages.
What's the solution?
The researchers created DuoGuard, which uses two AI players in a game-like setup. One player (a generator) tries to produce tricky, potentially unsafe content, while the other (a guardrail classifier) tries to detect it. As they compete, both get better at their jobs. This process creates high-quality synthetic data that can train safety classifiers for multiple languages. DuoGuard performs better than existing methods, runs faster at inference, and works well even as a much smaller model.
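The competitive loop described above can be sketched with a toy simulation. This is an illustrative sketch only, not the paper's actual method: the `ThresholdGuard` and `Generator` classes, the 1-D "safety feature", and the update rule are all simplified stand-ins for the real LLM-based generator and guardrail classifier.

```python
import random

random.seed(0)

# Toy data model: "unsafe" prompts have a feature x near 1.0, "safe" near -1.0.
# The generator learns to emit borderline unsafe examples (hard cases) near the
# guard's decision boundary; the guard retrains on them each round.

def sample_real(n):
    """Real labeled data: (feature, label) with label 1 = unsafe."""
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        x = random.gauss(1.0 if label else -1.0, 0.5)
        data.append((x, int(label)))
    return data

class ThresholdGuard:
    """Guardrail player: predicts unsafe (1) iff x > threshold."""
    def __init__(self):
        self.threshold = 0.0

    def predict(self, x):
        return int(x > self.threshold)

    def fit(self, data):
        # Pick the threshold (among observed features) maximizing accuracy.
        candidates = sorted(x for x, _ in data)
        self.threshold = max(
            candidates,
            key=lambda t: sum(int(x > t) == y for x, y in data),
        )

class Generator:
    """Generator player: emits unsafe examples centered at mu, and shifts mu
    toward the guard's boundary, i.e. toward harder still-unsafe examples."""
    def __init__(self):
        self.mu = 1.0

    def sample(self, n):
        return [(random.gauss(self.mu, 0.3), 1) for _ in range(n)]

    def update(self, guard, lr=0.2):
        self.mu += lr * (guard.threshold - self.mu)

guard, gen = ThresholdGuard(), Generator()
for _ in range(5):
    synthetic = gen.sample(100)              # generator proposes hard cases
    guard.fit(sample_real(200) + synthetic)  # guard retrains on real + synthetic
    gen.update(guard)                        # generator adapts adversarially
```

After a few rounds, the generator's samples cluster near the guard's decision boundary, so each round of synthetic data targets exactly the cases the guard currently finds hardest; in DuoGuard this co-evolution is driven by RL on actual language models rather than a scalar update.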
Why it matters?
This matters because it makes AI language models safer and more reliable in many languages, not just English. It helps prevent these models from producing harmful content across different cultures and languages. By generating synthetic training data, it solves the problem of not having enough real safety data in many languages. This could lead to more responsible AI use worldwide and help make advanced language technology safely available to more people.
Abstract
The rapid advancement of large language models (LLMs) has increased the need for guardrail models to ensure responsible use, particularly in detecting unsafe and illegal content. While substantial safety data exist in English, multilingual guardrail modeling remains underexplored due to the scarcity of open-source safety data in other languages. To address this gap, we propose a novel two-player Reinforcement Learning (RL) framework, where a generator and a guardrail model co-evolve adversarially to produce high-quality synthetic data for multilingual guardrail training. We theoretically formalize this interaction as a two-player game, proving convergence to a Nash equilibrium. Empirical evaluations show that our model, DuoGuard, outperforms state-of-the-art models, achieving nearly 10% improvement over LlamaGuard3 (8B) on English benchmarks while being 4.5x faster at inference with a significantly smaller model (0.5B). We achieve substantial advancements in multilingual safety tasks, particularly in addressing the imbalance for lower-resource languages in a collected real dataset. Ablation studies emphasize the critical role of synthetic data generation in bridging the imbalance in open-source data between English and other languages. These findings establish a scalable and efficient approach to synthetic data generation, paving the way for improved multilingual guardrail models to enhance LLM safety. Code, model, and data will be open-sourced at https://github.com/yihedeng9/DuoGuard.
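The two-player game mentioned in the abstract can be written, in a generic and illustrative form (the paper's exact objective may differ), as a min-max problem between the generator's sampling distribution $p_\theta$ and the guardrail classifier $f_\phi$:

```latex
\min_{\phi}\,\max_{\theta}\;
\mathbb{E}_{x \sim p_\theta}\!\left[\,\ell\big(f_\phi(x),\, y(x)\big)\,\right],
```

where $y(x)$ is the true safety label of example $x$ and $\ell$ is the classification loss. The generator ($\theta$) seeks examples that maximize the guard's loss, while the guard ($\phi$) minimizes it; a Nash equilibrium is a pair $(\theta^\*, \phi^\*)$ at which neither player can improve its objective by deviating unilaterally.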