Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique
Tej Deep Pala, Vernon Y. H. Toh, Rishabh Bhardwaj, Soujanya Poria
2024-08-21

Summary
This paper introduces Ferret, a new method for automated red teaming, a technique for testing the safety of AI models by simulating adversarial attacks.
What's the problem?
As large language models (LLMs) are deployed more widely, ensuring their safety is essential. Current methods for testing these models are often slow, cover a narrow range of attack categories, and demand substantial computing resources, which makes it difficult to identify and fix vulnerabilities efficiently.
What's the solution?
Ferret builds on Rainbow Teaming by generating multiple adversarial prompt mutations (candidate attack prompts) in each iteration and using a scoring function to rank them and keep the most effective one. The authors explore several scoring functions, including reward models, Llama Guard, and LLM-as-a-judge, to rank mutations by their potential harm and speed up the search for harmful prompts; the sketch below illustrates this select-the-best loop. With a reward model as the scoring function, Ferret reaches a 95% overall attack success rate (46% higher than Rainbow Teaming) and cuts the time needed to reach a 90% success rate by 15.2% compared to the baseline.
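To make the idea concrete, here is a minimal sketch of the "propose several mutations, score them, keep the best" step. The helpers mutate_prompt and harm_score are hypothetical stand-ins (in practice they would call an LLM mutator and a reward-model scorer); none of the names here are Ferret's actual API.

```python
import random


def mutate_prompt(prompt: str, category: str) -> str:
    """Hypothetical stand-in for an LLM-based mutator that rewrites
    `prompt` toward a given attack/risk category."""
    return f"[{category}] {prompt}"  # placeholder for a real LLM call


def harm_score(prompt: str, response: str) -> float:
    """Hypothetical stand-in for a reward-model scorer that rates how
    harmful the target model's response to `prompt` is."""
    return random.random()  # placeholder for a real reward-model call


def ferret_step(seed_prompt: str, categories: list[str],
                target_llm, num_mutations: int = 4) -> str:
    """One search iteration in the style of Ferret: propose several
    adversarial mutations, score each, and keep the highest-scoring one."""
    candidates = [
        mutate_prompt(seed_prompt, random.choice(categories))
        for _ in range(num_mutations)
    ]
    scored = [
        (harm_score(p, target_llm(p)), p)  # rank by estimated harm
        for p in candidates
    ]
    return max(scored)[1]  # the best-scoring candidate survives
```

In the full method the surviving prompts feed a quality-diversity archive, as in Rainbow Teaming; the sketch only shows the scoring-and-selection step that Ferret adds on top.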
Why it matters?
This research is important because it enhances the ability to secure AI systems by making automated testing faster and more effective. By improving how we can identify weaknesses in language models, Ferret helps ensure that these technologies are safer for real-world applications, which is crucial as they become more integrated into everyday life.
Abstract
In today's era, where large language models (LLMs) are integrated into numerous real-world applications, ensuring their safety and robustness is crucial for responsible AI usage. Automated red-teaming methods play a key role in this process by generating adversarial attacks to identify and mitigate potential vulnerabilities in these models. However, existing methods often struggle with slow performance, limited categorical diversity, and high resource demands. While Rainbow Teaming, a recent approach, addresses the diversity challenge by framing adversarial prompt generation as a quality-diversity search, it remains slow and requires a large fine-tuned mutator for optimal performance. To overcome these limitations, we propose Ferret, a novel approach that builds upon Rainbow Teaming by generating multiple adversarial prompt mutations per iteration and using a scoring function to rank and select the most effective adversarial prompt. We explore various scoring functions, including reward models, Llama Guard, and LLM-as-a-judge, to rank adversarial mutations based on their potential harm to improve the efficiency of the search for harmful mutations. Our results demonstrate that Ferret, utilizing a reward model as a scoring function, improves the overall attack success rate (ASR) to 95%, which is 46% higher than Rainbow Teaming. Additionally, Ferret reduces the time needed to achieve a 90% ASR by 15.2% compared to the baseline and generates adversarial prompts that are transferable, i.e., effective on other LLMs of larger size. Our code is available at https://github.com/declare-lab/ferret.
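As a concrete illustration of reward-based scoring, the snippet below rates a (prompt, response) pair with an off-the-shelf sequence-classification reward model from Hugging Face. The checkpoint is a placeholder chosen for illustration and is not necessarily the reward model used in the paper; only a scalar score per pair is assumed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint: any reward model with a single scalar head that
# accepts (prompt, response) pairs would work; this may differ from the
# model actually used in the paper.
MODEL_NAME = "OpenAssistant/reward-model-deberta-v3-large-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def score_pair(prompt: str, response: str) -> float:
    """Return a scalar reward for the target model's response.
    For red teaming, candidate prompts can be ranked by this score
    (e.g., lower harmlessness reward flags a more successful attack)."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0].item()
```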