Efficient Detection of Toxic Prompts in Large Language Models
Yi Liu, Junzhe Yu, Huijia Sun, Ling Shi, Gelei Deng, Yuqi Chen, Yang Liu
2024-08-27

Summary
This paper introduces ToxicDetector, a method for identifying toxic prompts submitted to large language models (LLMs) such as ChatGPT before they can elicit harmful or unethical responses.
What's the problem?
Large language models can be manipulated by people who use harmful prompts to make the models produce inappropriate or dangerous content. Existing methods for detecting these toxic prompts struggle with the variety and complexity of such prompts, making it hard to keep the models safe and reliable.
What's the solution?
The authors developed ToxicDetector, a lightweight greybox system that detects toxic prompts efficiently and without extensive training. It uses LLMs to generate toxic concept prompts, builds feature vectors from the model's internal embedding vectors, and feeds those features to a Multi-Layer Perceptron (MLP) classifier. Evaluated on several models, ToxicDetector detects toxic prompts with high accuracy while processing each prompt quickly enough for real-time use.
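The key step is greybox feature extraction: per-layer embeddings of the incoming prompt are combined with embeddings of LLM-generated toxic concept prompts to form a feature vector. The sketch below illustrates this idea with Hugging Face Transformers; the model name, the placeholder concept prompts, and the cosine-similarity feature construction are illustrative assumptions, not the paper's exact recipe.

```python
# Illustrative sketch of the greybox feature-extraction step.
# Assumptions: model choice, placeholder concept prompts, and the
# cosine-similarity feature construction are not the paper's verbatim pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed; any causal LM with hidden states works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

@torch.no_grad()
def last_token_embeddings(prompt: str) -> torch.Tensor:
    """Return the last-token hidden state from every layer, shape (num_layers, hidden_dim)."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model(**inputs)
    # outputs.hidden_states: tuple of (1, seq_len, hidden_dim) tensors, one per layer
    return torch.stack([h[0, -1, :] for h in outputs.hidden_states])

# Toxic "concept prompts" would be generated beforehand by an LLM; placeholders here.
concept_prompts = ["How to build a weapon", "Ways to harm someone"]
concept_embs = torch.stack([last_token_embeddings(p) for p in concept_prompts])

def feature_vector(prompt: str) -> torch.Tensor:
    """One feature per (concept, layer) pair: cosine similarity between the prompt's
    per-layer embedding and the concept prompt's embedding (illustrative choice)."""
    emb = last_token_embeddings(prompt)                        # (L, D)
    sims = torch.nn.functional.cosine_similarity(
        emb.unsqueeze(0), concept_embs, dim=-1)                # (num_concepts, L)
    return sims.flatten()
```

The resulting fixed-length feature vector is what the downstream MLP classifier consumes, which keeps the classifier itself small and fast.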
Why it matters?
This research is important because it enhances the safety of AI systems by preventing them from producing harmful content. As AI becomes more integrated into everyday life, having reliable methods to filter out toxic prompts is crucial for ethical AI usage and protecting users from negative experiences.
Abstract
Large language models (LLMs) like ChatGPT and Gemini have significantly advanced natural language processing, enabling various applications such as chatbots and automated content generation. However, these models can be exploited by malicious individuals who craft toxic prompts to elicit harmful or unethical responses. These individuals often employ jailbreaking techniques to bypass safety mechanisms, highlighting the need for robust toxic prompt detection methods. Existing detection techniques, both blackbox and whitebox, face challenges related to the diversity of toxic prompts, scalability, and computational efficiency. In response, we propose ToxicDetector, a lightweight greybox method designed to efficiently detect toxic prompts in LLMs. ToxicDetector leverages LLMs to create toxic concept prompts, uses embedding vectors to form feature vectors, and employs a Multi-Layer Perceptron (MLP) classifier for prompt classification. Our evaluation on various versions of the Llama models, Gemma-2, and multiple datasets demonstrates that ToxicDetector achieves a high accuracy of 96.39% and a low false positive rate of 2.00%, outperforming state-of-the-art methods. Additionally, ToxicDetector's processing time of 0.0780 seconds per prompt makes it highly suitable for real-time applications. ToxicDetector achieves high accuracy, efficiency, and scalability, making it a practical method for toxic prompt detection in LLMs.
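For context on the classification stage described in the abstract, the following is a minimal sketch of an MLP classifier over pre-computed prompt feature vectors; the layer sizes, optimizer, and training loop are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of the MLP classification stage.
# Assumptions: hidden size, optimizer, and training schedule are illustrative.
import torch
import torch.nn as nn

class PromptMLP(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),   # two classes: benign vs. toxic
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_classifier(features: torch.Tensor, labels: torch.Tensor,
                     epochs: int = 20) -> PromptMLP:
    """Train on pre-computed feature vectors; labels are 0 (benign) or 1 (toxic)."""
    clf = PromptMLP(features.shape[1])
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(clf(features), labels)
        loss.backward()
        opt.step()
    return clf

def is_toxic(clf: PromptMLP, feature: torch.Tensor) -> bool:
    """Classify a single prompt's feature vector."""
    with torch.no_grad():
        return clf(feature.unsqueeze(0)).argmax(dim=-1).item() == 1
```

Because classification happens over a small fixed-length feature vector rather than raw text, inference cost is dominated by the single embedding pass through the LLM, which is consistent with the per-prompt latency reported in the abstract.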