<think> So let's replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs
Sergey Pletenev, Daniil Moskovskiy, Alexander Panchenko
2025-09-11
Summary
This paper investigates whether artificial intelligence, specifically large language models, can create realistic toxic text to help train other AI systems to identify and remove harmful language online.
What's the problem?
Currently, building AI that can 'detoxify' text – that is, rewrite offensive or harmful content into neutral language – relies heavily on humans collecting and labeling toxic examples. This is slow, expensive, and emotionally taxing for the annotators. The question is whether AI can generate enough realistic toxic examples to replace some of this human effort; the paper finds that current models struggle to do this well.
What's the solution?
Researchers used two powerful language models, Llama 3 and Qwen (with activation patching), to create synthetic toxic counterparts for neutral texts drawn from two datasets, ParaDetox and SST-2. They then fine-tuned separate detoxification models, some on the AI-generated toxic data and others on human-labeled toxic data, and compared how well each model rewrote toxic text into neutral text.
Why it matters?
The results showed that models trained on AI-generated toxic text performed consistently worse, with a drop of up to 30% on joint detoxification metrics, than models trained on human-labeled data. The root cause is a lexical diversity gap: the generating LLMs fall back on a small, repetitive vocabulary of insults and fail to capture the range and nuance of how humans actually express toxicity. This highlights that while LLMs are good at *creating* text, they are not yet good at creating *realistic* toxic text, so diverse, human-annotated data remains crucial for building robust systems to combat online harm.
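The lexical diversity gap described above can be made concrete with a standard distinct-n measure: the fraction of unique word n-grams in a corpus, where a lower score means more repetition. This is a minimal sketch using toy, hypothetical corpora; it is not the paper's actual data or evaluation code.

```python
def distinct_n(texts, n=1):
    """Fraction of unique word n-grams across a corpus (higher = more lexically diverse)."""
    ngrams = []
    for text in texts:
        tokens = text.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Toy, hypothetical corpora: a repetitive "synthetic" set vs. a more varied "human" set
synthetic = ["you are a stupid idiot", "what a stupid idiot", "such a stupid idiot"]
human = ["what a clueless take", "utterly brainless reply", "this is moronic drivel"]

# The synthetic corpus reuses the same few insult words, so it scores lower
assert distinct_n(human) > distinct_n(synthetic)
```

A diversity score like this, computed over the generated toxic corpus versus the human-written one, is one simple way to detect the kind of repetitive insult vocabulary the paper reports.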
Abstract
Modern Large Language Models (LLMs) are excellent at generating synthetic data. However, their performance in sensitive domains such as text detoxification has not received proper attention from the scientific community. This paper explores the possibility of using LLM-generated synthetic toxic data as an alternative to human-generated data for training models for detoxification. Using Llama 3 and Qwen activation-patched models, we generated synthetic toxic counterparts for neutral texts from ParaDetox and SST-2 datasets. Our experiments show that models fine-tuned on synthetic data consistently perform worse than those trained on human data, with a drop in performance of up to 30% in joint metrics. The root cause is identified as a critical lexical diversity gap: LLMs generate toxic content using a small, repetitive vocabulary of insults that fails to capture the nuances and variety of human toxicity. These findings highlight the limitations of current LLMs in this domain and emphasize the continued importance of diverse, human-annotated data for building robust detoxification systems.