TRCE: Towards Reliable Malicious Concept Erasure in Text-to-Image Diffusion Models

Ruidong Chen, Honglin Guo, Lanjun Wang, Chenyu Zhang, Weizhi Nie, An-An Liu

2025-03-11

Summary

This paper introduces TRCE, a method that stops AI image generators from creating harmful content (such as inappropriate images) while keeping their ability to make normal pictures intact.

What's the problem?

Current AI image tools can still generate harmful content even after safety tweaks, especially when users craft tricky prompts or metaphors to slip past filters.

What's the solution?

TRCE uses a two-step fix: first, it rewrites the meaning of tricky prompts into safer versions inside the model's text-understanding layers; then, it adjusts how the model builds images step by step, steering early generation away from harmful results.
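The first step can be illustrated with a toy sketch: fine-tune a cross-attention projection so that a malicious prompt's text embedding maps to the output the layer would produce for a contextually similar but safe prompt. This is not the authors' code; the matrix size, learning rate, and variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))   # toy stand-in for a cross-attention weight matrix
e_mal = rng.normal(size=d)    # [EoT] embedding of a malicious prompt (assumed)
e_safe = rng.normal(size=d)   # [EoT] embedding of a safe, similar prompt (assumed)
target = W @ e_safe           # the layer's output for the safe prompt

# Gradient descent on ||W e_mal - target||^2: after optimization, the layer
# responds to the malicious prompt as if it were the safe one.
lr = 0.1 / (e_mal @ e_mal)    # step size scaled for stable convergence
for _ in range(500):
    residual = W @ e_mal - target
    W -= lr * 2.0 * np.outer(residual, e_mal)
```

Note that this naive update also perturbs the layer's output for other prompts; the actual method additionally has to preserve the model's normal generation ability, which this sketch does not address.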

Why does it matter?

This makes AI image tools safer for everyone by blocking bad content more reliably without breaking their ability to create useful or creative images.

Abstract

Recent advances in text-to-image diffusion models enable photorealistic image generation, but they also risk producing malicious content, such as NSFW images. To mitigate this risk, concept erasure methods are studied that enable the model to unlearn specific concepts. However, current studies struggle to fully erase malicious concepts implicitly embedded in prompts (e.g., metaphorical expressions or adversarial prompts) while preserving the model's normal generation capability. To address this challenge, our study proposes TRCE, a two-stage concept erasure strategy that achieves an effective trade-off between reliable erasure and knowledge preservation. First, TRCE erases the malicious semantics implicitly embedded in textual prompts. By identifying a critical mapping objective (i.e., the [EoT] embedding), we optimize the cross-attention layers to map malicious prompts to contextually similar prompts with safe concepts. This step prevents the model from being overly influenced by malicious semantics during the denoising process. Then, exploiting the deterministic properties of the diffusion model's sampling trajectory, TRCE further steers the early denoising prediction toward the safe direction and away from the unsafe one through contrastive learning, further avoiding the generation of malicious content. Finally, we conduct comprehensive evaluations of TRCE on multiple malicious concept erasure benchmarks, and the results demonstrate its effectiveness in erasing malicious concepts while better preserving the model's original generation ability. The code is available at: http://github.com/ddgoodgood/TRCE. CAUTION: This paper includes model-generated content that may contain offensive material.
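The second stage's intuition can be sketched with a guidance-style combination: nudge an early denoising prediction toward a "safe" prediction and away from an "unsafe" one. The paper trains this behavior into the model via contrastive learning; the function below, with its assumed `alpha`/`beta` weights, only illustrates the steering direction, not the actual training objective.

```python
import numpy as np

def steer_prediction(eps, eps_safe, eps_unsafe, alpha=0.5, beta=0.5):
    """Shift a noise prediction toward the safe direction and away from
    the unsafe one (illustrative weights, not values from the paper)."""
    return eps + alpha * (eps_safe - eps) - beta * (eps_unsafe - eps)

# Toy 4-dimensional "predictions" standing in for noise tensors.
eps = np.zeros(4)
eps_safe = np.array([1.0, 0.0, 0.0, 0.0])
eps_unsafe = np.array([-1.0, 0.0, 0.0, 0.0])
steered = steer_prediction(eps, eps_safe, eps_unsafe)
# steered ends up closer to eps_safe and farther from eps_unsafe than eps was
```

Because the early steps of a diffusion sampling trajectory largely determine the final image, applying this kind of correction early is what lets the method avoid malicious content without rewriting the rest of the denoising process.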