Personalized Safety Alignment for Text-to-Image Diffusion Models
Yu Lei, Jinbin Bai, Qingyu Shi, Aosong Feng, Kaidong Yu
2025-08-05
Summary
This paper introduces a way to make text-to-image generation models safer by customizing how they handle content according to each user's personal safety preferences.
What's the problem?
Current safety systems treat all users the same, ignoring that what counts as safe or harmful varies with a person's age, beliefs, and mental health. A one-size-fits-all filter can therefore block content one user finds acceptable while failing to protect another user from content they consider harmful.
What's the solution?
The paper introduces Personalized Safety Alignment (PSA), a framework that conditions the model on a user-specific safety profile during image generation, adapting its behavior to individual safety needs without reducing image quality.
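The summary does not include the authors' implementation, but the core idea of conditioning generation on a user's safety profile can be sketched in a simplified form. Everything below is a hypothetical illustration: `embed_profile`, `personalized_guidance`, and the vector shapes are assumptions, not the paper's actual method.

```python
import numpy as np


def embed_profile(profile: dict, dim: int = 8) -> np.ndarray:
    """Hypothetical profile encoder: hash (key, value) pairs into a fixed-size,
    unit-normalized vector. A real system would use a learned encoder."""
    vec = np.zeros(dim)
    for key, value in profile.items():
        vec[hash((key, value)) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)


def personalized_guidance(text_emb: np.ndarray,
                          profile_emb: np.ndarray,
                          unsafe_emb: np.ndarray,
                          strength: float = 1.0) -> np.ndarray:
    """Steer the text conditioning away from an 'unsafe' direction, scaled by
    how strongly this user's profile flags that direction as sensitive."""
    sensitivity = float(profile_emb @ unsafe_emb)  # per-user scaling factor
    return text_emb - strength * sensitivity * unsafe_emb


# Toy usage: a user maximally sensitive to one unsafe concept direction.
dim = 8
unsafe_dir = np.eye(dim)[0]          # stand-in for an unsafe-concept embedding
user_profile = unsafe_dir            # this user fully flags that concept
prompt_emb = np.ones(dim)            # stand-in for a text-prompt embedding
guided = personalized_guidance(prompt_emb, user_profile, unsafe_dir)
```

The key design point the sketch mirrors is that the safety adjustment is a function of the user profile, so the same prompt yields different conditioning for different users rather than passing through one global filter.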
Why does it matter?
This matters because it helps make generative AI tools more respectful and adaptable to different users, ensuring that the images they create align better with what each person considers safe and appropriate.
Abstract
A personalized safety alignment framework integrates user-specific profiles into text-to-image diffusion models to better align generated content with individual safety preferences.