
Reverse Personalization

Han-Wei Kung, Tuomas Varanka, Nicu Sebe

2025-12-30

Summary

This paper introduces a new way to remove or alter identifying facial features in AI-generated images, while still allowing control over other attributes such as hair color or age.

What's the problem?

Current AI methods for changing faces in images rely either on the person already being well represented in the AI model, or on retraining the model specifically for that person. This is a problem because it limits who can be 'anonymized' and adds significant extra work. Existing methods also offer little control over *how* the face is changed, only that it ends up different.

What's the solution?

The researchers developed a technique called 'reverse personalization'. Instead of using text prompts to tell the AI what to change, they directly manipulate the image itself using a process called conditional diffusion inversion. They also added a way to guide the changes based on the person's identity, so it works even for people the AI hasn't 'seen' before. This allows for controlled changes to facial attributes during the anonymization process.
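To make "diffusion inversion" concrete, here is a minimal toy sketch of deterministic DDIM inversion, the general mechanism the paper builds on: an image is mapped back into the model's latent space by running the sampler in reverse, and regenerating from that latent recovers the image. This is not the authors' implementation; the `eps_model` below is a placeholder (it returns zeros) standing in for the identity-conditioned noise predictor the paper would use.

```python
import numpy as np

def ddim_step(x, eps, a_t, a_next):
    """One deterministic DDIM step between cumulative alphas a_t and a_next."""
    # Predict the clean image implied by the current noise estimate,
    # then re-noise it to the target noise level.
    x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps

def ddim_invert(x0, eps_model, alphas):
    """Map a clean image to a latent by running the DDIM steps in reverse order."""
    x = x0
    for i in range(len(alphas) - 1):
        x = ddim_step(x, eps_model(x, i), alphas[i], alphas[i + 1])
    return x

def ddim_sample(xT, eps_model, alphas):
    """Regenerate an image from an inverted latent by running DDIM forward."""
    x = xT
    for i in reversed(range(len(alphas) - 1)):
        x = ddim_step(x, eps_model(x, i + 1), alphas[i + 1], alphas[i])
    return x

# Toy setup: cumulative alphas decreasing from 1.0 (clean) to 0.1 (noisy),
# and a placeholder noise predictor where a conditional network would go.
alphas = np.linspace(1.0, 0.1, 10)
eps_model = lambda x, t: np.zeros_like(x)

x0 = np.random.default_rng(0).normal(size=(4, 4))
xT = ddim_invert(x0, eps_model, alphas)      # image -> latent
recon = ddim_sample(xT, eps_model, alphas)   # latent -> image
```

With a predictor this simple the round trip is exact; in the paper's setting, the interesting part is that editing the conditioning (here, the identity signal) between inversion and sampling changes the identity while keeping the rest of the image.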

Why it matters?

This research is important because it provides a more flexible and effective way to protect people's privacy in AI-generated images. It allows for removing identifying features without needing specific training data for each person, and it gives users control over how the face is altered, leading to better and more useful anonymization techniques.

Abstract

Recent text-to-image diffusion models have demonstrated remarkable generation of realistic facial images conditioned on textual prompts and human identities, enabling the creation of personalized facial imagery. However, existing prompt-based methods for removing or modifying identity-specific features rely either on the subject being well-represented in the pre-trained model or require model fine-tuning for specific identities. In this work, we analyze the identity generation process and introduce a reverse personalization framework for face anonymization. Our approach leverages conditional diffusion inversion, allowing direct manipulation of images without using text prompts. To generalize beyond subjects in the model's training data, we incorporate an identity-guided conditioning branch. Unlike prior anonymization methods, which lack control over facial attributes, our framework supports attribute-controllable anonymization. We demonstrate that our method achieves a state-of-the-art balance between identity removal, attribute preservation, and image quality. Source code and data are available at https://github.com/hanweikung/reverse-personalization.