When Preferences Diverge: Aligning Diffusion Models with Minority-Aware Adaptive DPO
Lingfan Zhang, Chen Liu, Chengming Xu, Kai Hu, Donghao Luo, Chengjie Wang, Yanwei Fu, Yuan Yao
2025-03-24
Summary
This paper is about improving how AI creates images, especially when different people have different ideas about what makes a good image.
What's the problem?
AI image generators are fine-tuned to match "universal" human preferences, but preferences are subjective: some annotations in preference datasets come from minority viewpoints that disagree with the majority, and treating every label as equally reliable can degrade the model's performance.
What's the solution?
The researchers developed Adaptive-DPO, a training method that estimates how reliable each preference label is (using annotator confidence and agreement between annotators) and adjusts the loss accordingly: the model learns confidently-held majority preferences more strongly, while the harmful influence of minority or noisy labels is reduced.
Why it matters?
This work matters because real preference data is messy and people genuinely disagree; training methods that account for this disagreement produce image generators that are more robust and better aligned with what people actually prefer.
Abstract
In recent years, the field of image generation has witnessed significant advancements, particularly in fine-tuning methods that align models with universal human preferences. This paper explores the critical role of preference data in the training process of diffusion models, particularly in the context of Diffusion-DPO and its subsequent adaptations. We investigate the complexities surrounding universal human preferences in image generation, highlighting the subjective nature of these preferences and the challenges posed by minority samples in preference datasets. Through pilot experiments, we demonstrate the existence of minority samples and their detrimental effects on model performance. We propose Adaptive-DPO -- a novel approach that incorporates a minority-instance-aware metric into the DPO objective. This metric, which includes intra-annotator confidence and inter-annotator stability, distinguishes between majority and minority samples. We introduce an Adaptive-DPO loss function which improves the DPO loss in two ways: enhancing the model's learning of majority labels while mitigating the negative impact of minority samples. Our experiments demonstrate that this method effectively handles both synthetic minority data and real-world preference data, paving the way for more effective training methodologies in image generation tasks.
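The abstract describes an Adaptive-DPO loss that reinforces majority labels while softening the effect of suspected minority samples. A minimal sketch of that weighting idea, assuming a per-sample reliability score in [0, 1] that stands in for the paper's intra-annotator confidence and inter-annotator stability metric (the function names and the exact weighting scheme here are illustrative, not the authors' implementation):

```python
import math

def _log_sigmoid(x: float) -> float:
    """Numerically stable log(sigmoid(x))."""
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def dpo_loss(margin: float, beta: float = 0.1) -> float:
    """Standard DPO-style loss: -log sigmoid(beta * margin), where `margin`
    is the implicit reward difference (preferred minus rejected) for one pair."""
    return -_log_sigmoid(beta * margin)

def adaptive_dpo_loss(margin: float, reliability: float, beta: float = 0.1) -> float:
    """Reliability-weighted DPO loss (illustrative sketch).

    reliability near 1.0: confident majority label, learned at full strength.
    reliability near 0.0: suspected minority or noisy label; the target is
    softened toward the opposite preference, limiting its negative impact.
    """
    return (-reliability * _log_sigmoid(beta * margin)
            - (1.0 - reliability) * _log_sigmoid(-beta * margin))
```

With `reliability = 1.0` this reduces exactly to the plain DPO loss; lower reliability shrinks the gradient a disputed label contributes. The paper's actual objective is richer (it also adapts how majority labels are emphasized), which this sketch does not attempt to reproduce.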