
Aligning Diffusion Models with Noise-Conditioned Perception

Alexander Gambashidze, Anton Kulikov, Yuriy Sosnin, Ilya Makarov

2024-06-26

Summary

This paper discusses a new method for improving how diffusion models generate images by aligning them more closely with human preferences. It focuses on using a perceptual objective to enhance the quality and efficiency of the image generation process.

What's the problem?

Diffusion models, which are used to create images from text prompts, often struggle to produce results that align well with what humans find visually appealing. Traditional preference-alignment methods optimize in pixel or VAE latent space, which does not match human perception well, leading to slower training and less satisfying results for users. This disconnect makes it hard for these models to generate images that meet human expectations.

What's the solution?

The authors propose a new approach that uses a perceptual objective within the U-Net embedding space of diffusion models. By fine-tuning models like Stable Diffusion 1.5 and XL with techniques such as Direct Preference Optimization (DPO), Contrastive Preference Optimization (CPO), and supervised fine-tuning (SFT) in this space, they can better align the generated images with human preferences. Their experiments show that this method significantly improves the overall quality of the images while also reducing the computational resources needed for training.
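To make the idea concrete, here is a minimal, hypothetical sketch of a Diffusion-DPO-style preference loss where the denoising error is measured in an embedding space rather than pixel/VAE space. This is not the authors' actual implementation; the function name, argument layout, and the use of plain mean-squared error over flat embedding vectors are all illustrative assumptions.

```python
import math

def mse(a, b):
    """Mean squared error between two flat embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def ncp_dpo_loss(emb_w, emb_l, emb_w_ref, emb_l_ref,
                 target_w, target_l, beta=0.5):
    """Hypothetical sketch: a DPO-style loss where denoising errors are
    computed on U-Net embeddings instead of pixels or VAE latents.

    emb_w / emb_l:         trained model's embeddings for the human-preferred
                           (w) and dispreferred (l) samples
    emb_w_ref / emb_l_ref: frozen reference model's embeddings
    target_w / target_l:   denoising targets in the same embedding space
    beta:                  illustrative preference-strength hyperparameter
    """
    # How much better (or worse) the trained model fits each sample
    # than the frozen reference does.
    margin = ((mse(emb_w, target_w) - mse(emb_w_ref, target_w))
              - (mse(emb_l, target_l) - mse(emb_l_ref, target_l)))
    # Push errors down on the preferred sample and up on the
    # dispreferred one, relative to the reference model.
    return -math.log(1.0 / (1.0 + math.exp(beta * margin)))
```

When the trained model matches the preferred target better than the reference while fitting the dispreferred one worse, the margin is negative and the loss drops below log 2 (the neutral value where the model and reference behave identically).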

Why it matters?

This research is important because it enhances the ability of diffusion models to create images that people actually want to see, making them more useful for applications like art generation, advertising, and content creation. By improving how these models understand and meet human preferences, it opens up new possibilities for creative industries and AI applications.

Abstract

Recent advancements in human preference optimization, initially developed for Language Models (LMs), have shown promise for text-to-image Diffusion Models, enhancing prompt alignment, visual appeal, and user preference. Unlike LMs, Diffusion Models typically optimize in pixel or VAE space, which does not align well with human perception, leading to slower and less efficient training during the preference alignment stage. We propose using a perceptual objective in the U-Net embedding space of the diffusion model to address these issues. Our approach involves fine-tuning Stable Diffusion 1.5 and XL using Direct Preference Optimization (DPO), Contrastive Preference Optimization (CPO), and supervised fine-tuning (SFT) within this embedding space. This method significantly outperforms standard latent-space implementations across various metrics, including quality and computational cost. For SDXL, our approach provides 60.8% general preference, 62.2% visual appeal, and 52.1% prompt following against original open-sourced SDXL-DPO on the PartiPrompts dataset, while significantly reducing compute. Our approach not only improves the efficiency and quality of human preference alignment for diffusion models but is also easily integrable with other optimization techniques. The training code and LoRA weights will be available here: https://huggingface.co/alexgambashidze/SDXL_NCP-DPO_v0.1