SePPO: Semi-Policy Preference Optimization for Diffusion Alignment
Daoan Zhang, Guangchen Lan, Dong-Jun Han, Wenlin Yao, Xiaoman Pan, Hongming Zhang, Mingxiao Li, Pengcheng Chen, Yu Dong, Christopher Brinton, Jiebo Luo
2024-10-08

Summary
This paper introduces SePPO, a new method for aligning diffusion models that generate images and videos with human preferences, without relying on reward models or large amounts of paired, human-annotated preference data.
What's the problem?
Current methods that use reinforcement learning from human feedback (RLHF) to improve diffusion models face challenges. On-policy strategies are limited by the generalization capability of the reward model they rely on. Off-policy methods, on the other hand, require large amounts of hard-to-obtain data pairing human preference judgments with generated images, making them difficult to apply in practice.
What's the solution?
The authors propose Semi-Policy Preference Optimization (SePPO), which aligns diffusion models with human preferences without needing reward models or large paired datasets. SePPO uses previous checkpoints of the model as reference models and lets them generate on-policy reference samples, which stand in for the "losing images" in preference pairs; training then needs only off-policy "winning images." Rather than treating every reference sample as a negative example, SePPO applies an anchor-based criterion to judge whether each reference sample is likely a winning or a losing image, so the model learns from it selectively. In experiments, SePPO outperforms prior approaches on text-to-image benchmarks and also performs strongly on text-to-video benchmarks.
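To make the selective-learning idea concrete, here is a minimal sketch of a DPO-style preference loss in which the "losing" sample comes from a frozen earlier checkpoint and an anchor-based check decides whether that sample is pushed down or learned from. The function name, tensor shapes, and the specific anchor rule used here (the sign of the generated sample's implicit reward) are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a preference loss over (off-policy winner, checkpoint-generated sample)
# pairs. All inputs are per-sample log-likelihoods; shapes and the anchor rule are
# assumptions for illustration only.
import torch
import torch.nn.functional as F

def seppo_style_loss(logp_win_policy, logp_win_ref,
                     logp_gen_policy, logp_gen_ref, beta=0.1):
    # Implicit rewards, as in DPO: how far the current policy has moved from the
    # frozen reference checkpoint on each sample.
    r_win = beta * (logp_win_policy - logp_win_ref)
    r_gen = beta * (logp_gen_policy - logp_gen_ref)

    # Anchor-based criterion (assumed form): if the current policy already favors
    # the generated sample more than the reference checkpoint does (r_gen > 0),
    # treat it as a likely "winner" and learn from it; otherwise treat it as the
    # "loser" in the pair.
    sign = torch.where(r_gen > 0, torch.tensor(-1.0), torch.tensor(1.0))

    # Standard Bradley-Terry / DPO log-sigmoid objective, with the sign flip.
    return -F.logsigmoid(sign * (r_win - r_gen)).mean()

# Toy usage with random log-likelihoods for a batch of 4 prompts.
torch.manual_seed(0)
loss = seppo_style_loss(torch.randn(4), torch.randn(4),
                        torch.randn(4), torch.randn(4))
print(loss.item())
```

The sign flip is what makes the learning selective: samples the anchor check flags as likely winners are pulled toward the policy instead of being penalized, which is the behavior the paper attributes to its anchor-based criterion.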
Why it matters?
This research is important because it makes it easier and more efficient to train AI models to generate high-quality images and videos aligned with human preferences. By reducing the need for costly paired data collection and improving the learning process, SePPO could lead to advances in applications such as digital art, gaming, and content creation.
Abstract
Reinforcement learning from human feedback (RLHF) methods are emerging as a way to fine-tune diffusion models (DMs) for visual generation. However, commonly used on-policy strategies are limited by the generalization capability of the reward model, while off-policy approaches require large amounts of difficult-to-obtain paired human-annotated data, particularly in visual generation tasks. To address the limitations of both on- and off-policy RLHF, we propose a preference optimization method that aligns DMs with preferences without relying on reward models or paired human-annotated data. Specifically, we introduce a Semi-Policy Preference Optimization (SePPO) method. SePPO leverages previous checkpoints as reference models while using them to generate on-policy reference samples, which replace "losing images" in preference pairs. This approach allows us to optimize using only off-policy "winning images." Furthermore, we design a strategy for reference model selection that expands the exploration in the policy space. Notably, we do not simply treat reference samples as negative examples for learning. Instead, we design an anchor-based criterion to assess whether the reference samples are likely to be winning or losing images, allowing the model to selectively learn from the generated reference samples. This approach mitigates performance degradation caused by the uncertainty in reference sample quality. We validate SePPO across both text-to-image and text-to-video benchmarks. SePPO surpasses all previous approaches on the text-to-image benchmarks and also demonstrates outstanding performance on the text-to-video benchmarks. Code will be released at https://github.com/DwanZhang-AI/SePPO.
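The abstract also mentions a strategy for choosing which previous checkpoint serves as the reference model, intended to widen exploration in the policy space. Below is a minimal sketch, under assumed design choices, of keeping a small pool of frozen snapshots and drawing each round's reference model from it; the pool size and uniform sampling rule are illustrative stand-ins, not the paper's actual selection strategy.

```python
# Sketch of checkpoint bookkeeping for reference-model selection.
# The pooling and uniform-sampling choices here are assumptions for illustration.
import copy
import random

class ReferencePool:
    """Keeps frozen snapshots of earlier checkpoints to serve as reference models."""

    def __init__(self, max_size=5):
        self.checkpoints = []
        self.max_size = max_size

    def push(self, model):
        # Store a frozen copy of the current policy as a candidate reference.
        snapshot = copy.deepcopy(model)
        for p in snapshot.parameters():
            p.requires_grad_(False)
        self.checkpoints.append(snapshot)
        if len(self.checkpoints) > self.max_size:
            self.checkpoints.pop(0)

    def sample(self):
        # Draw one stored checkpoint to act as this round's reference model.
        return random.choice(self.checkpoints)
```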