Video Generation Models Are Good Latent Reward Models
Xiaoyue Mi, Wenqing Yu, Jiesong Lian, Shibo Jie, Ruizhe Zhong, Zijun Liu, Guozhen Zhang, Zixiang Zhou, Zhiyong Xu, Yuan Zhou, Qinglin Lu, Fan Tang
2025-11-28
Summary
This paper explores a more efficient way to train AI to create videos that people actually like, building on a technique called 'reward feedback learning'.
What's the problem?
Currently, when video generators are fine-tuned with human preference feedback, training is slow and memory-hungry: the AI has to fully decode its internal 'rough draft' into an actual video before a reward model can score it. Because that feedback only arrives at the very end of the generation process, the AI mostly learns to polish surface-level visual details rather than the video's overall motion and structure.
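To make the bottleneck concrete, here is a minimal PyTorch-style sketch of that conventional setup. The `denoiser`, `vae`, and `reward_model` names are hypothetical stand-ins for whatever a given pipeline actually uses; none of them come from the paper.

```python
import torch

def rgb_refl_step(denoiser, vae, reward_model, z_T, timesteps):
    """One hypothetical RGB-space ReFL update: decode first, score in pixels."""
    z = z_T
    for t in timesteps[:-1]:
        with torch.no_grad():               # early steps receive no reward signal
            z = denoiser(z, t)
    z = denoiser(z, timesteps[-1])          # gradients reach only this late step
    video = vae.decode(z)                   # expensive pixel-space decode
    return -reward_model(video).mean()      # maximize the predicted preference
```

The decoded `video` tensor is what dominates memory here, and the reward can only influence the final denoising step.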
What's the solution?
The researchers realized that the AI models already used to *make* videos are also good at understanding and evaluating videos before they are fully formed: they can work directly with the 'rough draft' (noisy latent) versions. So they developed a new method, called Process Reward Feedback Learning (PRFL), that gives the AI feedback on these rough drafts *throughout* the entire video creation process instead of only at the end. This makes training faster, uses less memory, and helps the AI produce videos with better overall motion and structure.
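Under the same hypothetical modules, a sketch of the process-reward idea might look like the following, with an assumed `latent_reward` model that scores noisy latents directly. This is an illustration of the general mechanism, not the paper's exact loss weighting or step selection.

```python
import torch

def prfl_step(denoiser, latent_reward, z_T, timesteps):
    """One hypothetical PRFL update: score noisy latents at every step."""
    z, losses = z_T, []
    for t in timesteps:
        z = denoiser(z, t)                   # keep the graph: full-chain backprop
        losses.append(-latent_reward(z, t))  # reward the intermediate latent
    return torch.stack(losses).mean()        # no VAE decode anywhere
```

Because the reward is evaluated on latents at intermediate steps, the supervision can shape early denoising decisions, which is where motion and structure are largely determined.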
Why it matters?
This new approach is important because it makes it much more practical to train AI to generate high-quality videos that align with what people want. By reducing the computational cost and improving the learning process, it opens the door to creating more sophisticated and personalized video content.
Abstract
Reward feedback learning (ReFL) has proven effective for aligning image generation with human preferences. However, its extension to video generation faces significant challenges. Existing video reward models rely on vision-language models designed for pixel-space inputs, confining ReFL optimization to near-complete denoising steps after computationally expensive VAE decoding. This pixel-space approach incurs substantial memory overhead and increased training time, and its late-stage optimization lacks early-stage supervision, refining only visual quality rather than fundamental motion dynamics and structural coherence. In this work, we show that pre-trained video generation models are naturally suited for reward modeling in the noisy latent space, as they are explicitly designed to process noisy latent representations at arbitrary timesteps and inherently preserve temporal information through their sequential modeling capabilities. Accordingly, we propose Process Reward Feedback Learning (PRFL), a framework that conducts preference optimization entirely in latent space, enabling efficient gradient backpropagation throughout the full denoising chain without VAE decoding. Extensive experiments demonstrate that PRFL significantly improves alignment with human preferences, while achieving substantial reductions in memory consumption and training time compared to RGB ReFL.
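One way to read the abstract's central claim in code: since a pre-trained video generation backbone already accepts noisy latents at arbitrary timesteps, a small scalar head on its pooled features can act as a latent-space reward model. The sketch below is illustrative only; the class name, the feature shapes, and the pooling scheme are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LatentRewardModel(nn.Module):
    """Hypothetical latent reward model built on a pre-trained video denoiser."""

    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone             # pre-trained video diffusion backbone
        self.head = nn.Linear(feat_dim, 1)   # scalar preference score

    def forward(self, z_t, t):
        feats = self.backbone(z_t, t)        # assumed shape: (batch, feat_dim, ...)
        pooled = feats.flatten(2).mean(-1)   # pool over space-time positions
        return self.head(pooled).squeeze(-1) # one score per video latent
```

In practice such a head could be fit with a pairwise (Bradley-Terry style) preference loss on latents of preferred versus rejected videos, though the paper's actual reward-model training objective may differ.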