Unified Reward Model for Multimodal Understanding and Generation

Yibin Wang, Yuhang Zang, Hao Li, Cheng Jin, Jiaqi Wang

2025-03-09

Summary

This paper introduces UnifiedReward, a new AI model that evaluates how well other AI systems understand and generate both images and videos.

What's the problem?

Current AI models that judge the quality of AI-generated content are usually designed for a single task, such as evaluating only images or only videos. This limits how broadly they can be applied across different types of visual tasks.

What's the solution?

The researchers created UnifiedReward, which can evaluate both image and video tasks, covering both understanding and generation. They trained it on a large dataset of human preferences, then used it to create high-quality data for training other AI models. UnifiedReward can compare two outputs (pairwise ranking) or assign an absolute score to a single output (pointwise scoring), and both signals are used to improve other AI systems.
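To make the two evaluation modes concrete, here is a minimal Python sketch. The `score(prompt, output)` interface and the toy scorer are illustrative assumptions, not UnifiedReward's actual API.

```python
# Sketch of the two evaluation modes a unified reward model exposes.
# `score(prompt, output)` is a hypothetical interface, not the real API.

def pointwise_score(score, prompt, output):
    """Pointwise scoring: assign an absolute quality score to one output."""
    return score(prompt, output)

def pairwise_rank(score, prompt, output_a, output_b):
    """Pairwise ranking: decide which of two candidate outputs is preferred."""
    return "A" if score(prompt, output_a) >= score(prompt, output_b) else "B"

# Toy stand-in scorer for demonstration: longer answers score higher.
toy_score = lambda prompt, out: len(out)

print(pointwise_score(toy_score, "describe the image", "a red fox"))
print(pairwise_rank(toy_score, "describe the image", "a fox", "a red fox in snow"))
```

In practice the two modes are complementary: pairwise ranking gives a robust relative signal for preference data, while pointwise scoring allows filtering out pairs where even the preferred output is low quality.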

Why it matters?

This matters because UnifiedReward can help improve many different types of AI systems that work with images and videos. By learning from multiple tasks at once, it can make each individual task better. This could lead to AI that is better at understanding and creating visual content across many applications, potentially improving things like image search, video analysis, and content creation tools.

Abstract

Recent advances in human preference alignment have significantly enhanced multimodal generation and understanding. A key approach is training reward models to guide preference optimization. However, existing models are often task-specific, limiting their adaptability across diverse visual applications. We also argue that jointly learning to assess multiple tasks may foster a synergistic effect, where improved image understanding enhances image generation assessment, and refined image evaluation benefits video assessment through better frame analysis. To this end, this paper proposes UnifiedReward, the first unified reward model for multimodal understanding and generation assessment, enabling both pairwise ranking and pointwise scoring, which can be employed for vision model preference alignment. Specifically, (1) we first develop UnifiedReward on our constructed large-scale human preference dataset, including both image and video generation/understanding tasks. (2) Then, it is utilized to automatically construct high-quality preference pair data based on the vision models, finely filtering their outputs through pair ranking and point sifting. (3) Finally, these data are used for their preference alignment through Direct Preference Optimization (DPO). Experimental results demonstrate that joint learning to assess diverse visual tasks can lead to substantial mutual benefits and we apply our pipeline to both image and video understanding/generation tasks, significantly improving the performance in each domain.
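Step (2) of the pipeline can be sketched as follows: rank a model's candidate outputs with the reward model, then keep only (chosen, rejected) pairs whose score gap is large enough to form clean DPO training data. The reward function, threshold, and record format below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of preference-pair construction for DPO:
# pair ranking (sort by reward) plus point sifting (drop low-gap pairs).

def build_dpo_pairs(prompt, candidates, reward, min_gap=0.2):
    """Return (chosen, rejected) records whose reward gap is at least min_gap."""
    scored = sorted(((reward(prompt, c), c) for c in candidates), reverse=True)
    pairs = []
    for hi_score, chosen in scored:
        for lo_score, rejected in scored:
            if hi_score - lo_score >= min_gap:  # point sifting threshold
                pairs.append({"prompt": prompt,
                              "chosen": chosen,
                              "rejected": rejected})
    return pairs

# Toy reward for demonstration: fraction of prompt words the candidate mentions.
def toy_reward(prompt, text):
    words = prompt.lower().split()
    return sum(w in text.lower() for w in words) / len(words)

pairs = build_dpo_pairs("red fox", ["a red fox", "a dog", "a red car"], toy_reward)
for p in pairs:
    print(p["chosen"], ">", p["rejected"])
```

The resulting `{"prompt", "chosen", "rejected"}` records match the shape of preference data commonly fed to DPO training loops; the gap threshold controls the precision/volume trade-off of the filtered dataset.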