InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model
Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Ziyu Liu, Shengyuan Ding, Shenxi Wu, Yubo Ma, Haodong Duan, Wenwei Zhang, Kai Chen, Dahua Lin, Jiaqi Wang
2025-01-22

Summary
This paper introduces InternLM-XComposer2.5-Reward (IXC-2.5-Reward), a new AI system that helps improve the performance of Large Vision Language Models (LVLMs). It's like creating a smart teacher that can guide these AI models to give better answers when working with text, images, and videos.
What's the problem?
Large Vision Language Models are really good at understanding and describing images and videos, but they sometimes make mistakes or give incorrect information. There aren't many publicly available tools to help improve these models, especially ones that can work with both text and visual information. It's like having a student who's generally good but sometimes gets things wrong, and not having enough tutors to help them improve.
What's the solution?
The researchers created IXC-2.5-Reward, which acts like a smart tutor for LVLMs. They trained this system using a large collection of high-quality examples covering various topics like following instructions, general understanding, reading documents, math problems, and video comprehension. This 'tutor' can now guide LVLMs to give better answers by helping them understand what humans prefer. The researchers also showed how IXC-2.5-Reward can be used in different ways: to train LVLMs to follow instructions better, to pick the best answer from a set of options, and to clean up messy training data.
Why it matters?
This matters because it could make AI systems that work with images and videos much more reliable and useful. Imagine having an AI assistant that can not only understand what you're asking about a picture or video but also give you more accurate and helpful answers. This could be really useful in fields like education, where AI could help explain complex visual concepts, or in content creation, where AI could assist in generating better descriptions or captions for images and videos. By making these AI models more accurate and aligned with what humans want, we're taking a big step towards creating AI that can be more trustworthy and helpful in our daily lives.
Abstract
Despite the promising performance of Large Vision Language Models (LVLMs) in visual understanding, they occasionally generate incorrect outputs. While reward models (RMs) with reinforcement learning or test-time scaling offer the potential for improving generation quality, a critical gap remains: publicly available multi-modal RMs for LVLMs are scarce, and the implementation details of proprietary models are often unclear. We bridge this gap with InternLM-XComposer2.5-Reward (IXC-2.5-Reward), a simple yet effective multi-modal reward model that aligns LVLMs with human preferences. To ensure the robustness and versatility of IXC-2.5-Reward, we set up a high-quality multi-modal preference corpus spanning text, image, and video inputs across diverse domains, such as instruction following, general understanding, text-rich documents, mathematical reasoning, and video understanding. IXC-2.5-Reward achieves excellent results on the latest multi-modal reward model benchmark and shows competitive performance on text-only reward model benchmarks. We further demonstrate three key applications of IXC-2.5-Reward: (1) Providing a supervisory signal for RL training: integrating IXC-2.5-Reward with Proximal Policy Optimization (PPO) yields IXC-2.5-Chat, which shows consistent improvements in instruction following and multi-modal open-ended dialogue; (2) Selecting the best response from candidate responses for test-time scaling; and (3) Filtering outlier or noisy samples from existing image and video instruction-tuning training data. To ensure reproducibility and facilitate further research, we have open-sourced all model weights and training recipes at https://github.com/InternLM/InternLM-XComposer.
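To make application (2) concrete, the sketch below shows the general best-of-N (test-time scaling) pattern: sample several candidate responses and keep the one the reward model scores highest. This is a minimal, hypothetical illustration, not the project's actual API; `generate_candidates` and `reward_score` are assumed placeholder callables standing in for any LVLM sampler and any multi-modal reward model that returns a scalar preference score (such as IXC-2.5-Reward).

```python
# Minimal sketch of best-of-N response selection with a reward model.
# `generate_candidates` and `reward_score` are hypothetical placeholders,
# not the IXC-2.5 API: any LVLM sampler and any scalar-output reward
# model with matching signatures would fit this pattern.

from typing import Callable, List, Tuple

def best_of_n(
    prompt: str,
    image_path: str,
    generate_candidates: Callable[[str, str, int], List[str]],
    reward_score: Callable[[str, str, str], float],
    n: int = 8,
) -> Tuple[str, float]:
    """Sample n candidate responses and return the one the reward model prefers."""
    candidates = generate_candidates(prompt, image_path, n)
    # Score every candidate with the reward model (the "judge").
    scored = [(resp, reward_score(prompt, image_path, resp)) for resp in candidates]
    # Keep the highest-scoring response as the final answer.
    best_response, best_score = max(scored, key=lambda pair: pair[1])
    return best_response, best_score
```

The same scoring step can plausibly support application (3) as well: scoring existing instruction-tuning samples with the reward model and discarding those that fall below a chosen threshold.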