Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization
Weiyun Wang, Zhe Chen, Wenhai Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Jinguo Zhu, Xizhou Zhu, Lewei Lu, Yu Qiao, Jifeng Dai
2024-11-22

Summary
This paper discusses a new method called Mixed Preference Optimization (MPO) that improves the reasoning abilities of multimodal large language models (MLLMs), which can understand both text and images.
What's the problem?
Current MLLMs often struggle with reasoning tasks, especially when they need to connect information across modalities such as text and images. They also suffer from a distribution shift: the data they imitate during supervised training differs from the responses they must generate at inference time, which degrades performance on Chain-of-Thought (CoT) reasoning, a capability that is crucial for complex problem-solving.
What's the solution?
To address these challenges, the authors developed MPO, which improves how MLLMs learn from preference data spanning both text and images. They designed an automated pipeline to build MMPR, a large-scale multimodal reasoning preference dataset, and they combine several types of training losses into a single objective to strengthen reasoning. With this approach, their model, InternVL2-8B-MPO, reaches an accuracy of 67.0 on MathVista, 8.7 points above the InternVL2-8B baseline and comparable to the much larger InternVL2-76B.
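The summary does not spell out which losses are mixed, so the sketch below is only an illustration of the general idea of combining several training signals on preference pairs. The function name, the specific terms (a DPO-style preference term, an absolute-quality term, and an SFT-style generation term), and the weights are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mixed_preference_loss(policy_chosen_logps, policy_rejected_logps,
                          ref_chosen_logps, ref_rejected_logps,
                          beta=0.1, w_pref=0.8, w_quality=0.2, w_gen=1.0):
    """Toy mixed objective over (chosen, rejected) response pairs.

    All *_logps arguments are 1-D tensors of summed log-probabilities, one entry
    per training pair, computed by the trainable policy and a frozen reference model.
    The term names and weights are illustrative assumptions.
    """
    # Implicit rewards: scaled log-ratio of policy to reference model.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)

    # Preference term (DPO-style): prefer the chosen response over the rejected one.
    pref_loss = -F.logsigmoid(chosen_reward - rejected_reward).mean()

    # Quality term: judge each response on its own against a batch-level reward
    # shift, so the model also learns an absolute notion of response quality.
    delta = torch.cat([chosen_reward, rejected_reward]).mean().detach()
    quality_loss = (-F.logsigmoid(chosen_reward - delta)
                    - F.logsigmoid(-(rejected_reward - delta))).mean() / 2

    # Generation term (SFT-style): keep the likelihood of chosen responses high.
    gen_loss = -policy_chosen_logps.mean()

    return w_pref * pref_loss + w_quality * quality_loss + w_gen * gen_loss
```

In a real training loop, the four log-probability tensors would come from scoring each (image, question, response) pair with the trainable policy and a frozen copy of the starting model.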
Why it matters?
This research is important because it helps make AI systems better at understanding and reasoning about complex information from multiple sources. By improving how MLLMs learn from both text and images, this work can lead to more advanced AI applications in fields like education, healthcare, and any area where interpreting mixed data is essential.
Abstract
Existing open-source multimodal large language models (MLLMs) generally follow a training process involving pre-training and supervised fine-tuning. However, these models suffer from distribution shifts, which limit their multimodal reasoning, particularly their Chain-of-Thought (CoT) performance. To address this, we introduce a preference optimization (PO) process to enhance the multimodal reasoning capabilities of MLLMs. Specifically, (1) on the data side, we design an automated preference data construction pipeline to create MMPR, a high-quality, large-scale multimodal reasoning preference dataset, and (2) on the model side, we explore integrating PO with MLLMs, developing a simple yet effective method, termed Mixed Preference Optimization (MPO), which boosts multimodal CoT performance. Our approach demonstrates improved performance across multiple benchmarks, particularly in multimodal reasoning tasks. Notably, our model, InternVL2-8B-MPO, achieves an accuracy of 67.0 on MathVista, outperforming InternVL2-8B by 8.7 points and achieving performance comparable to the 10x larger InternVL2-76B. We hope this study could inspire further advancements in MLLMs. Code, data, and model shall be publicly released.
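For context, the preference term in pipelines like this is commonly instantiated with the standard DPO objective; the abstract does not confirm that MPO uses exactly this form, so the formula below should be read as the usual starting point rather than the paper's definition.

```latex
% Standard DPO preference loss on a chosen/rejected response pair (y_c, y_r)
% for a multimodal prompt x; \pi_\theta is the trainable policy, \pi_{\mathrm{ref}}
% a frozen reference model, and \beta controls the implicit KL constraint.
\mathcal{L}_{\mathrm{pref}} =
  -\log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_c \mid x)}{\pi_{\mathrm{ref}}(y_c \mid x)}
    - \beta \log \frac{\pi_\theta(y_r \mid x)}{\pi_{\mathrm{ref}}(y_r \mid x)}
  \right)
```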