Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning

Di Zhang, Jingdi Lei, Junxian Li, Xunzhi Wang, Yujie Liu, Zonglin Yang, Jiatong Li, Weida Wang, Suorong Yang, Jianbo Wu, Peng Ye, Wanli Ouyang, Dongzhan Zhou

2024-11-29

Summary

This paper introduces Critic-V, a new framework that helps improve the accuracy of vision-language models (VLMs) in reasoning tasks by using a system of critics to catch errors in their responses.

What's the problem?

Vision-language models have made great progress in understanding both images and text, but they often make mistakes or provide irrelevant answers. These errors can occur because the models sometimes misunderstand visual information or follow unclear reasoning paths, which limits their effectiveness in real-world applications.

What's the solution?

The authors propose Critic-V, which separates the reasoning process into two parts: a Reasoner that generates answers based on visual and textual inputs, and a Critic that reviews those answers and provides natural-language feedback. This feedback helps the Reasoner iteratively refine its responses, making it better at handling complex questions. The Critic itself is trained with Direct Preference Optimization (DPO) on a dataset of critiques ranked by a rule-based reward, which sharpens its ability to evaluate responses. The results show that this framework significantly improves the performance of VLMs on various reasoning tasks compared to existing methods.
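The Reasoner-Critic interaction described above can be sketched as a simple feedback loop. This is a minimal illustration, not the paper's implementation: the `reasoner` and `critic` functions below are toy stand-ins for what are actually fine-tuned vision-language models, and the stopping signal (`"ACCEPT"`) is an assumed convention.

```python
# Toy sketch of the Critic-V loop: the Reasoner answers, the Critic
# critiques in natural language, and the critique is fed back into the
# Reasoner's prompt until the Critic is satisfied or rounds run out.

def reasoner(question: str, critiques: list[str]) -> str:
    """Stand-in Reasoner: revises its answer once it has received a critique."""
    if not critiques:
        return "draft answer"      # first attempt, may contain errors
    return "revised answer"        # incorporates the Critic's feedback

def critic(question: str, answer: str) -> str:
    """Stand-in Critic: returns a natural-language critique, or ACCEPT."""
    if answer == "draft answer":
        return "The reasoning misreads the image; re-check step 2."
    return "ACCEPT"

def critic_v_loop(question: str, max_rounds: int = 3) -> str:
    critiques: list[str] = []
    answer = reasoner(question, critiques)
    for _ in range(max_rounds):
        feedback = critic(question, answer)
        if feedback == "ACCEPT":   # Critic has no further objections
            break
        critiques.append(feedback) # the critique evolves the text prompt
        answer = reasoner(question, critiques)
    return answer

print(critic_v_loop("What does the chart show?"))  # → revised answer
```

The key design point is that the Critic returns text rather than a scalar score, so the Reasoner receives actionable feedback ("re-check step 2") instead of just "wrong".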

Why it matters?

This research is important because it enhances the reliability and accuracy of AI systems that need to understand and reason about visual information. By improving how VLMs handle complex tasks, Critic-V can be applied in areas like autonomous driving, robotics, and any application that relies on accurate image understanding combined with language processing.

Abstract

Vision-language models (VLMs) have shown remarkable advancements in multimodal reasoning tasks. However, they still often generate inaccurate or irrelevant responses due to issues like hallucinated image understandings or unrefined reasoning paths. To address these challenges, we introduce Critic-V, a novel framework inspired by the Actor-Critic paradigm to boost the reasoning capability of VLMs. This framework decouples the reasoning process and critic process by integrating two independent components: the Reasoner, which generates reasoning paths based on visual and textual inputs, and the Critic, which provides constructive critique to refine these paths. In this approach, the Reasoner generates reasoning responses according to text prompts, which can evolve iteratively as a policy based on feedback from the Critic. This interaction process was theoretically driven by a reinforcement learning framework where the Critic offers natural language critiques instead of scalar rewards, enabling more nuanced feedback to boost the Reasoner's capability on complex reasoning tasks. The Critic model is trained using Direct Preference Optimization (DPO), leveraging a preference dataset of critiques ranked by Rule-based Reward (RBR) to enhance its critic capabilities. Evaluation results show that the Critic-V framework significantly outperforms existing methods, including GPT-4V, on 5 out of 8 benchmarks, especially regarding reasoning accuracy and efficiency. Combining a dynamic text-based policy for the Reasoner and constructive feedback from the preference-optimized Critic enables a more reliable and context-sensitive multimodal reasoning process. Our approach provides a promising solution to enhance the reliability of VLMs, improving their performance in real-world reasoning-heavy multimodal applications such as autonomous driving and embodied intelligence.
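For readers unfamiliar with DPO, the objective mentioned in the abstract can be written out on a single (chosen, rejected) critique pair. This is the standard DPO loss, not code from the paper; the log-probability values and `beta` below are illustrative assumptions, and the rule-based reward only enters upstream, by deciding which critique in each pair counts as "chosen".

```python
import math

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """Standard DPO objective on one preference pair of critiques.

    logp_w / logp_l: policy log-probs of the chosen / rejected critique.
    ref_logp_w / ref_logp_l: the same under the frozen reference model.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): small when the policy prefers the chosen critique
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy favors the RBR-preferred critique relative to the reference model:
print(dpo_loss(logp_w=-3.0, logp_l=-5.0, ref_logp_w=-4.0, ref_logp_l=-4.0))
```

As the policy assigns relatively more probability to the chosen critique, the margin grows and the loss falls below `ln 2` (the value when the policy matches the reference).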