Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs
Yuxuan Qiao, Haodong Duan, Xinyu Fang, Junming Yang, Lin Chen, Songyang Zhang, Jiaqi Wang, Dahua Lin, Kai Chen
2024-06-21

Summary
This paper introduces Prism, a new framework designed to improve how we evaluate vision-language models (VLMs) by separating their ability to perceive visual information from their ability to reason about that information.
What's the problem?
Vision-language models are advanced AI systems that can understand and answer questions about images. Doing so requires two distinct abilities: perception (seeing and describing the visual content) and reasoning (thinking through the question to produce an answer). Assessing these abilities separately is challenging because they are entangled within a single end-to-end model, which makes it hard to tell whether a wrong answer stems from poor perception or weak reasoning, and therefore which part needs improvement.
What's the solution?
The researchers developed Prism, a framework with two main stages: a perception stage, in which a VLM extracts visual information from an image and describes it in text, and a reasoning stage, in which a large language model (LLM) answers the question based on that textual description. Because the stages are decoupled, each model's strengths and weaknesses can be analyzed separately, and different VLMs and LLMs can be compared systematically. The results showed that Prism, built from a small perception-focused VLM paired with a capable LLM, can match the performance of much larger end-to-end VLMs at a lower cost.
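To make the decoupling concrete, here is a minimal sketch of a Prism-style two-stage pipeline. It is not the framework's actual API: `describe_image` is a hypothetical placeholder for whatever perception VLM you plug in (the paper uses models such as a 2B LLaVA), and the reasoning stage uses an OpenAI chat model as one possible LLM backend; the prompt wording is illustrative only.

```python
# Minimal sketch of a Prism-style two-stage pipeline (assumptions noted above).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_image(image_path: str) -> str:
    """Perception stage (placeholder): a VLM turns the image into a
    detailed textual description. Plug in any captioning-capable VLM here."""
    raise NotImplementedError("swap in a VLM, e.g. a LLaVA checkpoint")


def answer_from_description(description: str, question: str,
                            model: str = "gpt-3.5-turbo") -> str:
    """Reasoning stage: a text-only LLM answers the question using only
    the description produced by the perception stage."""
    prompt = (
        "You are given a description of an image.\n"
        f"Description: {description}\n\n"
        f"Question: {question}\n"
        "Answer based only on the description."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def prism_pipeline(image_path: str, question: str) -> str:
    # Decoupling: hold one stage fixed and vary the other to attribute
    # performance to perception or reasoning.
    description = describe_image(image_path)                # perception
    return answer_from_description(description, question)   # reasoning
```

Because the intermediate output is plain text, the perception stage can be held fixed while different LLMs are swapped in (or vice versa), which is what lets the framework assess each capability on its own.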
Why it matters?
This research is significant because it provides a structured way to evaluate and improve vision-language models, making them more effective for tasks like answering questions about images or understanding visual content. By enhancing these models, we can improve applications in areas such as education, healthcare, and entertainment, where understanding visual information is crucial.
Abstract
Vision Language Models (VLMs) demonstrate remarkable proficiency in addressing a wide array of visual questions, which requires strong perception and reasoning faculties. Assessing these two competencies independently is crucial for model refinement, despite the inherent difficulty due to the intertwined nature of seeing and reasoning in existing VLMs. To tackle this issue, we present Prism, an innovative framework designed to disentangle the perception and reasoning processes involved in visual question solving. Prism comprises two distinct stages: a perception stage that utilizes a VLM to extract and articulate visual information in textual form, and a reasoning stage that formulates responses based on the extracted visual information using a Large Language Model (LLM). This modular design enables the systematic comparison and assessment of both proprietary and open-source VLMs for their perception and reasoning strengths. Our analytical framework provides several valuable insights, underscoring Prism's potential as a cost-effective solution for vision-language tasks. By combining a streamlined VLM focused on perception with a powerful LLM tailored for reasoning, Prism achieves superior results in general vision-language tasks while substantially cutting down on training and operational expenses. Quantitative evaluations show that Prism, when configured with a vanilla 2B LLaVA and freely accessible GPT-3.5, delivers performance on par with VLMs 10 times larger on the rigorous multimodal benchmark MMStar. The project is released at: https://github.com/SparksJoe/Prism.