Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data

Yucheng Shi, Quanzheng Li, Jin Sun, Xiang Li, Ninghao Liu

2025-02-21

Summary

This paper presents a method for improving how AI models understand and explain visual tasks, such as identifying objects in images, through a process called self-synthesized data generation. The approach helps the AI give clearer, better-grounded explanations for its decisions.

What's the problem?

Large multimodal models (AI that works with both images and text) are good at general tasks but struggle with fine details, like understanding specific features in an image or explaining why they made a certain decision. This makes it hard to trust their predictions, especially in specialized areas where accuracy and reasoning are crucial.

What's the solution?

The researchers created a framework that generates its own training data by synthesizing answers based on expert-defined features. These answers are checked for quality and then used to fine-tune the AI model in multiple rounds. This process improves the model's ability to focus on important details in images and connect them to logical explanations. Over time, the AI becomes better at both making accurate predictions and explaining them in a way that humans can understand.
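The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual code: `generate_fn`, `fine_tune_fn`, and the feature-overlap score in `filter_best` are illustrative stand-ins for the model's answer sampler, the fine-tuning step, and the paper's reward-model-free quality filter.

```python
def synthesize_answers(generate_fn, image, query, expert_features, n_samples=4):
    """Sample several candidate answers that reference expert-defined features."""
    return [generate_fn(image, query, expert_features) for _ in range(n_samples)]

def filter_best(candidates, expert_features):
    """Reward-model-free filtering (toy proxy): keep the candidate that
    mentions the most expert-defined visual features."""
    return max(candidates, key=lambda ans: sum(f in ans for f in expert_features))

def self_improvement_loop(generate_fn, fine_tune_fn, dataset, expert_features, rounds=3):
    """Alternate self-synthesis and fine-tuning for several rounds."""
    for _ in range(rounds):
        tuning_set = []
        for image, query in dataset:
            candidates = synthesize_answers(generate_fn, image, query, expert_features)
            tuning_set.append((image, query, filter_best(candidates, expert_features)))
        # Fine-tune on the filtered answers; the improved model is used next round.
        generate_fn = fine_tune_fn(generate_fn, tuning_set)
    return generate_fn
```

In practice the quality filter would verify that the cited features are actually present in the image, rather than just counting feature mentions as this toy scoring function does.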

Why it matters?

This matters because it makes AI systems more reliable and easier to trust, especially for tasks that require detailed reasoning, like medical image analysis or scientific research. By improving both accuracy and explainability, this approach helps bridge the gap between what AI can do and how well humans can understand and use its results in critical applications.

Abstract

Large multimodal models (LMMs) have shown impressive capabilities in a wide range of visual tasks. However, they often struggle with fine-grained visual reasoning, failing to identify domain-specific objectives and provide justifiable explanations for their predictions. To address this, we propose a novel visual rejection sampling framework to improve the cognition and explainability of LMMs using self-synthesized data. Specifically, visual fine-tuning requires images, queries, and target answers. Our approach begins by synthesizing interpretable answers that include human-verifiable visual features. These features are based on expert-defined concepts, carefully selected based on their alignment with the image content. After each round of fine-tuning, we apply a reward model-free filtering mechanism to select the highest-quality interpretable answers for the next round of tuning. This iterative process of data synthesis and fine-tuning progressively improves the model's ability to generate accurate and reasonable explanations. Experimental results demonstrate the effectiveness of our method in improving both the accuracy and explainability of specialized visual classification tasks.