MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning

Xiangyu Zhao, Xiangtai Li, Haodong Duan, Haian Huang, Yining Li, Kai Chen, Hua Yang

2024-06-26

Summary

This paper introduces MG-LLaVA, a multi-modal large language model (MLLM) designed to improve how models understand and process images. It enhances visual capabilities by working with several levels of image detail at once, from the whole image down to individual objects.

What's the problem?

Most existing MLLMs are limited to processing images at low resolution, which restricts their performance on tasks that require detailed visual information. This is a significant issue because many applications, such as recognizing small objects or understanding complex scenes, depend on fine-grained visual detail to work effectively.

What's the solution?

MG-LLaVA addresses this problem by incorporating a multi-granularity vision flow that combines low-resolution, high-resolution, and object-level features. The model adds a high-resolution visual encoder to capture fine details in images and merges them with the base visual features through a Conv-Gate fusion network. Additionally, it uses object-level features derived from bounding boxes found by offline detectors to improve its ability to recognize specific objects. Trained through instruction tuning on publicly available multimodal data only, MG-LLaVA shows strong perception skills across a range of language model sizes.
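To make the fusion step more concrete, here is a minimal, hypothetical sketch of what a Conv-Gate style fusion of low-resolution and high-resolution features could look like in PyTorch. The layer layout, channel widths, and names (ConvGateFusion, low_proj, high_proj) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGateFusion(nn.Module):
    """Minimal sketch of a gated convolutional fusion of two feature maps.
    Layer names and shapes are illustrative assumptions, not the paper's
    exact design."""

    def __init__(self, low_dim: int, high_dim: int, out_dim: int):
        super().__init__()
        # Project both streams to a common channel width.
        self.low_proj = nn.Conv2d(low_dim, out_dim, kernel_size=1)
        self.high_proj = nn.Conv2d(high_dim, out_dim, kernel_size=1)
        # A small conv produces a per-location gate from the concatenated streams.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * out_dim, out_dim, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        # low_feat:  (B, low_dim,  h, w) from the low-resolution encoder
        # high_feat: (B, high_dim, H, W) from the high-resolution encoder
        low = self.low_proj(low_feat)
        # Resize the high-resolution map to the base grid before fusing.
        high = F.interpolate(self.high_proj(high_feat), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        g = self.gate(torch.cat([low, high], dim=1))
        # Gated blend: the gate decides how much fine detail to inject.
        return low + g * high
```

In this sketch, the learned gate controls where fine high-resolution detail is injected into the base features, rather than simply adding the two streams everywhere.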

Why it matters?

This research is important because it enhances the capabilities of AI models in visual tasks, allowing them to better understand and interpret complex images. By improving how these models process visual information, MG-LLaVA can lead to advancements in various fields such as computer vision, robotics, and artificial intelligence applications that rely on accurate image analysis.

Abstract

Multi-modal large language models (MLLMs) have made significant strides in various visual understanding tasks. However, the majority of these models are constrained to process low-resolution images, which limits their effectiveness in perception tasks that necessitate detailed visual information. In our study, we present MG-LLaVA, an innovative MLLM that enhances the model's visual processing capabilities by incorporating a multi-granularity vision flow, which includes low-resolution, high-resolution, and object-centric features. We propose the integration of an additional high-resolution visual encoder to capture fine-grained details, which are then fused with base visual features through a Conv-Gate fusion network. To further refine the model's object recognition abilities, we incorporate object-level features derived from bounding boxes identified by offline detectors. Being trained solely on publicly available multimodal data through instruction tuning, MG-LLaVA demonstrates exceptional perception skills. We instantiate MG-LLaVA with a wide variety of language encoders, ranging from 3.8B to 34B, to evaluate the model's performance comprehensively. Extensive evaluations across multiple benchmarks demonstrate that MG-LLaVA outperforms existing MLLMs of comparable parameter sizes, showcasing its remarkable efficacy. The code will be available at https://github.com/PhoenixZ810/MG-LLaVA.
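As a companion to the fusion sketch above, the following hypothetical snippet shows one way the object-level branch could be realized: bounding boxes from an offline detector are pooled into region features with RoIAlign and projected into the language model's embedding space as one token per object. The class, pooling size, and projection width are assumptions made for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class ObjectFeatureExtractor(nn.Module):
    """Sketch of turning offline-detector boxes into object-level tokens.
    Box format, pooling size, and projection width are illustrative
    assumptions."""

    def __init__(self, feat_dim: int, llm_dim: int, pool_size: int = 7):
        super().__init__()
        self.pool_size = pool_size
        self.proj = nn.Linear(feat_dim, llm_dim)

    def forward(self, feat_map: torch.Tensor, boxes: list[torch.Tensor],
                spatial_scale: float) -> torch.Tensor:
        # feat_map: (B, C, H, W) visual features; boxes: per-image (N_i, 4)
        # tensors of (x1, y1, x2, y2) in input-image coordinates.
        pooled = roi_align(feat_map, boxes, output_size=self.pool_size,
                           spatial_scale=spatial_scale, aligned=True)
        # Average each pooled region into a single vector, then map it into
        # the language model's embedding space as one token per object.
        obj_vecs = pooled.mean(dim=(-2, -1))   # (sum N_i, C)
        return self.proj(obj_vecs)             # (sum N_i, llm_dim)
```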