
DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception

Xiaotong Li, Fan Zhang, Haiwen Diao, Yueze Wang, Xinlong Wang, Ling-Yu Duan

2024-07-13

Summary

This paper presents DenseFusion-1M, a new dataset designed to improve how multimodal large language models (MLLMs) understand and describe images. It focuses on creating detailed image-text pairs that help these models better grasp complex visual information.

What's the problem?

Current MLLMs struggle with understanding various visual elements in images, such as multiple objects, text, and their spatial relationships. A major issue is the lack of high-quality datasets that provide detailed descriptions of images. Existing captioning engines often fail to produce complete and accurate annotations, which limits the models' ability to learn effectively.

What's the solution?

To address this problem, the authors propose a method called Perceptual Fusion. It combines the outputs of several 'vision experts' (such as models for recognizing objects and reading text in images) as explicit image priors, and uses an efficient MLLM as the central captioning engine that turns those priors into detailed descriptions. The authors selected 1 million representative images from the uncurated LAION dataset and ran this engine over them, producing the DenseFusion-1M dataset of dense image descriptions. Training on this dataset significantly enhances the ability of MLLMs to understand visual content.
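The summary describes the pipeline only at a high level; the sketch below illustrates what such a fusion step might look like. The expert functions, prompt format, and model call are illustrative assumptions for this article, not the authors' actual implementation.

```python
# Minimal sketch of a Perceptual-Fusion-style captioning pipeline.
# The expert outputs, prompt format, and model interface below are
# placeholders for illustration, not the authors' implementation.

from dataclasses import dataclass


@dataclass
class ExpertOutputs:
    objects: list[str]   # e.g. from an object-recognition expert
    ocr_text: list[str]  # e.g. from a text-reading (OCR) expert
    regions: list[str]   # e.g. region-level hints about spatial layout


def run_vision_experts(image_path: str) -> ExpertOutputs:
    """Placeholder: each expert would run its own model on the image."""
    return ExpertOutputs(
        objects=["a dog", "a red frisbee"],
        ocr_text=["PARK RULES"],
        regions=["dog mid-air catching a frisbee near a sign"],
    )


def build_fusion_prompt(priors: ExpertOutputs) -> str:
    """Fold the expert outputs into the caption prompt as explicit image priors."""
    return (
        "Describe the image in detail, covering every visual element.\n"
        f"Detected objects: {', '.join(priors.objects)}\n"
        f"Text in image: {', '.join(priors.ocr_text)}\n"
        f"Region hints: {'; '.join(priors.regions)}\n"
    )


def dense_caption(image_path: str, caption_model) -> str:
    """Generate a dense description, using an efficient MLLM as the pivot."""
    priors = run_vision_experts(image_path)
    prompt = build_fusion_prompt(priors)
    return caption_model(image_path, prompt)  # hypothetical MLLM call


if __name__ == "__main__":
    # Stand-in model so the sketch runs end to end.
    fake_mllm = lambda img, prompt: f"[dense caption conditioned on]\n{prompt}"
    print(dense_caption("example.jpg", fake_mllm))
```

The key design point is that the MLLM never has to discover objects, text, or layout on its own: the experts supply that information explicitly, so even an efficient model can produce complete, accurate descriptions at low cost.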

Why it matters?

This research is important because it provides a much-needed resource for training AI models that can accurately interpret and generate descriptions of complex visual information. By improving the quality of image-text datasets, DenseFusion-1M can help advance the capabilities of AI in various applications, such as image recognition, automated content creation, and better human-computer interaction.

Abstract

Existing Multimodal Large Language Models (MLLMs) increasingly emphasize complex understanding of various visual elements, including multiple objects, text information, and spatial relations. Their development toward comprehensive visual perception hinges on the availability of high-quality image-text datasets that offer diverse visual elements and thorough image descriptions. However, the scarcity of such hyper-detailed datasets currently hinders progress within the MLLM community. The bottleneck stems from the limited perceptual capabilities of current caption engines, which fall short of providing complete and accurate annotations. To facilitate cutting-edge research on comprehensive visual perception for MLLMs, we propose Perceptual Fusion, a low-budget but highly effective caption engine for complete and accurate image descriptions. Specifically, Perceptual Fusion integrates diverse perception experts as image priors to provide explicit information on visual elements and adopts an efficient MLLM as a central pivot to mimic advanced MLLMs' perception abilities. We carefully select 1M highly representative images from the uncurated LAION dataset and generate dense descriptions with our engine; we dub the resulting dataset DenseFusion-1M. Extensive experiments validate that our engine outperforms its counterparts, and the dataset significantly improves the perception and cognition abilities of existing MLLMs across diverse vision-language benchmarks, especially with high-resolution images as inputs. The dataset and code are publicly available at https://github.com/baaivision/DenseFusion.