
BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions

Anas Awadalla, Le Xue, Manli Shu, An Yan, Jun Wang, Senthil Purushwalkam, Sheng Shen, Hannah Lee, Oscar Lo, Jae Sung Park, Etash Guha, Silvio Savarese, Ludwig Schmidt, Yejin Choi, Caiming Xiong, Ran Xu

2024-11-13

Summary

This paper introduces BLIP3-KALE, a new dataset of 218 million image-text pairs. It aims to improve how machines understand and describe images by combining detailed synthetic captions with factual information drawn from the web.

What's the problem?

The problem is that existing datasets for training machines to understand images rely either on synthetic captions, which are richly descriptive but not always factually accurate, or on web alt-text, which is factual but short and lacking in detail. This makes it hard for models to learn effectively, because they need both rich descriptions and factual accuracy to perform well in real-world applications.

What's the solution?

The authors introduce BLIP3-KALE, which enriches synthetic dense image captions with factual information from web-scale alt-text. In the first stage, large vision-language and language models combine a detailed synthetic caption with an image's alt-text to produce a knowledge-augmented caption. In the second stage, these captions are used to train a specialized vision-language model (VLM) that can generate such captions directly, which lets the authors scale the dataset up to 218 million pairs. Models trained on the resulting data are better at understanding images and producing accurate, factually grounded descriptions.
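To make the two-stage idea concrete, here is a minimal Python sketch of the pipeline's structure. The function names, prompt wording, and toy stand-in models are illustrative assumptions based on the paper's description, not the authors' actual code or models.

```python
# Minimal structural sketch of the two-stage pipeline described above.
# All function names, prompts, and toy stand-in models are illustrative
# assumptions, not the authors' actual code or models.

def stage1_knowledge_augment(image, alt_text, vlm_caption_fn, llm_rewrite_fn):
    """Stage 1: fuse a dense synthetic caption with factual web alt-text."""
    dense_caption = vlm_caption_fn(image)  # descriptive, but may miss real-world facts
    prompt = (
        "Rewrite the caption so it keeps the visual detail but adds the "
        "factual information from the alt-text.\n"
        f"Caption: {dense_caption}\n"
        f"Alt-text: {alt_text}"
    )
    return llm_rewrite_fn(prompt)  # knowledge-augmented caption


def stage2_scale_up(image_alt_pairs, distilled_vlm_fn):
    """Stage 2: a specialized VLM, trained on Stage-1 outputs, captions the
    remaining web-scale images directly (no separate LLM call needed)."""
    return [distilled_vlm_fn(image, alt) for image, alt in image_alt_pairs]


if __name__ == "__main__":
    # Toy stand-ins just to exercise the control flow.
    fake_vlm = lambda image: "a person standing in front of a tall iron tower"
    fake_llm = lambda prompt: "A person stands in front of the Eiffel Tower in Paris."
    print(stage1_knowledge_augment("photo.jpg", "The Eiffel Tower, Paris", fake_vlm, fake_llm))
```

The key design point is that the expensive two-model step (VLM plus LLM) is only needed for the first stage; the distilled VLM learned in stage two produces knowledge-augmented captions in a single pass, which is what makes the web-scale dataset practical.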

Why it matters?

This research is important because it provides a large and diverse dataset that can help train more capable AI models. By bridging the gap between descriptive and factual information, BLIP3-KALE can enhance various applications like image recognition, automated content generation, and accessibility tools for visually impaired users.

Abstract

We introduce BLIP3-KALE, a dataset of 218 million image-text pairs that bridges the gap between descriptive synthetic captions and factual web-scale alt-text. KALE augments synthetic dense image captions with web-scale alt-text to generate factually grounded image captions. Our two-stage approach leverages large vision-language models and language models to create knowledge-augmented captions, which are then used to train a specialized VLM for scaling up the dataset. We train vision-language models on KALE and demonstrate improvements on vision-language tasks. Our experiments show the utility of KALE for training more capable and knowledgeable multimodal models. We release the KALE dataset at https://huggingface.co/datasets/Salesforce/blip3-kale
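Since the dataset is released on Hugging Face, here is a minimal sketch of how one might stream it with the `datasets` library. The split name and the fields inside each example are assumptions; check the dataset card at the link above for the actual schema.

```python
# Minimal sketch of streaming BLIP3-KALE with the Hugging Face `datasets` library.
# The split name and the field names inside each example are assumptions; see
# https://huggingface.co/datasets/Salesforce/blip3-kale for the actual schema.
from datasets import load_dataset

# Streaming avoids downloading all 218 million pairs up front.
kale = load_dataset("Salesforce/blip3-kale", split="train", streaming=True)

# Peek at a few examples to see which fields (captions, image URLs, etc.) are available.
for example in kale.take(3):
    print(example)
```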