$VILA^2$: VILA Augmented VILA

Yunhao Fang, Ligeng Zhu, Yao Lu, Yan Wang, Pavlo Molchanov, Jang Hyun Cho, Marco Pavone, Song Han, Hongxu Yin

2024-07-25

Summary

This paper presents $VILA^2$, a new approach that improves visual language models (VLMs) by raising the quality of the data they learn from. Instead of collecting more raw data, the model refines its own training captions and then absorbs knowledge from domain-specialist models.

What's the problem?

As VLMs have become more capable, the quality of their training data has not kept pace. Existing methods either crawl more data from the internet, which offers no guarantee of quality, or distill from proprietary black-box models such as GPT-4V or Gemini, which caps the student model's performance at the teacher's level. Either way, models can learn ineffectively or produce inaccurate results because of poor-quality training data.

What's the solution?

$VILA^2$ addresses this issue with a two-step process. First, in a self-augmentation step, the VLM improves its own training data by recaptioning it, that is, rewriting the image descriptions to make them clearer and more accurate, and then retrains from scratch on this refined dataset; this loop can repeat for several rounds. Once self-augmentation stops yielding gains, $VILA^2$ brings in specialist VLMs, finetuned from the self-augmented model for particular domains, to inject expert knowledge into the generalist model through task-oriented recaptioning and retraining. The combined approach improves performance across a wide range of tasks.
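
To make the two-step pipeline concrete, here is a minimal toy sketch of the loop in Python. Everything in it, the ToyVLM class, train_vlm, recaption, evaluate, finetune_specialist, and the saturation check, is an illustrative assumption rather than the authors' implementation, which retrains full-scale VLMs from scratch in each round.

```python
"""Toy sketch of a VILA^2-style bootstrapping loop (illustrative only)."""

import random


class ToyVLM:
    """Stand-in for a visual language model; .caption() rewrites a caption."""

    def __init__(self, quality):
        self.quality = quality

    def caption(self, image):
        return f"refined caption of {image} (quality={self.quality:.2f})"


def train_vlm(data):
    # Placeholder: pretend richer captions yield a better model.
    quality = sum(len(caption) for _, caption in data) / (100.0 * len(data))
    return ToyVLM(quality)


def recaption(model, data):
    # The VLM (or a specialist) rewrites the captions of the pretraining set.
    return [(img, model.caption(img)) for img, _ in data]


def evaluate(model):
    # Placeholder benchmark score, used only to detect saturation.
    return model.quality + random.uniform(-0.01, 0.01)


def finetune_specialist(model, task):
    # Placeholder for a domain expert (e.g. spatial reasoning, grounding, OCR).
    return ToyVLM(model.quality * 1.05)


# Step 1: self-augmentation -- recaption with the model itself, retrain, repeat.
data = [(f"img_{i}.jpg", "noisy alt-text") for i in range(8)]
vlm = train_vlm(data)
best = evaluate(vlm)
for round_idx in range(5):
    data = recaption(vlm, data)      # VLM rewrites its own captions
    vlm = train_vlm(data)            # retrain from scratch on the refined data
    score = evaluate(vlm)
    if score <= best:                # stop once self-augmentation saturates
        break
    best = score

# Step 2: specialist augmentation -- experts recaption, then the generalist retrains.
for task in ("spatial", "grounding", "ocr"):
    specialist = finetune_specialist(vlm, task)
    data = recaption(specialist, data)
vlm = train_vlm(data)
print("final toy score:", evaluate(vlm))
```

The only point the sketch is meant to convey is the ordering: the model first bootstraps its own captions until gains flatten, and only then are specialist-generated captions folded back into one final generalist retraining run.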

Why it matters?

$VILA^2$ matters because it lets VLMs learn from higher-quality data without extensive human annotation, making training more efficient and cost-effective. By improving how these models understand and describe visual content, $VILA^2$ achieves state-of-the-art results among open-source models on the MMMU leaderboard and strong performance on tasks like visual question answering and image captioning.

Abstract

Visual language models (VLMs) have rapidly progressed, driven by the success of large language models (LLMs). While model architectures and training infrastructures advance rapidly, data curation remains under-explored. When data quantity and quality become a bottleneck, existing work either directly crawls more raw data from the Internet that does not have a guarantee of data quality or distills from black-box commercial models (e.g., GPT-4V / Gemini) causing the performance upper bounded by that model. In this work, we introduce a novel approach that includes a self-augment step and a specialist-augment step to iteratively improve data quality and model performance. In the self-augment step, a VLM recaptions its own pretraining data to enhance data quality, and then retrains from scratch using this refined dataset to improve model performance. This process can iterate for several rounds. Once self-augmentation saturates, we employ several specialist VLMs finetuned from the self-augmented VLM with domain-specific expertise, to further infuse specialist knowledge into the generalist VLM through task-oriented recaptioning and retraining. With the combined self-augmented and specialist-augmented training, we introduce VILA^2 (VILA-augmented-VILA), a VLM family that consistently improves the accuracy on a wide range of tasks over prior art, and achieves new state-of-the-art results on MMMU leaderboard among open-sourced models.