NVILA: Efficient Frontier Visual Language Models

Zhijian Liu, Ligeng Zhu, Baifeng Shi, Zhuoyang Zhang, Yuming Lou, Shang Yang, Haocheng Xi, Shiyi Cao, Yuxian Gu, Dacheng Li, Xiuyu Li, Yunhao Fang, Yukang Chen, Cheng-Yu Hsieh, De-An Huang, An-Chieh Cheng, Vishwesh Nath, Jinyi Hu, Sifei Liu, Ranjay Krishna, Daguang Xu, Xiaolong Wang

2024-12-06

Summary

This paper introduces NVILA, a new family of visual language models (VLMs) designed to improve both efficiency and accuracy in processing images and videos.

What's the problem?

While visual language models have become much more accurate, that accuracy has come at the cost of heavy compute and memory demands. Training and running these models is expensive and slow, which limits their practical use in real-world applications.

What's the solution?

NVILA addresses this problem with a 'scale-then-compress' design. First, it scales up the spatial resolution of images and the temporal resolution (number of sampled frames) of videos so that fine detail is preserved; it then compresses this richer representation into a smaller number of visual tokens. This lets NVILA handle high-resolution images and long videos without overwhelming the language model with tokens. The researchers also systematically improved the model's efficiency throughout its entire lifecycle, from training and fine-tuning to deployment. As a result, NVILA matches or exceeds the accuracy of other leading models while significantly reducing training costs and inference latency.
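As a back-of-the-envelope illustration of why compression matters after scaling, here is a toy token count in Python. The patch size, resolutions, and 2x2 pooling ratio are hypothetical stand-ins chosen for round numbers, not NVILA's actual configuration:

```python
# Toy token arithmetic for "scale-then-compress" (illustrative numbers only).
patch = 14                       # hypothetical ViT patch size
tokens = lambda res: (res // patch) ** 2

base = tokens(448)               # 1024 tokens at the base resolution
scaled = tokens(896)             # 4096 tokens after 2x spatial scaling
compressed = scaled // (2 * 2)   # back to 1024 after 2x2 token pooling

print(base, scaled, compressed)  # 1024 4096 1024
```

The language model backbone ends up processing no more tokens than before, but each token now summarizes a patch drawn from a higher-resolution view of the input.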

Why it matters?

This research matters because it makes advanced visual language models cheaper and more practical for applications such as video analysis, document processing, and visual question answering. By improving efficiency without sacrificing accuracy, NVILA helps developers build faster, less expensive AI systems for everyday use.

Abstract

Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces NVILA, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its model architecture by first scaling up the spatial and temporal resolutions, and then compressing visual tokens. This "scale-then-compress" approach enables NVILA to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of NVILA throughout its entire lifecycle, from training and fine-tuning to deployment. NVILA matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5X, fine-tuning memory usage by 3.4X, pre-filling latency by 1.6-2.2X, and decoding latency by 1.2-2.8X. We will soon make our code and models available to facilitate reproducibility.
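To make the 'scale-then-compress' pipeline concrete, below is a minimal PyTorch sketch. The bilinear upsampling, linear patch projection, and average pooling are simple stand-ins for NVILA's actual vision encoder and token-compression operator, which the paper describes in detail; the function and parameter names here are invented for illustration:

```python
import torch
import torch.nn.functional as F

def scale_then_compress(image, scale=2, patch=14, pool=2, dim=1024):
    """Toy 'scale-then-compress': upsample, tokenize into patches,
    then pool tokens spatially to cut their count by pool**2."""
    b, c, h, w = image.shape

    # 1) Scale: raise spatial resolution (bilinear as a stand-in).
    hi_res = F.interpolate(image, scale_factor=scale, mode="bilinear",
                           align_corners=False)

    # 2) Encode: flatten each patch x patch tile and project it to a
    #    dim-dimensional visual token (stand-in for a real encoder).
    tiles = F.unfold(hi_res, kernel_size=patch, stride=patch)     # (b, c*p*p, n)
    proj = torch.nn.Linear(c * patch * patch, dim)
    tokens = proj(tiles.transpose(1, 2))                          # (b, n, dim)

    # 3) Compress: average-pool the token grid in pool x pool groups.
    gh, gw = (h * scale) // patch, (w * scale) // patch
    grid = tokens.transpose(1, 2).reshape(b, dim, gh, gw)
    grid = F.avg_pool2d(grid, kernel_size=pool)
    return grid.flatten(2).transpose(1, 2)                        # (b, n/pool**2, dim)

out = scale_then_compress(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 256, 1024]): 1024 raw tokens -> 256
```

The point of the sketch is the shape bookkeeping: scaling multiplies the raw token count, and the compression step hands the language model a budget-sized token sequence again, which is what keeps pre-filling and decoding fast.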