
Efficient Inference of Vision Instruction-Following Models with Elastic Cache

Zuyan Liu, Benlin Liu, Jiahui Wang, Yuhao Dong, Guangyi Chen, Yongming Rao, Ranjay Krishna, Jiwen Lu

2024-07-26


Summary

This paper presents a new method called Elastic Cache, designed to improve the efficiency of large vision-language models (LVLMs) that follow instructions. It does this by managing the models' key-value (KV) cache memory more effectively during inference.

What's the problem?

Large vision-language models need a lot of memory for their key-value (KV) caches, which store the intermediate attention states used to understand inputs and generate responses. Traditional cache-management methods simply evict entries judged less important, which can throw away context the model needs to follow instructions accurately. This makes it hard to run these models efficiently, especially on complex multimodal tasks.

What's the solution?

The authors introduce Elastic Cache, which manages the KV cache by merging information instead of simply deleting it. Their method identifies the most important key/value entries (called anchor points) and merges the surrounding, less important entries into these anchors, which preserves context while reducing memory usage. They also use different importance measures for the two stages of operation: one for encoding the instruction and another for generating the output. Their experiments show that Elastic Cache speeds up inference and clearly outperforms existing cache-pruning methods.
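To make the anchor-and-merge idea concrete, here is a minimal sketch. It is not the authors' implementation (their code is in the linked repository); the function name, the averaging-based merge, and the use of attention-derived importance scores are simplifying assumptions for illustration.

```python
import torch

def merge_kv_cache(keys, values, importance, keep_ratio=0.5):
    """Illustrative importance-driven KV-cache merging (not the authors' code).

    keys, values: [seq_len, head_dim] tensors for a single attention head.
    importance:   [seq_len] scores, e.g. accumulated attention weights.
    The highest-scoring tokens become anchor points; every other token's
    key/value is averaged into its nearest anchor, so less important context
    is merged rather than discarded.
    """
    seq_len = keys.shape[0]
    num_anchors = max(1, int(seq_len * keep_ratio))

    # Pick anchors by importance, then restore their original order.
    anchor_idx = torch.topk(importance, num_anchors).indices.sort().values

    merged_k = keys[anchor_idx].clone()
    merged_v = values[anchor_idx].clone()
    counts = torch.ones(num_anchors, 1)

    anchor_set = set(anchor_idx.tolist())
    for i in range(seq_len):
        if i in anchor_set:
            continue
        # Merge this non-anchor token into the positionally closest anchor.
        nearest = torch.argmin((anchor_idx - i).abs()).item()
        merged_k[nearest] += keys[i]
        merged_v[nearest] += values[i]
        counts[nearest] += 1

    # Average so each anchor summarizes itself plus its merged neighbors.
    return merged_k / counts, merged_v / counts
```

Because `keep_ratio` controls how many anchors survive, a scheme like this can hit an arbitrary compression (and thus acceleration) target while still retaining a summary of the dropped tokens.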

Why it matters?

This research matters because it lets large vision-language models run faster and use less memory without throwing away important context. By improving how these models manage their KV caches, Elastic Cache can make instruction-following AI systems that combine images and language more practical to deploy in real-world applications.

Abstract

In the field of instruction-following large vision-language models (LVLMs), the efficient deployment of these models faces challenges, notably due to the high memory demands of their key-value (KV) caches. Conventional cache management strategies for LLMs focus on cache eviction, which often fails to address the specific needs of multimodal instruction-following models. Recognizing this gap, in this paper, we introduce Elastic Cache, a novel approach that benefits from applying distinct acceleration methods for instruction encoding and output generation stages. We investigate the metrics of importance in different stages and propose an importance-driven cache merging strategy to prune redundancy caches. Instead of discarding less important caches, our strategy identifies important key/value vectors as anchor points. Surrounding less important caches are then merged with these anchors, enhancing the preservation of contextual information in the KV caches while yielding an arbitrary acceleration ratio. For instruction encoding, we utilize the frequency to evaluate the importance of caches. Regarding output generation, we prioritize tokens based on their distance with an offset, by which both the initial and most recent tokens are retained. Results on a range of LVLMs demonstrate that Elastic Cache not only boosts efficiency but also notably outperforms existing pruning methods in language generation across various tasks. Code is available at https://github.com/liuzuyan/ElasticCache
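The abstract names two stage-specific importance metrics: attention frequency for instruction encoding, and a distance-with-an-offset ranking for output generation. The sketch below is a speculative reading of those descriptions, not the paper's exact formulas; the function names, the offset handling, and the interpretation of "frequency" as total received attention are all assumptions. Either score could feed a merging routine like the one sketched earlier.

```python
import torch

def encoding_importance(attn_weights):
    """Hedged reading of the frequency metric for the instruction-encoding stage.

    attn_weights: [num_heads, seq_len, seq_len] softmaxed attention maps.
    'Frequency' is interpreted here as the total attention each cached token
    receives across heads and query positions; the paper's exact definition
    may differ.
    """
    return attn_weights.sum(dim=(0, 1))  # -> [seq_len]

def generation_importance(seq_len, offset=4):
    """Hedged reading of the distance-with-offset metric for output generation.

    Scores tokens by recency, with an offset that also keeps the earliest
    (instruction) tokens highly ranked, so both ends of the sequence survive
    merging. The exact formula and the offset value are assumptions.
    """
    positions = torch.arange(seq_len, dtype=torch.float)
    scores = positions            # newer tokens score higher
    scores[:offset] = seq_len     # also keep the first `offset` tokens
    return scores
```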