RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval

Di Liu, Meng Chen, Baotong Lu, Huiqiang Jiang, Zhenhua Han, Qianxi Zhang, Qi Chen, Chengruidong Zhang, Bailu Ding, Kai Zhang, Chen Chen, Fan Yang, Yuqing Yang, Lili Qiu

2024-09-17

Summary

This paper introduces RetrievalAttention, a training-free method that speeds up how large language models (LLMs) process long texts by retrieving only the most relevant cached information instead of attending to all of it.

What's the problem?

Large language models are powerful, but they become slow and memory-hungry on long texts. The attention mechanism compares every new token against everything that came before, so its cost grows quadratically with the input length, and the cached key-value (KV) vectors quickly fill up GPU memory. As the context gets longer, both latency and memory use grow sharply, making long-context models hard to use in practice.

What's the solution?

RetrievalAttention addresses this by using approximate nearest neighbor search (ANNS) to quickly locate the cached key-value vectors that matter most for each new token. It builds vector indexes over the KV cache in CPU memory and, during generation, retrieves only about 1-3% of the cached keys instead of scanning all of them. Because query vectors and key vectors follow different distributions, off-the-shelf vector indexes work poorly here, so the paper designs an attention-aware vector search algorithm that adapts to the queries, keeping generation fast without losing accuracy. A minimal sketch of the idea follows.
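The sketch below illustrates retrieval-based sparse attention for a single head. It is not the paper's implementation: the exact top-k scan here stands in for the attention-aware ANN index, and the function names, the `topk_ratio` parameter, and the toy dimensions are assumptions chosen for illustration.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def retrieval_attention(query, keys, values, topk_ratio=0.02):
    """Sketch of retrieval-based sparse attention for one head.

    An exact top-k search stands in for the paper's attention-aware ANN
    index; a real system would query an approximate index built over the
    KV cache in CPU memory instead of scoring every cached key.
    """
    n, d = keys.shape
    k = max(1, int(topk_ratio * n))            # e.g. retrieve 1-3% of cached tokens

    scores = keys @ query / np.sqrt(d)          # placeholder for the index lookup
    top_idx = np.argpartition(scores, -k)[-k:]  # indices of the most relevant keys

    weights = softmax(scores[top_idx])          # softmax only over retrieved keys
    return weights @ values[top_idx]

# Toy usage: 8K cached tokens, head dimension 128.
rng = np.random.default_rng(0)
K = rng.standard_normal((8192, 128)).astype(np.float32)
V = rng.standard_normal((8192, 128)).astype(np.float32)
q = rng.standard_normal(128).astype(np.float32)
out = retrieval_attention(q, K, V)
print(out.shape)  # (128,)
```

Restricting the softmax to the retrieved subset is what turns the per-token cost from scanning the whole cache into touching only a few percent of it.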

Why it matters?

This research is important because it allows large language models to handle longer texts more efficiently, making them faster and less demanding on computer resources. This improvement can enhance various applications like chatbots, translation services, and content generation tools, making them more practical for everyday use.

Abstract

Transformer-based Large Language Models (LLMs) have become increasingly important in various domains. However, the quadratic time complexity of the attention operation poses a significant challenge for scaling to longer contexts, due to the extremely high inference latency and the GPU memory consumed by caching key-value (KV) vectors. This paper proposes RetrievalAttention, a training-free approach to accelerate attention computation. To leverage the dynamic sparsity of attention, RetrievalAttention builds approximate nearest neighbor search (ANNS) indexes over the KV vectors in CPU memory and retrieves the most relevant ones via vector search during generation. Due to the out-of-distribution (OOD) gap between query vectors and key vectors, off-the-shelf ANNS indexes still need to scan O(N) data (usually 30% of all keys) for accurate retrieval, which fails to exploit the high sparsity. RetrievalAttention first identifies this OOD challenge of ANNS-based attention and addresses it with an attention-aware vector search algorithm that adapts to queries and accesses only 1-3% of the data, achieving sub-linear time complexity. RetrievalAttention greatly reduces the inference cost of long-context LLMs with much lower GPU memory requirements while maintaining model accuracy. Notably, RetrievalAttention needs only 16GB of GPU memory to serve 128K tokens in an 8B-parameter LLM, generating one token in 0.188 seconds on a single NVIDIA RTX 4090 (24GB).
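To see why the full KV cache is hard to keep on the GPU next to the model weights, here is a rough estimate assuming a Llama-3-8B-like attention configuration (32 layers, 8 KV heads, head dimension 128, fp16 cache); these architectural numbers are an assumption, not stated in the abstract.

```python
# Back-of-envelope KV-cache size for a 128K-token context,
# assuming a Llama-3-8B-like configuration (not from the abstract).
layers, kv_heads, head_dim, bytes_fp16 = 32, 8, 128, 2
tokens = 128 * 1024

bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_fp16   # keys + values
total_gb = tokens * bytes_per_token / 1024**3
print(f"{bytes_per_token / 1024:.0f} KB per token, about {total_gb:.0f} GB for 128K tokens")
# ~128 KB per token, ~16 GB in total -- a large share of an RTX 4090's 24 GB
# before counting model weights, which is why the paper keeps the KV vectors
# and their ANN indexes in CPU memory and retrieves from them on demand.
```

Under these assumptions the cache alone would be roughly 16 GB, which makes offloading it to CPU memory and retrieving only a small fraction per token the natural way to hit the reported 16GB GPU budget.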