APE: Faster and Longer Context-Augmented Generation via Adaptive Parallel Encoding
Xinyu Yang, Tianqi Chen, Beidi Chen
2025-02-11
Summary
This paper introduces Adaptive Parallel Encoding (APE), a method that makes AI systems faster and more accurate at using large amounts of information to answer questions or generate responses.
What's the problem?
When AI systems combine many pieces of context to generate a response, they typically have to re-encode all of that information for every request, which is slow. Existing shortcuts, such as parallel encoding, let each context be pre-computed and cached separately, but applying them directly causes a noticeable drop in accuracy because the attention distribution no longer matches what the model would produce if it read everything in sequence.
What's the solution?
The researchers created APE, which combines three adjustments, a shared prefix, an attention temperature, and a scaling factor, to realign parallel encoding's attention distribution with sequential encoding. This allows the AI to pre-process and store each context's states efficiently and reuse them at inference time without losing accuracy. APE speeds up the process significantly while maintaining high performance in tasks like Retrieval-Augmented Generation (RAG) and In-Context Learning (ICL).
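The idea behind the attention adjustments can be sketched in a toy single-query attention computation. This is a minimal illustration, not the paper's implementation: `temperature` and `scale` stand in for APE's attention temperature and scaling factor, and the default values here are illustrative rather than the paper's tuned hyperparameters.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ape_attention(q, cached_keys, cached_values, temperature=0.2, scale=0.9):
    """Toy attention for one query vector over independently encoded contexts.

    cached_keys / cached_values: one list of d-dimensional vectors per
    pre-encoded context. Because each context was encoded on its own (with
    positions reused), naive concatenation flattens the attention
    distribution; a temperature below 1 re-sharpens it and the scaling
    factor re-weights the context scores.
    """
    d = len(q)
    keys = [k for ctx in cached_keys for k in ctx]      # concat cached KV states
    values = [v for ctx in cached_values for v in ctx]
    scores = [dot(q, k) / math.sqrt(d) for k in keys]
    weights = softmax([scale * s / temperature for s in scores])
    # Weighted sum of values, dimension by dimension
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]
```

With `temperature=1.0` and `scale=1.0` this reduces to plain attention over the concatenated caches, i.e. naive parallel encoding; the two knobs are what nudge the distribution back toward sequential behavior.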
Why it matters?
This matters because it makes AI systems much faster and more efficient at handling large amounts of information. This could improve applications like chatbots, search engines, or any system that needs to quickly process and respond to complex queries, making them more practical for real-world use.
Abstract
Context-augmented generation (CAG) techniques, including RAG and ICL, require the efficient combination of multiple contexts to generate responses to user queries. Directly inputting these contexts as a sequence introduces a considerable computational burden by re-encoding the combined selection of contexts for every request. To address this, we explore the promising potential of parallel encoding to independently pre-compute and cache each context's KV states. This approach enables the direct loading of cached states during inference while accommodating more contexts through position reuse across contexts. However, due to misalignments in attention distribution, directly applying parallel encoding results in a significant performance drop. To enable effective and efficient CAG, we propose Adaptive Parallel Encoding (APE), which brings shared prefix, attention temperature, and scaling factor to align the distribution of parallel encoding with sequential encoding. Results on RAG and ICL tasks demonstrate that APE can preserve 98% and 93% sequential encoding performance using the same inputs while outperforming parallel encoding by 3.6% and 7.9%, respectively. It also scales to many-shot CAG, effectively encoding hundreds of contexts in parallel. Efficiency evaluation shows that APE can achieve an end-to-end 4.5× speedup by reducing prefilling time by 28× for a 128K-length context.
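The pre-compute-and-cache workflow the abstract describes can be sketched as a small cache keyed by context content. This is a hypothetical illustration: `encode_fn` stands in for a real model's prefill pass, and the class and method names are invented for this sketch.

```python
import hashlib

class ContextCache:
    """Minimal sketch of per-context KV caching for CAG."""

    def __init__(self, encode_fn):
        self.encode = encode_fn   # stand-in for a model's prefill pass
        self.store = {}

    def get_kv(self, context: str):
        # Each distinct context is encoded once, independently of the others
        key = hashlib.sha256(context.encode("utf-8")).hexdigest()
        if key not in self.store:
            self.store[key] = self.encode(context)
        return self.store[key]

    def prefill(self, contexts):
        # Parallel encoding: cached per-context states were computed
        # independently, so serving a request just loads and concatenates them
        return [kv for c in contexts for kv in self.get_kv(c)]
```

The saving comes from the second request onward: any context seen before is loaded from the cache instead of being re-encoded, which is where the reported reduction in prefilling time originates.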