Adaptive Semantic Prompt Caching with VectorQ
Luis Gaspar Schroeder, Shu Liu, Alejandro Cuadron, Mark Zhao, Stephan Krusche, Alfons Kemper, Matei Zaharia, Joseph E. Gonzalez
2025-02-10
Summary
This paper introduces VectorQ, a new system that improves how AI models reuse previously generated responses to similar questions, making them faster and more accurate.
What's the problem?
Current AI systems use a fixed rule to decide if a new question is similar enough to a previous one to reuse the old answer. This one-size-fits-all approach doesn't work well for all types of questions, leading to slow responses or incorrect answers.
What's the solution?
The researchers created VectorQ, which learns to adjust its similarity rules based on the specific type of question being asked. Instead of using one fixed threshold, VectorQ uses different thresholds for different types of questions, adapting to how complex or uncertain each question is.
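The idea of per-question thresholds can be sketched in a toy semantic cache. This is a minimal illustration, not the paper's actual algorithm: the names (`AdaptiveSemanticCache`, `feedback`) and the simple tighten/loosen update rule are hypothetical stand-ins for VectorQ's learned threshold regions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class AdaptiveSemanticCache:
    """Toy semantic cache where each cached entry carries its OWN
    similarity threshold (a hypothetical simplification of the
    per-embedding threshold regions described in the paper)."""

    def __init__(self, init_threshold=0.9, step=0.02):
        self.entries = []  # list of (embedding, response, threshold)
        self.init_threshold = init_threshold
        self.step = step

    def insert(self, emb, response):
        # New entries start with a conservative default threshold.
        self.entries.append((emb, response, self.init_threshold))

    def lookup(self, emb):
        """Return (index, response) if the nearest neighbor clears its
        entry-specific threshold, else (None, None) for a cache miss."""
        if not self.entries:
            return None, None
        sims = [cosine(emb, e) for e, _, _ in self.entries]
        i = int(np.argmax(sims))
        if sims[i] >= self.entries[i][2]:
            return i, self.entries[i][1]
        return None, None

    def feedback(self, i, correct):
        """Adapt entry i's threshold: loosen it after a correct reuse,
        tighten it after an incorrect one."""
        e, r, t = self.entries[i]
        t = max(0.0, t - self.step) if correct else min(1.0, t + self.step)
        self.entries[i] = (e, r, t)
```

For example, after an incorrect cache hit, calling `feedback(i, correct=False)` raises that entry's threshold, so future prompts must be more similar before its response is reused; other entries keep their own thresholds unchanged.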
Why does it matter?
This matters because it makes AI systems much faster and more accurate when answering questions. VectorQ can increase the number of times an AI reuses previous answers by up to 12 times, while also reducing errors by up to 92%. This means AI assistants can respond more quickly and accurately to a wider range of questions, making them more useful in real-world applications.
Abstract
Semantic prompt caches reduce the latency and cost of large language model (LLM) inference by reusing cached LLM-generated responses for semantically similar prompts. Vector similarity metrics assign a numerical score to quantify the similarity between an embedded prompt and its nearest neighbor in the cache. Existing systems rely on a static threshold to classify whether the similarity score is sufficiently high to result in a cache hit. We show that this one-size-fits-all threshold is insufficient across different prompts. We propose VectorQ, a framework to learn embedding-specific threshold regions that adapt to the complexity and uncertainty of an embedding. Through evaluations on a combination of four diverse datasets, we show that VectorQ consistently outperforms state-of-the-art systems across all static thresholds, achieving up to 12x increases in cache hit rate and error rate reductions up to 92%.