Auditing Prompt Caching in Language Model APIs
Chenchen Gu, Xiang Lisa Li, Rohith Kuditipudi, Percy Liang, Tatsunori Hashimoto
2025-02-12

Summary
This paper shows how prompt caching in large language model (LLM) APIs can create privacy and security risks, such as leaking information about other users' prompts, and how the researchers developed timing-based statistical audits to detect these risks in real-world API providers.
What's the problem?
When LLM providers cache prompts to speed up responses, the resulting timing differences create a side channel. If the cache is shared across users, an attacker can infer whether another user recently submitted a particular prompt (or prompt prefix) simply by observing how quickly the API responds, which could leak sensitive information. Additionally, timing differences from caching can reveal details about a model's internal architecture, which providers often want to keep private.
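To make the side channel concrete, here is a minimal Python sketch of the attack idea (illustrative only, not code from the paper): `send_prompt` is a hypothetical stand-in for a real API call, and the cached prefix is simulated, but the key observation carries over: an unusually fast response suggests that someone recently sent a prompt with the same prefix.

```python
import time

# Simulated shared-cache state: a prefix some other user recently sent.
# (Hypothetical example; in a real attack, this is exactly what the attacker
# is trying to learn.)
RECENTLY_CACHED_PREFIXES = {"Patient record for Jane Doe:"}

def send_prompt(prompt: str) -> None:
    """Hypothetical stand-in for an API call: cached prefixes respond faster."""
    cached = any(prompt.startswith(p) for p in RECENTLY_CACHED_PREFIXES)
    time.sleep(0.05 if cached else 0.30)  # simulated time to first token

def looks_cached(prompt: str, threshold: float = 0.15) -> bool:
    """Time one request and flag an unusually fast response as a likely cache hit."""
    start = time.perf_counter()
    send_prompt(prompt)
    return (time.perf_counter() - start) < threshold

# An attacker probing whether a specific prefix is already in a shared cache.
print(looks_cached("Patient record for Jane Doe: summarize the last visit"))
```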
What's the solution?
The researchers developed statistical audits to detect prompt caching in LLM API providers. The audits send carefully chosen prompts and measure API response times: prompts that hit the cache come back measurably faster than prompts that do not, and statistical tests on these timing differences can confirm whether caching is happening and whether the cache is shared across users. Using this method, they detected global cache sharing across users in seven API providers, including OpenAI, which could lead to privacy leaks about users' prompts. The timing variations also revealed architectural details: they found evidence that OpenAI's embedding model is a decoder-only Transformer, which was not previously public.
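As a rough illustration of what such a timing-based audit could look like in code (a sketch under the assumption that the auditor can collect latencies for prompts expected to hit the cache and for fresh prompts; the paper's exact measurement procedure and statistical test may differ), the following uses a one-sided Mann-Whitney U test to check whether the "expected hit" latencies are significantly faster:

```python
from scipy.stats import mannwhitneyu

def detect_caching(hit_latencies, miss_latencies, alpha=0.01):
    """Test whether latencies for prompts expected to hit the cache are
    significantly smaller than latencies for fresh (cache-miss) prompts."""
    # One-sided nonparametric test: H1 = hit latencies are stochastically smaller.
    _, p_value = mannwhitneyu(hit_latencies, miss_latencies, alternative="less")
    return p_value < alpha, p_value

# Synthetic latencies in seconds; a real audit would time live API requests,
# ideally issuing the "hit" probes from a different account than the one that
# warmed the cache, to distinguish per-user caching from global cache sharing.
hits = [0.11, 0.12, 0.10, 0.13, 0.11, 0.12]
misses = [0.34, 0.31, 0.36, 0.30, 0.33, 0.35]
detected, p = detect_caching(hits, misses)
print(f"caching detected: {detected} (p = {p:.3g})")
```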
Why does it matter?
This matters because prompt caching is widely used to make AI systems faster and cheaper, but it comes with hidden risks. By identifying these issues, this research pushes providers to be more transparent and careful about how they implement caching. It also helps improve the security and privacy of AI systems, which is increasingly important as they become part of everyday life.
Abstract
Prompt caching in large language models (LLMs) results in data-dependent timing variations: cached prompts are processed faster than non-cached prompts. These timing differences introduce the risk of side-channel timing attacks. For example, if the cache is shared across users, an attacker could identify cached prompts from fast API response times to learn information about other users' prompts. Because prompt caching may cause privacy leakage, transparency around the caching policies of API providers is important. To this end, we develop and conduct statistical audits to detect prompt caching in real-world LLM API providers. We detect global cache sharing across users in seven API providers, including OpenAI, resulting in potential privacy leakage about users' prompts. Timing variations due to prompt caching can also result in leakage of information about model architecture. Namely, we find evidence that OpenAI's embedding model is a decoder-only Transformer, which was previously not publicly known.