
Commit 0a42d95

Merge pull request #21 from Sherlock113/docs/prefix-caching
docs: Update prefix caching
2 parents 7cd46d7 + bd4ef7d commit 0a42d95

File tree: 5 files changed (+95, -29 lines)


docs/inference-optimization/data-tensor-pipeline-expert-hybrid-parallelism.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 8
+sidebar_position: 9
 description: Understand the differences between data, tensor, pipeline, expert and hybrid parallelisms.
 keywords:
 - LLM inference optimization

docs/inference-optimization/kv-cache-utilization-aware-load-balancing.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 6
+sidebar_position: 8
 description: Route LLM requests based on KV cache usage for faster, smarter inference.
 keywords:
 - KV cache

docs/inference-optimization/offline-batch-inference.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 9
+sidebar_position: 10
 description: Run predictions at scale with offline batch inference for efficient, non-real-time processing.
 keywords:
 - Offline batch inference
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+---
+sidebar_position: 7
+description: Challenges in applying prefix caching
+keywords:
+- Prefix caching, prompt caching, context caching
+- KV cache, KV caching
+- Prefix cache-aware routing
+- Distributed inference, distributed LLM inference
+- Inference optimization
+- Dynamo, SGLang, vLLM, llm-d
+---
+
+# Prefix cache-aware routing
+
+In practice, applying prefix caching in a distributed way still has challenges. For example:
+
+- How can a new request be routed to the worker that already has the right prefix cached?
+- How does the router know what’s in each worker’s cache?
+
+![prefix-caching-aware-routing.png](./img/prefix-caching-aware-routing.png)
+
+Different open-source projects are exploring their own approaches to prefix cache-aware routing:
+
+- **Worker-reported prefix status**
+
+  [Dynamo](https://github.com/ai-dynamo/dynamo) has workers actively report which prefixes they’ve cached. The router then uses this real-time data to make smart routing decisions.
+
+- **Router-predicted cache status**
+
+  [SGLang](https://github.com/sgl-project/sglang) maintains an approximate radix tree for each worker based on past requests. This helps the router predict which worker is most likely to have the needed prefix, without constant updates from the workers.
+
+- **Hybrid efforts**
+  - The Gateway API Inference Extension project is [exploring multiple strategies to implement a routing algorithm on EPP](https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/498):
+    - **Prefix affinity consistent hashing**: Group requests with similar prefixes to the same worker.
+    - **Approximate prefix cache on the router**: Let the router maintain an approximate lookup cache of the prefix caches on all the backend servers.
+    - **Accurate prefix cache on the router**: Gather KV cache information reported by model servers.
+  - The [llm-d](https://github.com/llm-d/llm-d) project uses a component called Inference Scheduler to implement filtering and scoring algorithms, and makes routing decisions based on a combination of factors like cache availability, prefill/decode status, SLA and load.
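To make the router-side bookkeeping described above more concrete, here is a minimal Python sketch of prefix cache-aware worker selection. It is an illustration only, not the actual algorithm of Dynamo, SGLang, the Gateway API Inference Extension, or llm-d; the block size, hashing scheme, and load penalty are arbitrary assumptions.

```python
# Minimal sketch of prefix cache-aware routing (illustrative only). The router
# tracks, per worker, an approximate set of cached prefix-block hashes and
# prefers the worker with the longest matching prefix, breaking ties by load.
import hashlib

BLOCK_SIZE = 16  # tokens per cache block (assumed)

def block_hashes(tokens: list[int]) -> list[str]:
    """Hash each cumulative block of the prompt, mirroring block-level KV caching."""
    hashes = []
    for end in range(BLOCK_SIZE, len(tokens) + 1, BLOCK_SIZE):
        prefix = ",".join(map(str, tokens[:end]))
        hashes.append(hashlib.sha256(prefix.encode()).hexdigest())
    return hashes

def pick_worker(tokens: list[int], workers: dict[str, dict]) -> str:
    """Score each worker by matched prefix blocks minus a small load penalty."""
    prompt_blocks = block_hashes(tokens)
    def score(name: str) -> float:
        cached = workers[name]["cached_blocks"]  # approximate: reported or predicted
        matched = 0
        for h in prompt_blocks:  # the match must be contiguous from the start
            if h in cached:
                matched += 1
            else:
                break
        return matched - 0.1 * workers[name]["load"]
    return max(workers, key=score)

workers = {
    "worker-a": {"cached_blocks": set(), "load": 2},
    "worker-b": {"cached_blocks": set(block_hashes(list(range(64)))), "load": 5},
}
print(pick_worker(list(range(80)), workers))  # worker-b wins on prefix overlap
```

In a real deployment the cached-block sets would come from worker reports (as in Dynamo) or from router-side prediction (as in SGLang), and the scoring would also weigh prefill/decode status and SLOs.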
Lines changed: 55 additions & 26 deletions
@@ -1,22 +1,34 @@
 ---
-sidebar_position: 7
+sidebar_position: 6
 description: Prefix caching speeds up LLM inference by reusing shared prompt KV cache across requests.
 keywords:
-- Prefix caching, prompt caching
-- KV cache
-- Prefix cache-aware routing
+- Prefix caching, prompt caching, context caching
+- KV cache, KV caching
 - Distributed inference, distributed LLM inference
 - Inference optimization
 - Dynamo, SGLang, vLLM, llm-d
 ---
 
 import LinkList from '@site/src/components/LinkList';
+import Button from '@site/src/components/Button';
 
 # Prefix caching
 
-The term "KV cache" originally described caching within a single inference request. As mentioned previously, LLMs work autoregressively during decode as they output the next new token based on the previously generated tokens (i.e. reusing their KV cache). Without the KV cache, the model needs to recompute everything for the previous tokens in each decode step, which would be a huge waste of resources.
+Prefix caching (also known as prompt caching or context caching) is one of the most effective techniques to reduce latency and cost in LLM inference. It's especially useful in production workloads with repeated prompt structures, such as chat systems, AI agents, and RAG pipelines.
 
-When extending this caching concept across multiple requests, it’s more accurate to call it **prefix caching** or **prompt caching**. The idea is simple: By caching the KV cache of an existing query, a new query that shares the same prefix can skip recomputing that part of the prompt. Instead, it directly reuses the cached results, reducing computational load and speeding up inference.
+The idea is simple: By caching the KV cache of an existing query, a new query that shares the same prefix can skip recomputing that part of the prompt. Instead, it directly reuses the cached results.
+
+Prefix caching is different from simple semantic caching, where the full input and output text are stored in a database and only an exact match (or a sufficiently similar query) can hit the cache and return immediately.
+
+## How does prefix caching work?
+
+1. During prefill, the model performs a forward pass over the entire input and builds up a key-value (KV) cache for attention computation.
+2. During decode, the model generates output tokens one by one, using the cached states from the prefill stage. The attention mechanism computes interactions between each new token and all previous tokens, and the resulting KV pairs are stored in GPU memory.
+3. For a new request with a matching prefix, you can skip the forward pass for the cached part and directly resume from the last token of the prefix.
+
+:::important
+This works only when the prefix is exactly identical, including whitespace and formatting. Even a single character difference breaks the cache.
+:::
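To see these steps in practice, the snippet below is a minimal sketch of enabling automatic prefix caching in vLLM's offline API. The flag name, default behavior, and example model are assumptions to check against your vLLM version's documentation.

```python
# Minimal sketch: automatic prefix caching with vLLM's offline API.
# Flag name and model choice are illustrative; verify against your vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enable_prefix_caching=True)
params = SamplingParams(max_tokens=128)

system = "You are a helpful AI writer. Please write in a professional manner.\n"

# The first request prefills the shared system prompt and caches its KV blocks.
llm.generate([system + "User: Draft a short product announcement."], params)

# The second request shares the exact same prefix, so its prefill can reuse
# the cached blocks and only compute the new user message.
llm.generate([system + "User: Summarize this week's release notes."], params)
```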
 
 For example, consider a chatbot with this system prompt:
 
@@ -26,34 +38,51 @@ You are a helpful AI writer. Please write in a professional manner.
 
 This prompt doesn’t change from one conversation to the next. Instead of recalculating it every time, you store its KV cache once. Then, when new messages come in, you reuse this stored prefix cache, only processing the new part of the prompt.
 
-## Prefix cache-aware routing
+## What is the difference between KV caching and prefix caching?
+
+KV caching stores the intermediate attention states of each token in GPU memory. The term was originally used to describe caching within a **single inference request**, where it is especially critical for speeding up the decoding stage.
+
+LLMs work autoregressively during decode as they output the next new token based on the previously generated tokens (i.e. reusing their KV cache). Without the KV cache, the model needs to recompute everything for the previous tokens in each decode step (and the context grows with every step), which would be a huge waste of resources.
+
+When extending this caching concept across **multiple requests**, it’s more accurate to call it prefix caching. Since the KV cache for a token depends only on the tokens before it, different requests with identical prefixes can reuse the same cache of the prefix tokens and avoid recomputing them.
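The cross-request reuse described above can be illustrated with a toy sketch. No real model is involved; strings stand in for the key/value tensors an engine would keep on the GPU, and the prefix-keyed store is purely illustrative.

```python
# Toy sketch (no real model): cross-request prefix reuse.

def compute_kv(token: str) -> str:
    return f"kv({token})"  # placeholder for the key/value projection of one token

# Shared across requests: KV entries keyed by the exact token prefix.
prefix_store: dict[tuple[str, ...], list[str]] = {}

def prefill(tokens: list[str]) -> list[str]:
    """Build the KV cache for `tokens`, reusing the longest cached prefix."""
    kv: list[str] = []
    reused = 0
    for end in range(len(tokens), 0, -1):  # try the longest matching prefix first
        if tuple(tokens[:end]) in prefix_store:
            kv = list(prefix_store[tuple(tokens[:end])])
            reused = end
            break
    for tok in tokens[reused:]:            # compute KV only for the uncached suffix
        kv.append(compute_kv(tok))
        prefix_store[tuple(tokens[: len(kv)])] = list(kv)
    return kv

prefix = ["You", "are", "a", "helpful", "AI", "writer", "."]
kv1 = prefill(prefix + ["Hi"])         # computes KV for all 8 tokens
kv2 = prefill(prefix + ["Summarize"])  # recomputes only the last token
assert kv1[:7] == kv2[:7]
```

Within a single request, the same cache simply keeps growing during decode as each new token's KV entries are appended; prefix caching extends that reuse to any later request that starts with the identical tokens.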
+
+## How to structure prompts for maximum cache hits
 
-In practice, applying prefix caching still has challenges. For example:
+Prefix caching only helps when prompts are consistent. Here are some best practices to maximize cache hit rates:
 
-- How can a new request be routed to the worker that already has the right prefix cached?
-- How does the router know what’s in each worker’s cache?
+- **Front-load static content**: Place any constant or rarely changing information at the beginning of your prompt. This could include system messages, context, or instructions that remain the same across multiple queries. Move dynamic or user-specific content to the end of your prompt.
+- **Batch similar requests**: Group together queries (especially when serving multiple users or agents) that share the same prefix so that cached results can be reused efficiently.
+- **Avoid dynamic elements in the prefix**: Don’t insert timestamps, request IDs, or any other per-request variables early in the prompt. These lower your cache hit rate.
+- **Use deterministic serialization**: Make sure your context or memory serialization (e.g. JSON) is stable in key ordering and structure. Non-deterministic serialization leads to cache misses even if the content is logically the same.
+- **Monitor and analyze cache hit rates**: Regularly review your cache performance to identify opportunities for optimization.
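As a small illustration of the first and fourth points in the list above, the sketch below keeps static instructions in a fixed prefix, serializes per-request context deterministically, and places dynamic content last. The template wording beyond the handbook's example system prompt is made up.

```python
# Minimal sketch of cache-friendly prompt construction: static content first,
# deterministic serialization, dynamic values only at the end.
import json

# Static prefix: identical bytes on every request, so its KV cache can be reused.
STATIC_PREFIX = (
    "You are a helpful AI writer. Please write in a professional manner.\n"
    "Follow the style guide provided in the context.\n"
)

def build_prompt(context: dict, user_message: str) -> str:
    # sort_keys + fixed separators keep the serialized context byte-identical
    # whenever the content is logically the same.
    serialized = json.dumps(context, sort_keys=True, separators=(",", ":"))
    # Dynamic, per-request content goes last so it never invalidates the prefix.
    return f"{STATIC_PREFIX}Context: {serialized}\nUser: {user_message}\n"

# Both prompts share the same byte prefix up to the user message, so a prefix
# cache can skip recomputing everything before it.
p1 = build_prompt({"audience": "developers", "tone": "formal"}, "Draft a release note.")
p2 = build_prompt({"tone": "formal", "audience": "developers"}, "Summarize the changelog.")
assert p1.split("User:")[0] == p2.split("User:")[0]
```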
 
-![prefix-caching-aware-routing.png](./img/prefix-caching-aware-routing.png)
+## Adoption and performance gains
+
+Prefix caching can reduce compute and latency by an order of magnitude in some use cases.
+
+- Anthropic Claude Sonnet offers [prompt caching](https://www.anthropic.com/news/prompt-caching) with up to 90% cost savings and 85% latency reduction for long prompts.
+- Google Gemini [discounts cached tokens](https://ai.google.dev/gemini-api/docs/caching?lang=python) and charges for storage separately.
+- Frameworks like vLLM, TensorRT-LLM, and SGLang support automatic prefix caching for different open-source LLMs.
+
+In agent workflows, the benefit is even more pronounced. Some use cases have input-to-output token ratios of 100:1, making the cost of reprocessing large prompts disproportionately high.
+
+## Limitations
+
+For applications with long, repetitive prompts, prefix caching can significantly reduce both latency and cost. Over time, however, your KV cache can grow quite large. GPU memory is finite, and storing long prefixes across many users can eat up space quickly. You’ll need cache eviction strategies or memory tiering.
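For illustration, an eviction policy over cached prefix blocks might look like the toy LRU sketch below. This is not any particular engine's implementation; production engines typically evict at the block level with reference counting and may spill cold blocks to CPU memory or disk.

```python
# Toy LRU eviction for cached prefix blocks (illustrative only).
from collections import OrderedDict

class PrefixCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks: OrderedDict[str, bytes] = OrderedDict()  # block hash -> KV data

    def get(self, block_hash: str) -> bytes | None:
        if block_hash not in self.blocks:
            return None
        self.blocks.move_to_end(block_hash)      # mark as most recently used
        return self.blocks[block_hash]

    def put(self, block_hash: str, kv_data: bytes) -> None:
        self.blocks[block_hash] = kv_data
        self.blocks.move_to_end(block_hash)
        while len(self.blocks) > self.capacity:  # evict least recently used blocks
            self.blocks.popitem(last=False)

cache = PrefixCache(capacity_blocks=2)
cache.put("blk-a", b"...")
cache.put("blk-b", b"...")
cache.get("blk-a")                               # touch blk-a
cache.put("blk-c", b"...")                       # evicts blk-b
assert cache.get("blk-b") is None
```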
+
+The open-source community is actively working on distributed serving strategies. See [prefix cache-aware routing](./prefix-caching-cache-aware-routing) for details.
+
+---
 
-Different open-source projects are exploring their own approaches to prefix cache-aware routing:
+Optimizing LLM prefix caching requires flexible customization in your LLM serving and infrastructure stack. At Bento, we provide the infrastructure for dedicated and customizable LLM deployments with fast auto-scaling and scaling-to-zero capabilities to ensure resource efficiency.
 
-- **Worker-reported prefix status**
-
-  [Dynamo](https://github.com/ai-dynamo/dynamo) has workers actively report which prefixes they’ve cached. The router then uses this real-time data to make smart routing decisions.
-
-- **Router-predicted cache status**
-
-  [SGLang](https://github.com/sgl-project/sglang) maintains an approximate radix tree for each worker based on past requests. This helps the router predict which worker is most likely to have the needed prefix, without constant updates from the workers.
-
-- **Hybrid efforts**
-  - The Gateway API Inference Extension project is [exploring multiple strategies to implement a routing algorithm on EPP](https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/498):
-    - **Prefix affinity consistent hashing**: Group requests with similar prefixes to the same worker.
-    - **Approximate prefix cache on the router**: Let the router maintain an approximate lookup cache of the prefix caches on all the backend servers.
-    - **Accurate prefix cache on the router**: Gather KV cache information reported by model servers.
-  - The [llm-d](https://github.com/llm-d/llm-d) project uses a component called Inference Scheduler to implement filtering and scoring algorithms, and makes routing decisions based on a combination of factors like cache availability, prefill/decode status, SLA and load.
+<div style={{ margin: '3rem 0' }}>
+[<Button>Talk to us</Button>](https://l.bentoml.com/contact-us-llm-inference-handbook)
+</div>
 
 <LinkList>
 ## Additional resources
 * [Prompt Cache: Modular Attention Reuse for Low-Latency Inference](https://arxiv.org/abs/2311.04934)
 * [Prompt Caching in Claude](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)
+* [Design Around the KV-Cache](https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus)
 </LinkList>
