Acceptable latency depends on the use case. For example, a chatbot might require a TTFT under 500 milliseconds to feel responsive, while a code completion tool may need a TTFT below 100 milliseconds for a seamless developer experience. In contrast, if you're generating long reports that are reviewed once a day, even a 30-second total latency may be perfectly acceptable. The key is to match latency targets to the pace and expectations of the task at hand.
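
If you're unsure where you stand, it helps to measure TTFT and total latency directly against your own serving stack. Below is a minimal Python sketch of that measurement; `stream_tokens` is a hypothetical placeholder for whatever streaming call your client library exposes.

```python
import time

def measure_latency(prompt: str, stream_tokens):
    """Measure time-to-first-token (TTFT) and total latency for one request.

    `stream_tokens` is a placeholder: any callable that streams tokens
    (or chunks) back from your model server as they are generated.
    """
    start = time.perf_counter()
    ttft = None
    n_tokens = 0
    for _ in stream_tokens(prompt):
        if ttft is None:
            ttft = time.perf_counter() - start  # first token arrived
        n_tokens += 1
    total = time.perf_counter() - start
    return ttft, total, n_tokens
```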
docs/inference-optimization/speculative-decoding.md
import LinkList from '@site/src/components/LinkList';

# Speculative decoding
LLMs are powerful, but their text generation is slow. The main bottleneck lies in auto-regressive decoding, where each token is generated one at a time. This sequential loop leads to high latency, as each step depends on the previous token. Additionally, while GPUs are optimized for parallelism, this sequential nature leads to underutilized compute resources during inference.
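
To see why this is slow, here is a minimal sketch of the sequential loop, assuming a Hugging Face-style `model` whose forward pass returns `logits`; every new token costs a full forward pass that cannot start until the previous token exists.

```python
import torch

@torch.no_grad()
def greedy_decode(model, input_ids, max_new_tokens=64):
    """Plain auto-regressive decoding: one forward pass per generated token."""
    for _ in range(max_new_tokens):
        logits = model(input_ids).logits                         # full forward pass
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)   # strictly sequential
    return input_ids
```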
What if you could parallelize parts of the generation process, even if not all of it?

That's where speculative decoding comes in.
## What is speculative decoding?
Speculative decoding is an inference-time optimization that combines two models:

- **Draft model:** A smaller, faster model (such as a distilled version of the target model) proposes a draft sequence of tokens. A core driver behind this approach is that some tokens are easier to predict than others and can be handled well by a smaller model.
- **Target model:** The original, larger model verifies the draft's tokens in a single pass and decides which to accept.

The draft model delivers fast guesses, and the target model ensures accuracy. This shifts the generation loop from purely sequential to partially parallel, improving hardware utilization and reducing latency.
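
For example, Hugging Face Transformers exposes this pairing through assisted generation, where the draft model is passed to `generate()` as `assistant_model`. The checkpoints below are placeholders; the draft and target must share a tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints: any target/draft pair with a shared tokenizer works.
target_name = "meta-llama/Llama-3.1-8B-Instruct"
draft_name = "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_name, device_map="auto")

inputs = tokenizer("Speculative decoding works by", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```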
Two key metrics in speculative decoding:

- **Acceptance rate:** The fraction of draft tokens accepted by the target model. A low acceptance rate limits the speedup and can become a major bottleneck.
- **Speculative token count:** The number of tokens the draft model proposes at each step. Most inference frameworks let you configure this value when speculative decoding is enabled, as shown in the sketch below.
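
As an illustration, here is roughly how the speculative token count is set when enabling speculative decoding in vLLM. The exact field names differ across vLLM versions, so treat this configuration as an assumption to check against the documentation for your release; the model names are placeholders.

```python
from vllm import LLM, SamplingParams

# Assumed vLLM-style configuration: a small draft model proposes 5 tokens per step.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",         # target model
    speculative_config={
        "model": "meta-llama/Llama-3.2-1B-Instruct",  # draft model
        "num_speculative_tokens": 5,                  # speculative token count
    },
)

outputs = llm.generate(["Speculative decoding works by"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```

Acceptance rate isn't set directly; it emerges from how well the draft model matches the target, and serving engines typically report it in their metrics.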
## How it works

Here's the step-by-step process:
1. The draft model proposes a short sequence of candidate tokens (up to the configured speculative token count).
2. The target model runs a single forward pass over the prompt plus the candidates and verifies them all in parallel.
3. Candidates that match what the target model would have generated are accepted; the first mismatch and everything after it is discarded.
4. The target model supplies the token at the first rejected position (or one extra token if everything was accepted), and the loop repeats from there (see the sketch below).
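
Below is a minimal sketch of one such propose-and-verify step for greedy decoding, assuming Hugging Face-style `draft_model` and `target_model` objects (batch size 1, no KV caching); real implementations add rejection sampling for non-greedy decoding, KV-cache reuse, and batching.

```python
import torch

@torch.no_grad()
def speculative_step(target_model, draft_model, input_ids, k=5):
    """One propose-and-verify step of greedy speculative decoding (batch size 1)."""
    # 1) Draft model proposes k tokens, one at a time (cheap but sequential).
    draft_ids = input_ids
    for _ in range(k):
        logits = draft_model(draft_ids).logits
        next_tok = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        draft_ids = torch.cat([draft_ids, next_tok], dim=-1)
    proposed = draft_ids[:, input_ids.shape[1]:]                  # the k draft tokens

    # 2) Target model scores the whole extended sequence in ONE forward pass.
    target_logits = target_model(draft_ids).logits
    # Target's greedy choice at each position covered by a draft token.
    target_pred = target_logits[:, input_ids.shape[1] - 1 : -1, :].argmax(dim=-1)

    # 3) Accept the longest prefix on which draft and target agree.
    agree = (proposed == target_pred)[0].long()
    n_accept = int(agree.cumprod(dim=0).sum())

    # 4) Append the accepted tokens plus one token from the target model:
    #    its correction at the first mismatch, or a bonus token if all matched.
    accepted = proposed[:, :n_accept]
    extra = target_logits[:, input_ids.shape[1] - 1 + n_accept, :].argmax(dim=-1, keepdim=True)
    return torch.cat([input_ids, accepted, extra], dim=-1)
```

Each call advances the sequence by between one and k + 1 tokens while running the target model only once, which is where the latency win comes from.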
## Benefits and limitations
Key benefits of speculative decoding include:

- **Parallel verification:** Verifying draft tokens doesn't depend on previous verifications, so the target model can check them in a single pass rather than one at a time.
- **High acceptance for easy tokens:** The draft model can often get the next few tokens correct, which speeds up generation.
- **Better use of hardware:** Sequential decoding is typically limited by memory bandwidth rather than compute, so verification uses compute capacity that would otherwise sit idle and overall throughput improves.

However, speculative decoding has its own costs:

- **Increased memory usage:** Both the draft model and the target model need to be loaded into memory, which increases overall VRAM usage. This reduces the memory available for other tasks (e.g., batch processing), which can limit throughput, especially under high load or when serving large models.
- **Wasted compute on rejection:** If many draft tokens are rejected (a low acceptance rate), compute is wasted on both drafting and verification, as the estimate below illustrates.
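
To get a feel for how strongly the acceptance rate drives this trade-off, here is a back-of-the-envelope estimate based on the idealized analysis in the speculative decoding literature (Leviathan et al., 2023): if each of the k draft tokens is accepted independently with probability alpha, one target-model pass yields (1 - alpha^(k+1)) / (1 - alpha) tokens on average.

```python
def expected_tokens_per_target_pass(alpha: float, k: int) -> float:
    """Expected tokens generated per target forward pass, assuming each of the
    k draft tokens is accepted independently with probability alpha."""
    if alpha >= 1.0:
        return float(k + 1)
    return (1 - alpha ** (k + 1)) / (1 - alpha)

for alpha in (0.9, 0.7, 0.4):
    print(f"alpha={alpha}: {expected_tokens_per_target_pass(alpha, k=5):.2f} tokens/pass")
# alpha=0.9: 4.69, alpha=0.7: 2.94, alpha=0.4: 1.66
```

At a 0.9 acceptance rate a 5-token draft eliminates most target passes, while at 0.4 the extra drafting work buys very little.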
<LinkList>
## Additional resources
* [Looking back at speculative decoding](https://research.google/blog/looking-back-at-speculative-decoding/)