
Commit 4e3defe

Support start up LoRA server without initial adapters (sgl-project#8019)
1 parent: 60468da

File tree: 12 files changed (+235 −140 lines)

docs/backend/lora.ipynb

Lines changed: 67 additions & 94 deletions
```diff
@@ -27,6 +27,8 @@
 "source": [
 "The following server arguments are relevant for multi-LoRA serving:\n",
 "\n",
+"* `enable_lora`: Enable LoRA support for the model. This argument is automatically set to True if `--lora-paths` is provided for backward compatibility.\n",
+"\n",
 "* `lora_paths`: A mapping from each adaptor's name to its path, in the form of `{name}={path} {name}={path}`.\n",
 "\n",
 "* `max_loras_per_batch`: Maximum number of adaptors used by each batch. This argument can affect the amount of GPU memory reserved for multi-LoRA serving, so it should be set to a smaller value when memory is scarce. Defaults to be 8.\n",
@@ -35,7 +37,7 @@
 "\n",
 "* `max_lora_rank`: The maximum LoRA rank that should be supported. If not specified, it will be automatically inferred from the adapters provided in `--lora-paths`. This argument is needed when you expect to dynamically load adapters of larger LoRA rank after server startup.\n",
 "\n",
-"* `lora_target_modules`: The union set of all target modules where LoRA should be applied (e.g., `q_proj`, `k_proj`, `gate_proj`). If not specified, it will be automatically inferred from the adapters provided in `--lora-paths`. This argument is needed when you expect to dynamically load adapters of different target modules after server startup.\n",
+"* `lora_target_modules`: The union set of all target modules where LoRA should be applied (e.g., `q_proj`, `k_proj`, `gate_proj`). If not specified, it will be automatically inferred from the adapters provided in `--lora-paths`. This argument is needed when you expect to dynamically load adapters of different target modules after server startup. You can also set it to `all` to enable LoRA for all supported modules. However, enabling LoRA on additional modules introduces a minor performance overhead. If your application is performance-sensitive, we recommend only specifying the modules for which you plan to load adapters.\n",
 "\n",
 "* `tp_size`: LoRA serving along with Tensor Parallelism is supported by SGLang. `tp_size` controls the number of GPUs for tensor parallelism. More details on the tensor sharding strategy can be found in [S-Lora](https://arxiv.org/pdf/2311.03285) paper.\n",
 "\n",
@@ -79,6 +81,7 @@
 "server_process, port = launch_server_cmd(\n",
 " \"\"\"\n",
 "python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
+" --enable-lora \\\n",
 " --lora-paths lora0=algoprog/fact-generation-llama-3.1-8b-instruct-lora \\\n",
 " --max-loras-per-batch 1 --lora-backend triton \\\n",
 " --disable-radix-cache\n",
@@ -98,7 +101,7 @@
 "json_data = {\n",
 " \"text\": [\n",
 " \"List 3 countries and their capitals.\",\n",
-" \"AI is a field of computer science focused on\",\n",
+" \"List 3 countries and their capitals.\",\n",
 " ],\n",
 " \"sampling_params\": {\"max_new_tokens\": 32, \"temperature\": 0},\n",
 " # The first input uses lora0, and the second input uses the base model\n",
@@ -137,6 +140,7 @@
 "server_process, port = launch_server_cmd(\n",
 " \"\"\"\n",
 "python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
+" --enable-lora \\\n",
 " --lora-paths lora0=algoprog/fact-generation-llama-3.1-8b-instruct-lora \\\n",
 " lora1=Nutanix/Meta-Llama-3.1-8B-Instruct_lora_4_alpha_16 \\\n",
 " --max-loras-per-batch 2 --lora-backend triton \\\n",
@@ -157,7 +161,7 @@
 "json_data = {\n",
 " \"text\": [\n",
 " \"List 3 countries and their capitals.\",\n",
-" \"AI is a field of computer science focused on\",\n",
+" \"List 3 countries and their capitals.\",\n",
 " ],\n",
 " \"sampling_params\": {\"max_new_tokens\": 32, \"temperature\": 0},\n",
 " # The first input uses lora0, and the second input uses lora1\n",
@@ -191,11 +195,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Basic Usage\n",
-"\n",
 "Instead of specifying all adapters during server startup via `--lora-paths`. You can also load & unload LoRA adapters dynamically via the `/load_lora_adapter` and `/unload_lora_adapter` API.\n",
 "\n",
-"(Please note that, currently we still require you to specify at least one adapter in `--lora-paths` to enable the LoRA feature, this limitation will be lifted soon.)"
+"When using dynamic LoRA loading, it's recommended to explicitly specify both `--max-lora-rank` and `--lora-target-modules` at startup. For backward compatibility, SGLang will infer these values from `--lora-paths` if they are not explicitly provided. However, in that case, you would have to ensure that all dynamically loaded adapters share the same shape (rank and target modules) as those in the initial `--lora-paths` or are strictly \"smaller\"."
 ]
 },
 {
@@ -204,20 +206,36 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"lora0 = \"Nutanix/Meta-Llama-3.1-8B-Instruct_lora_4_alpha_16\" # rank - 4, target modules - q_proj, k_proj, v_proj, o_proj, gate_proj\n",
+"lora1 = \"algoprog/fact-generation-llama-3.1-8b-instruct-lora\" # rank - 64, target modules - q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj\n",
+"lora0_new = \"philschmid/code-llama-3-1-8b-text-to-sql-lora\" # rank - 256, target modules - q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj\n",
+"\n",
+"\n",
+"# The `--target-lora-modules` param below is technically not needed, as the server will infer it from lora0 which already has all the target modules specified.\n",
+"# We are adding it here just to demonstrate usage.\n",
 "server_process, port = launch_server_cmd(\n",
 " \"\"\"\n",
 " python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
-" --lora-paths lora0=philschmid/code-llama-3-1-8b-text-to-sql-lora \\\n",
+" --enable-lora \\\n",
 " --cuda-graph-max-bs 2 \\\n",
 " --max-loras-per-batch 2 --lora-backend triton \\\n",
 " --disable-radix-cache\n",
+" --max-lora-rank 256\n",
+" --lora-target-modules all\n",
 " \"\"\"\n",
 ")\n",
 "\n",
 "url = f\"http://127.0.0.1:{port}\"\n",
 "wait_for_server(url)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Load adapter lora0"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -227,8 +245,8 @@
 "response = requests.post(\n",
 " url + \"/load_lora_adapter\",\n",
 " json={\n",
-" \"lora_name\": \"lora1\",\n",
-" \"lora_path\": \"Nutanix/Meta-Llama-3.1-8B-Instruct_lora_4_alpha_16\",\n",
+" \"lora_name\": \"lora0\",\n",
+" \"lora_path\": lora0,\n",
 " },\n",
 ")\n",
 "\n",
@@ -239,38 +257,10 @@
 ]
 },
 {
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"response = requests.post(\n",
-" url + \"/generate\",\n",
-" json={\n",
-" \"text\": [\n",
-" \"List 3 countries and their capitals.\",\n",
-" \"List 3 countries and their capitals.\",\n",
-" ],\n",
-" \"sampling_params\": {\"max_new_tokens\": 32, \"temperature\": 0},\n",
-" \"lora_path\": [\"lora0\", \"lora1\"],\n",
-" },\n",
-")\n",
-"print(f\"Output from lora0: {response.json()[0]['text']}\")\n",
-"print(f\"Output from lora1: {response.json()[1]['text']}\")"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
+"cell_type": "markdown",
 "metadata": {},
-"outputs": [],
 "source": [
-"response = requests.post(\n",
-" url + \"/unload_lora_adapter\",\n",
-" json={\n",
-" \"lora_name\": \"lora0\",\n",
-" },\n",
-")"
+"Load adapter lora1:"
 ]
 },
 {
@@ -282,8 +272,8 @@
 "response = requests.post(\n",
 " url + \"/load_lora_adapter\",\n",
 " json={\n",
-" \"lora_name\": \"lora2\",\n",
-" \"lora_path\": \"pbevan11/llama-3.1-8b-ocr-correction\",\n",
+" \"lora_name\": \"lora1\",\n",
+" \"lora_path\": lora1,\n",
 " },\n",
 ")\n",
 "\n",
@@ -294,24 +284,10 @@
 ]
 },
 {
-"cell_type": "code",
-"execution_count": null,
+"cell_type": "markdown",
 "metadata": {},
-"outputs": [],
 "source": [
-"response = requests.post(\n",
-" url + \"/generate\",\n",
-" json={\n",
-" \"text\": [\n",
-" \"List 3 countries and their capitals.\",\n",
-" \"List 3 countries and their capitals.\",\n",
-" ],\n",
-" \"sampling_params\": {\"max_new_tokens\": 32, \"temperature\": 0},\n",
-" \"lora_path\": [\"lora1\", \"lora2\"],\n",
-" },\n",
-")\n",
-"print(f\"Output from lora1: {response.json()[0]['text']}\")\n",
-"print(f\"Output from lora2: {response.json()[1]['text']}\")"
+"Check inference output:"
 ]
 },
 {
@@ -320,18 +296,29 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"terminate_process(server_process)"
+"url = f\"http://127.0.0.1:{port}\"\n",
+"json_data = {\n",
+" \"text\": [\n",
+" \"List 3 countries and their capitals.\",\n",
+" \"List 3 countries and their capitals.\",\n",
+" ],\n",
+" \"sampling_params\": {\"max_new_tokens\": 32, \"temperature\": 0},\n",
+" # The first input uses lora0, and the second input uses lora1\n",
+" \"lora_path\": [\"lora0\", \"lora1\"],\n",
+"}\n",
+"response = requests.post(\n",
+" url + \"/generate\",\n",
+" json=json_data,\n",
+")\n",
+"print(f\"Output from lora0: \\n{response.json()[0]['text']}\\n\")\n",
+"print(f\"Output from lora1 (updated): \\n{response.json()[1]['text']}\\n\")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Advanced: hosting adapters of different shapes\n",
-"\n",
-"In some cases, you may want to load LoRA adapters with different ranks or target modules (e.g., `q_proj`, `k_proj`) simultaneously. To ensure the server can accommodate all expected LoRA shapes, it's recommended to explicitly specify `--max-lora-rank` and/or `--lora-target-modules` at startup.\n",
-"\n",
-"For backward compatibility, SGLang will infer these values from `--lora-paths` if they are not explicitly provided. This means it's safe to omit them **only if** all dynamically loaded adapters share the same shape (rank and target modules) as those in the initial `--lora-paths` or are strictly \"smaller\"."
+"Unload lora0 and replace it with a different adapter:"
 ]
 },
 {
@@ -340,39 +327,18 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"lora0 = \"Nutanix/Meta-Llama-3.1-8B-Instruct_lora_4_alpha_16\" # rank - 4, target modules - q_proj, k_proj, v_proj, o_proj, gate_proj\n",
-"lora1 = \"algoprog/fact-generation-llama-3.1-8b-instruct-lora\" # rank - 64, target modules - q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj\n",
-"\n",
-"\n",
-"# The `--target-lora-modules` param below is technically not needed, as the server will infer it from lora0 which already has all the target modules specified.\n",
-"# We are adding it here just to demonstrate usage.\n",
-"server_process, port = launch_server_cmd(\n",
-" f\"\"\"\n",
-" python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
-" --lora-paths lora0={lora0} \\\n",
-" --cuda-graph-max-bs 2 \\\n",
-" --max-loras-per-batch 2 --lora-backend triton \\\n",
-" --disable-radix-cache\n",
-" --max-lora-rank 64\n",
-" --lora-target-modules q_proj k_proj v_proj o_proj down_proj up_proj gate_proj\n",
-" \"\"\"\n",
+"response = requests.post(\n",
+" url + \"/unload_lora_adapter\",\n",
+" json={\n",
+" \"lora_name\": \"lora0\",\n",
+" },\n",
 ")\n",
 "\n",
-"url = f\"http://127.0.0.1:{port}\"\n",
-"wait_for_server(url)"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
 "response = requests.post(\n",
 " url + \"/load_lora_adapter\",\n",
 " json={\n",
-" \"lora_name\": \"lora1\",\n",
-" \"lora_path\": lora1,\n",
+" \"lora_name\": \"lora0\",\n",
+" \"lora_path\": lora0_new,\n",
 " },\n",
 ")\n",
 "\n",
@@ -382,6 +348,13 @@
 " print(\"Failed to load LoRA adapter.\", response.json())"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Check output again:"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -392,7 +365,7 @@
 "json_data = {\n",
 " \"text\": [\n",
 " \"List 3 countries and their capitals.\",\n",
-" \"AI is a field of computer science focused on\",\n",
+" \"List 3 countries and their capitals.\",\n",
 " ],\n",
 " \"sampling_params\": {\"max_new_tokens\": 32, \"temperature\": 0},\n",
 " # The first input uses lora0, and the second input uses lora1\n",
@@ -402,8 +375,8 @@
 " url + \"/generate\",\n",
 " json=json_data,\n",
 ")\n",
-"print(f\"Output from lora0: {response.json()[0]['text']}\")\n",
-"print(f\"Output from lora1: {response.json()[1]['text']}\")"
+"print(f\"Output from lora0: \\n{response.json()[0]['text']}\\n\")\n",
+"print(f\"Output from lora1 (updated): \\n{response.json()[1]['text']}\\n\")"
 ]
 },
 {
```
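The notebook changes above reduce the dynamic-adapter workflow to three REST calls. The sketch below builds just the request payloads for those endpoints (illustrative only: the helper names are not from SGLang, and `url` / a running server are assumed when you actually POST them):

```python
# Hypothetical payload builders mirroring the notebook's REST calls against a
# SGLang server started with `--enable-lora` (no initial adapters required).

lora0_new = "philschmid/code-llama-3-1-8b-text-to-sql-lora"  # rank-256 adapter from the notebook


def load_adapter_payload(name: str, path: str) -> dict:
    """Body for POST {url}/load_lora_adapter."""
    return {"lora_name": name, "lora_path": path}


def unload_adapter_payload(name: str) -> dict:
    """Body for POST {url}/unload_lora_adapter."""
    return {"lora_name": name}


def generate_payload(prompts: list, adapters: list) -> dict:
    """Body for POST {url}/generate; adapters[i] applies to prompts[i] (None = base model)."""
    return {
        "text": prompts,
        "sampling_params": {"max_new_tokens": 32, "temperature": 0},
        "lora_path": adapters,
    }


# Replace lora0 in place: unload it, load a different adapter under the same
# name, then generate with the updated adapter.
steps = [
    ("/unload_lora_adapter", unload_adapter_payload("lora0")),
    ("/load_lora_adapter", load_adapter_payload("lora0", lora0_new)),
    ("/generate", generate_payload(["List 3 countries and their capitals."], ["lora0"])),
]
for endpoint, payload in steps:
    print(endpoint, sorted(payload))
```

Each payload would be sent with `requests.post(url + endpoint, json=payload)`, exactly as the notebook cells do.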

docs/backend/server_arguments.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -176,8 +176,9 @@ Please consult the documentation below and [server_args.py](https://github.com/s
 
 | Arguments | Description | Defaults |
 |-----------|-------------|----------|
+| `--enable-lora` | Enable LoRA support for the model. This argument is automatically set to True if `--lora-paths` is provided for backward compatibility. | False |
 | `--max-lora-rank` | The maximum LoRA rank that should be supported. If not specified, it will be automatically inferred from the adapters provided in `--lora-paths`. This argument is needed when you expect to dynamically load adapters of larger LoRA rank after server startup. | None |
-| `--lora-target-modules` | The union set of all target modules where LoRA should be applied (e.g., `q_proj`, `k_proj`, `gate_proj`). If not specified, it will be automatically inferred from the adapters provided in `--lora-paths`. This argument is needed when you expect to dynamically load adapters of different target modules after server startup. | None |
+| `--lora-target-modules` | The union set of all target modules where LoRA should be applied (e.g., `q_proj`, `k_proj`, `gate_proj`). If not specified, it will be automatically inferred from the adapters provided in `--lora-paths`. This argument is needed when you expect to dynamically load adapters of different target modules after server startup. You can also set it to `all` to enable LoRA for all supported modules. However, enabling LoRA on additional modules introduces a minor performance overhead. If your application is performance-sensitive, we recommend only specifying the modules for which you plan to load adapters. | None |
 | `--lora-paths` | The list of LoRA adapters. You can provide a list of either path in str or renamed path in the format {name}={path}. | None |
 | `--max-loras-per-batch` | Maximum number of adapters for a running batch, include base-only request. | 8 |
 | `--lora-backend` | Choose the kernel backend for multi-LoRA serving. | triton |
```
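The backward-compatibility rule for the new `--enable-lora` flag can be sketched as a tiny resolver (illustrative only; the real logic lives in SGLang's server-argument handling):

```python
from typing import Dict, Optional


def resolve_enable_lora(enable_lora: Optional[bool],
                        lora_paths: Optional[Dict[str, str]]) -> bool:
    """Mirror the documented rule: --enable-lora defaults to False, but is
    automatically turned on when --lora-paths is provided."""
    if enable_lora:
        return True
    # Backward compatibility: providing initial adapters implies LoRA support.
    return bool(lora_paths)


# New startup mode: LoRA enabled with no initial adapters.
print(resolve_enable_lora(True, None))            # True
# Legacy mode: adapters provided, flag omitted.
print(resolve_enable_lora(None, {"lora0": "algoprog/fact-generation-llama-3.1-8b-instruct-lora"}))  # True
# LoRA fully disabled.
print(resolve_enable_lora(None, None))            # False
```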

python/sglang/srt/lora/lora_manager.py

Lines changed: 3 additions & 3 deletions
```diff
@@ -186,9 +186,9 @@ def validate_new_adapter(self, lora_name: str, lora_config: LoRAConfig):
         )
         if incompatible:
             raise ValueError(
-                f"LoRA adapter {lora_name} with rank {lora_config.r} is incompatible with the current LoRA memory pool configuration."
-                "We are still working on supporting dynamically updating LoRA shapes. If you expect to use adapters of different shapes, "
-                "You can specify expected configs via --max_lora_rank and --enable_lora_modules."
+                f"LoRA adapter {lora_name} with rank {lora_config.r} is incompatible with the current LoRA memory pool configuration. "
+                "Please ensure that the LoRA adapter's rank is within the configured `--max_lora_rank` and that the target modules are "
+                "included in `--enable_lora_modules`."
             )
 
     def unload_lora_adapter(self, lora_name: str) -> LoRAUpdateResult:
```
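The shape check behind this reworded error can be sketched as a pure predicate (a hypothetical helper, not SGLang's API; the real `validate_new_adapter` also inspects the live memory pool):

```python
from typing import Set


def is_compatible(adapter_rank: int, adapter_modules: Set[str],
                  max_lora_rank: int, enabled_modules: Set[str]) -> bool:
    """A dynamically loaded adapter fits the pre-allocated LoRA memory pool only
    if its rank is within max_lora_rank and its target modules are a subset of
    the modules LoRA was enabled for at startup."""
    return adapter_rank <= max_lora_rank and adapter_modules <= enabled_modules


# A rank-4 adapter targeting attention projections fits a pool sized for rank 64.
print(is_compatible(4, {"q_proj", "k_proj"}, 64, {"q_proj", "k_proj", "v_proj", "o_proj"}))
# A rank-256 adapter does not fit unless `--max-lora-rank 256` was set at startup.
print(is_compatible(256, {"q_proj"}, 64, {"q_proj"}))
```

This is why the docs recommend setting `--max-lora-rank` and `--lora-target-modules` explicitly when adapters of varying shapes will be loaded after startup.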

python/sglang/srt/managers/tokenizer_manager.py

Lines changed: 9 additions & 1 deletion
```diff
@@ -574,7 +574,7 @@ def _validate_one_request(
                 "The server is not configured to enable custom logit processor. "
                 "Please set `--enable-custom-logits-processor` to enable this feature."
             )
-        if self.server_args.lora_paths and obj.lora_path:
+        if self.server_args.enable_lora and obj.lora_path:
             self._validate_lora_adapters(obj)
 
     def _validate_input_ids_in_vocab(
@@ -1037,6 +1037,10 @@ async def load_lora_adapter(
         _: Optional[fastapi.Request] = None,
     ) -> LoadLoRAAdapterReqOutput:
         self.auto_create_handle_loop()
+        if not self.server_args.enable_lora:
+            raise ValueError(
+                "LoRA is not enabled. Please set `--enable-lora` to enable LoRA."
+            )
 
         # TODO (lifuhuang): Remove this after we verify that dynamic lora loading works
         # with dp_size > 1.
@@ -1060,6 +1064,10 @@ async def unload_lora_adapter(
         _: Optional[fastapi.Request] = None,
     ) -> UnloadLoRAAdapterReqOutput:
         self.auto_create_handle_loop()
+        if not self.server_args.enable_lora:
+            raise ValueError(
+                "LoRA is not enabled. Please set `--enable-lora` to enable LoRA."
+            )
 
         # TODO (lifuhuang): Remove this after we verify that dynamic lora loading works
         # with dp_size > 1.
```
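Both endpoints now share the same fail-fast pattern, which can be isolated into a standalone guard (the function name here is illustrative; the error text matches the diff):

```python
def require_lora_enabled(enable_lora: bool) -> None:
    """Reject dynamic load/unload requests up front when the server was
    started without --enable-lora, matching the guard added above."""
    if not enable_lora:
        raise ValueError(
            "LoRA is not enabled. Please set `--enable-lora` to enable LoRA."
        )


# A disabled server rejects the request before any adapter work happens.
try:
    require_lora_enabled(False)
except ValueError as e:
    print(e)
```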

python/sglang/srt/model_executor/cuda_graph_runner.py

Lines changed: 5 additions & 6 deletions
```diff
@@ -264,7 +264,7 @@ def __init__(self, model_runner: ModelRunner):
         if self.enable_torch_compile:
             set_torch_compile_config()
 
-        if self.model_runner.server_args.lora_paths is not None:
+        if self.model_runner.server_args.enable_lora:
             self.model_runner.lora_manager.init_cuda_graph_batch_info(self.max_bs)
 
         # Graph inputs
@@ -510,11 +510,10 @@ def capture_one_batch_size(self, bs: int, forward: Callable):
             spec_info.capture_hidden_mode if spec_info else CaptureHiddenMode.NULL
         )
 
-        if self.model_runner.server_args.lora_paths is not None:
-            # Currently, if the lora_path in `lora_paths` is None, the lora backend will use a
-            # different logic to handle lora, so we need to set `lora_paths` to a list of non-None
-            # values if lora is enabled.
-            lora_paths = [next(iter(self.model_runner.server_args.lora_paths))] * bs
+        if self.model_runner.server_args.enable_lora:
+            # It is safe to capture CUDA graph using empty LoRA path, as the LoRA kernels will always be launched whenever
+            # `--enable-lora` is set to True (and return immediately if the LoRA path is empty for perf optimization).
+            lora_paths = [None] * bs
         else:
            lora_paths = None
 
```
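The capture-time change is easiest to see in isolation: previously, graph capture had to borrow a real adapter name from `lora_paths`; now an all-`None` batch is safe because the LoRA kernels always launch under `--enable-lora` and return immediately for empty paths. A minimal sketch (the function is a stand-in, not SGLang's API):

```python
from typing import List, Optional


def capture_lora_paths(enable_lora: bool, bs: int) -> Optional[List[Optional[str]]]:
    """New capture behavior: with LoRA enabled, capture CUDA graphs against a
    batch of empty LoRA paths; with LoRA disabled, there are no per-request
    paths at all."""
    if enable_lora:
        return [None] * bs  # empty paths are safe to capture now
    return None


print(capture_lora_paths(True, 3))   # [None, None, None]
print(capture_lora_paths(False, 3))  # None
```

Decoupling capture from any concrete adapter is what allows the server to start with `--enable-lora` but zero initial adapters.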
python/sglang/srt/model_executor/forward_batch_info.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -418,7 +418,7 @@ def init_new(
         ret._compute_mrope_positions(model_runner, batch)
 
         # Init lora information
-        if model_runner.server_args.lora_paths is not None:
+        if model_runner.server_args.enable_lora:
             model_runner.lora_manager.prepare_lora_batch(ret)
 
         TboForwardBatchPreparer.prepare(
```
