
Commit b258be1

sleepcoo authored and tarinkk committed

[Feat] upgrade pytorch2.6 (sgl-project#5417)

1 parent 95fb67a

File tree: 7 files changed (+8, −8 lines)

.github/workflows/pr-test-sgl-kernel.yml

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ jobs:
       - name: Install
         run: |
           bash scripts/ci_install_dependency.sh
-          pip3 install torch==2.5.1 && pip3 install pytest
+          pip3 install torch==2.6.0 && pip3 install pytest
           pip3 uninstall sgl-kernel -y || true
           pip3 install sgl-kernel/dist/*whl --force-reinstall --no-deps
           pip3 list | grep sgl-kernel
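The last step greps `pip3 list` to confirm the forced reinstall of `sgl-kernel` actually took. The same check can be done in-process with the standard library (a sketch; the probe package name below is made up):

```python
from importlib import metadata
from typing import Optional

def installed_version(pkg: str) -> Optional[str]:
    """Return the installed version of pkg, or None if it is absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

# A package that does not exist reports None rather than raising.
print(installed_version("definitely-not-installed-xyz"))  # -> None
```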

benchmark/deepseek_v3/README.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Add [performance optimization options](#performance-optimization-options) as nee

 ```bash
 # Installation
-pip install "sglang[all]>=0.4.3" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
+pip install "sglang[all]>=0.4.5.post2"

 # Launch
 python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3 --tp 8 --trust-remote-code

docker/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -43,6 +43,6 @@ RUN python3 -m pip install --upgrade pip setuptools wheel html5lib six \
     fi \
     && python3 -m pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu${CUINDEX} \
     && cd sglang \
-    && python3 -m pip --no-cache-dir install -e "python[${BUILD_TYPE}]" --find-links https://flashinfer.ai/whl/cu${CUINDEX}/torch2.5/flashinfer-python
+    && python3 -m pip --no-cache-dir install -e "python[${BUILD_TYPE}]" --find-links https://flashinfer.ai/whl/cu${CUINDEX}/torch2.6/flashinfer-python

 ENV DEBIAN_FRONTEND=interactive

docs/start/install.md

Lines changed: 1 addition & 1 deletion
@@ -164,4 +164,4 @@ sky status --endpoint 30000 sglang
 - [FlashInfer](https://github.com/flashinfer-ai/flashinfer) is the default attention kernel backend. It only supports sm75 and above. If you encounter any FlashInfer-related issues on sm75+ devices (e.g., T4, A10, A100, L4, L40S, H100), please switch to other kernels by adding `--attention-backend triton --sampling-backend pytorch` and open an issue on GitHub.
 - If you only need to use OpenAI models with the frontend language, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.
 - The language frontend operates independently of the backend runtime. You can install the frontend locally without needing a GPU, while the backend can be set up on a GPU-enabled machine. To install the frontend, run `pip install sglang`, and for the backend, use `pip install sglang[srt]`. `srt` is the abbreviation of SGLang runtime.
-- To reinstall flashinfer locally, use the following command: `pip install "flashinfer-python>=0.2.3" -i https://flashinfer.ai/whl/cu124/torch2.5 --force-reinstall --no-deps` and then delete the cache with `rm -rf ~/.cache/flashinfer`.
+- To reinstall flashinfer locally, use the following command: `pip install "flashinfer-python==0.2.3" -i https://flashinfer.ai/whl/cu124/torch2.6 --force-reinstall --no-deps` and then delete the cache with `rm -rf ~/.cache/flashinfer`.
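The FlashInfer wheel index path encodes both the CUDA version and the torch version, which is why it has to be bumped whenever either pin moves. A small helper sketching that URL scheme (the pattern is inferred from the URLs touched in this commit; treat it as an assumption):

```python
def flashinfer_index_url(cuda: str, torch_mm: str) -> str:
    """Build the FlashInfer wheel index URL for a CUDA/torch pair,
    e.g. cuda="124", torch_mm="2.6" (major.minor only)."""
    return f"https://flashinfer.ai/whl/cu{cuda}/torch{torch_mm}"

print(flashinfer_index_url("124", "2.6"))
# -> https://flashinfer.ai/whl/cu124/torch2.6
```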

python/pyproject.toml

Lines changed: 2 additions & 2 deletions
@@ -49,8 +49,8 @@ srt = [
   "sglang[runtime_common]",
   "sgl-kernel==0.0.9.post2",
   "flashinfer_python==0.2.3",
-  "torch==2.5.1",
-  "torchvision==0.20.1",
+  "torch==2.6.0",
+  "torchvision==0.21.0",
   "cuda-python",
   "outlines>=0.0.44,<=0.1.11",
   "partial_json_parser",
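Note that the `torch` and `torchvision` pins move together: torchvision 0.21.0 is the release built against torch 2.6.0, just as 0.20.1 paired with 2.5.1. A minimal sketch of that pairing rule (the table is an assumption drawn from the upstream release history, not from this repo):

```python
# Known torch -> torchvision release pairings (assumed from upstream
# release notes; extend as new versions ship).
TORCH_TORCHVISION_PAIRS = {
    "2.5.1": "0.20.1",
    "2.6.0": "0.21.0",
}

def compatible_torchvision(torch_version: str) -> str:
    """Return the torchvision release paired with a given torch release."""
    try:
        return TORCH_TORCHVISION_PAIRS[torch_version]
    except KeyError:
        raise ValueError(f"no known torchvision pairing for torch {torch_version}")

print(compatible_torchvision("2.6.0"))  # -> 0.21.0
```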

python/sglang/srt/layers/dp_attention.py

Lines changed: 1 addition & 1 deletion
@@ -143,7 +143,7 @@ def memcpy_triton_kernel(
     src_ptr,
     offset_ptr,
     sz_ptr,
-    offset_src,
+    offset_src: tl.constexpr,
     chunk_size,  # multiplied for offset and sz
     BLOCK_SIZE: tl.constexpr,
 ):
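Annotating `offset_src` as `tl.constexpr` tells Triton to treat the flag as a compile-time constant, so the kernel is specialized per constant value and any branch on the flag is resolved at JIT time rather than per-element at run time. A rough pure-Python analogue of that specialization, with hypothetical names and plain lists standing in for device buffers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # one specialized "kernel" per constant value
def make_memcpy(offset_src: bool):
    # The branch on offset_src is decided once, when the function is
    # built, mirroring how Triton folds a tl.constexpr branch at
    # compile time instead of re-evaluating it inside the kernel.
    if offset_src:
        def kernel(src, dst, offset):
            # Offset applies to the source side.
            dst[:] = src[offset:offset + len(dst)]
    else:
        def kernel(src, dst, offset):
            # Offset applies to the destination side.
            dst[offset:offset + len(src)] = src
    return kernel

copy_from_offset = make_memcpy(True)
buf = [0, 0]
copy_from_offset([1, 2, 3, 4], buf, 1)
print(buf)  # -> [2, 3]
```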

scripts/ci_install_dependency.sh

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ pip install -e "python[all]"

 # Install additional dependencies
 pip install torch_memory_saver
-pip install transformers==4.51.0 sentence_transformers accelerate==1.4.0 peft pandas datasets timm torchaudio
+pip install transformers==4.51.0 sentence_transformers accelerate peft pandas datasets timm torchaudio

 # For compling xgrammar kernels
 pip install cuda-python nvidia-cuda-nvrtc-cu12
