
Commit f0bfff8

Revert "Fix ut mla-test-1-gpu-amd (sgl-project#4813)"
This reverts commit 668ecc6.
1 parent: 1c63e79

File tree: 2 files changed (0 additions, 13 deletions)


.github/workflows/pr-test-amd.yml
Lines changed: 0 additions & 1 deletion

@@ -89,7 +89,6 @@ jobs:
 docker exec ci_sglang pip uninstall sgl-kernel -y || true
 docker exec -w /sglang-checkout/sgl-kernel ci_sglang bash -c "rm -f pyproject.toml && mv pyproject_rocm.toml pyproject.toml && python3 setup_rocm.py install"
 docker exec ci_sglang pip install -e "python[dev_hip]"
-docker exec ci_sglang pip install py-spy || true

 docker exec -w / ci_sglang git clone https://github.com/merrymercy/human-eval.git
 docker exec -w /human-eval ci_sglang pip install -e .

python/sglang/srt/layers/rotary_embedding.py
Lines changed: 0 additions & 12 deletions

@@ -645,18 +645,6 @@ def _compute_cos_sin_cache(self) -> torch.Tensor:
         cache = torch.cat((cos, sin), dim=-1)
         return cache

-    def forward(
-        self,
-        positions: torch.Tensor,
-        query: torch.Tensor,
-        key: torch.Tensor,
-        offsets: Optional[torch.Tensor] = None,
-    ) -> Tuple[torch.Tensor, torch.Tensor]:
-        if _is_cuda_available:
-            return self.forward_cuda(positions, query, key, offsets)
-        else:
-            return self.forward_native(positions, query, key, offsets)
-
     def forward_native(
         self,
         positions: torch.Tensor,
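
The hunk above removes a `forward()` override that routed calls to either a CUDA-backed or a pure-Python ("native") implementation depending on an `_is_cuda_available` flag. A minimal, torch-free sketch of that dispatch pattern follows; the class name `RotaryEmbeddingSketch` and the placeholder method bodies are hypothetical illustrations, not sglang's actual implementation:

```python
from typing import Any, Optional, Tuple

# Hypothetical stand-in for sglang's module-level _is_cuda_available flag.
_is_cuda_available = False


class RotaryEmbeddingSketch:
    """Illustrates the device-dispatch forward() that this commit reverts."""

    def forward(
        self,
        positions: Any,
        query: Any,
        key: Any,
        offsets: Optional[Any] = None,
    ) -> Tuple[Any, Any]:
        # Route to the CUDA path when available, else the native path.
        if _is_cuda_available:
            return self.forward_cuda(positions, query, key, offsets)
        return self.forward_native(positions, query, key, offsets)

    def forward_native(self, positions, query, key, offsets=None):
        # Placeholder: the real method applies the rotary cos/sin cache.
        return query, key

    def forward_cuda(self, positions, query, key, offsets=None):
        # Placeholder for the fused CUDA implementation.
        return query, key
```

With the override reverted, `forward()` instead falls back to whatever the parent class defines, so the CUDA/native selection shown here no longer happens in this subclass.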

0 commit comments
