Support token-level quantization for EP MoE #6782
Conversation
Hello @ch-wan, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here to provide a summary of this pull request. This PR introduces support for token-level quantization within the Expert Parallel (EP) implementation of Mixture-of-Experts (MoE) layers. The primary goal is to enable more granular quantization, specifically for dynamic FP8 activation quantization, by calculating and applying scales on a per-token basis rather than per-expert or globally. This involves modifications to the underlying Triton kernels used for pre-reordering and grouped GEMM operations, as well as updates to the EPMoELayer and GroupedGemmRunner classes to handle and pass these per-token scales.
Highlights
- Token-Level Quantization: Adds the capability to perform quantization, particularly dynamic FP8 activation quantization, at the individual token level for EP MoE layers.
- Triton Kernel Modifications: Updates pre_reorder_triton_kernel and grouped_gemm_triton_kernel to accept and use per-token scales when the use_per_token_if_dynamic flag is enabled.
- Per-Token Scale Calculation: Modifies the EPMoELayer to calculate per-token maximum values for dynamic activation quantization and derive per-token scales (w13_input_scale, w2_input_scale); a short sketch of the per-token scaling idea follows this list.
- Conditional Logic: Introduces a use_per_token_if_dynamic boolean flag throughout the relevant classes and kernels to conditionally enable or disable the per-token quantization behavior.
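To make the distinction concrete, here is a minimal, self-contained sketch of per-tensor versus per-token dynamic FP8 scaling in plain PyTorch. It is illustrative only; the PR's kernels compute the row maximum inside Triton, and the exact reduction may differ.

import torch

fp8_dtype = torch.float8_e4m3fn
fp8_max = torch.finfo(fp8_dtype).max

def per_tensor_scale(x: torch.Tensor) -> torch.Tensor:
    # One scale shared by every token (the pre-existing dynamic path).
    return x.abs().max().to(torch.float32) / fp8_max

def per_token_scale(x: torch.Tensor) -> torch.Tensor:
    # One scale per token (row), which is what use_per_token_if_dynamic enables.
    return x.abs().amax(dim=-1).to(torch.float32) / fp8_max

x = torch.randn(4, 8)             # [num_tokens, hidden_size]
print(per_tensor_scale(x).shape)  # torch.Size([])  -> a single scalar scale
print(per_token_scale(x).shape)   # torch.Size([4]) -> one scale per token

The finer granularity lets each token use the full FP8 range, at the cost of carrying one scale per token through the reorder and grouped-GEMM steps.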
Changelog
- python/sglang/srt/layers/moe/ep_moe/kernels.py
  - Added use_per_token_if_dynamic constant parameter to pre_reorder_triton_kernel (Diff 1).
  - Modified scale loading in pre_reorder_triton_kernel to load the per-token scale if enabled (Diff 2).
  - Added use_per_token_if_dynamic constant parameter to grouped_gemm_triton_kernel (Diff 3).
  - Modified scale_a loading in grouped_gemm_triton_kernel to load the per-token scale if enabled for FP8 (Diff 4).
  - Added use_per_token_if_dynamic parameter to the grouped_gemm_triton function (Diff 5).
  - Added an assertion in grouped_gemm_triton to check the scale shape when per-token is enabled (Diff 6).
  - Passed use_per_token_if_dynamic to the kernel call in grouped_gemm_triton (Diff 7).
- python/sglang/srt/layers/moe/ep_moe/layer.py
  - Imported sglang_per_token_quant_fp8 (Diff 1).
  - Added use_per_token_if_dynamic parameter to the GroupedGemmRunner constructor (Diff 2).
  - Passed use_per_token_if_dynamic to the grouped_gemm_triton call in GroupedGemmRunner.forward (Diff 3).
  - Added use_per_token_if_dynamic parameter to the EPMoELayer constructor (Diff 4, Diff 5).
  - Passed use_per_token_if_dynamic to the GroupedGemmRunner constructor call in EPMoELayer.forward (Diff 6).
  - Modified the w13_input_scale calculation for dynamic quantization to be per-token if enabled (Diff 7).
  - Added logic to reorder and assign the per-token w13_input_scale after pre-reorder if enabled (Diff 8).
  - Modified down_input initialization and w2_input_scale handling for dynamic per-token quantization (Diff 9).
  - Added a call to sglang_per_token_quant_fp8 to quantize down_input and obtain w2_input_scale if dynamic per-token is enabled (Diff 10).
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Quantize by the token,
Speed up the MoE,
Scales for each input,
Faster inference, you see.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces support for token-level quantization in the EP MoE layer, specifically for dynamic activation scaling in FP8. The changes add a new flag, use_per_token_if_dynamic, to control this behavior in both the Python layer logic and the underlying Triton kernels. The implementation correctly branches on this flag to apply per-token scaling where appropriate.
The core logic for calculating and applying per-token scales appears correctly implemented within the diffs. However, the new mode adds complexity to the existing code paths, particularly in the EPMoE.forward method and the Triton kernels. Thorough testing and clear documentation are crucial for maintaining the quality and understandability of these changes.
Overall, this is a valuable feature addition, but there are several areas related to testing, documentation, and code maintainability that should be addressed before merging.
Summary of Findings
- Lack of Unit Tests: The pull request description indicates that unit tests should be added, but the corresponding checklist item is unchecked. New features, especially those involving performance-sensitive kernels and complex logic like per-token quantization, require comprehensive unit tests to ensure correctness and prevent regressions. This is a high-priority item; a minimal example of such a test is sketched after this list.
- Increased Code Complexity: The use_per_token_if_dynamic flag and the associated conditional logic in EPMoE.forward and the Triton kernels significantly increase the complexity of the code, which hurts maintainability and readability. While some complexity is inherent to adding a new quantization mode, consider whether any parts can be refactored or simplified, perhaps by unifying data structures or kernel interfaces where possible.
- Potential Kernel Performance Implications: Conditional memory loads and branching within performance-critical loops in the Triton kernels (pre_reorder_triton_kernel, grouped_gemm_triton_kernel) might have performance implications. While Triton can handle branches, minimizing them in hot loops is generally beneficial. Further profiling and potential kernel optimization may be needed.
- Lack of Documentation: The pull request description indicates that documentation should be updated, but the corresponding checklist item is unchecked. The new use_per_token_if_dynamic flag, its purpose, and how it affects quantization behavior should be clearly documented, both in code comments/docstrings and potentially in user-facing documentation.
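A starting point for such a test, sketched in plain PyTorch (it exercises only the per-token FP8 round trip, not the Triton kernels, and the helper name is illustrative rather than taken from the repository):

import torch

def quant_dequant_per_token(x: torch.Tensor) -> torch.Tensor:
    # Reference per-token FP8 round trip used only for this test sketch.
    fp8 = torch.float8_e4m3fn
    scale = x.abs().amax(dim=-1, keepdim=True).to(torch.float32) / torch.finfo(fp8).max
    q = (x / scale).to(fp8)
    return q.to(torch.float32) * scale

def test_per_token_fp8_round_trip():
    torch.manual_seed(0)
    x = torch.randn(16, 128, dtype=torch.float32)
    x_hat = quant_dequant_per_token(x)
    # FP8 e4m3 carries ~3 mantissa bits, so allow a loose tolerance.
    assert torch.allclose(x, x_hat, rtol=0.1, atol=0.05)

test_per_token_fp8_round_trip()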
Merge Readiness
This pull request introduces a valuable new feature for token-level quantization. However, based on the review, there are high-priority issues related to the lack of unit tests and medium-priority issues concerning increased code complexity and lack of documentation. I recommend addressing these findings before merging. Please ensure comprehensive tests are added to cover the new logic and kernels, and update the documentation to reflect the changes. I am unable to approve this pull request; please have other reviewers assess the changes and approve before merging.
if a1_scales_ptr is not None and use_per_token_if_dynamic:
    scale = 1.0 / tl.load(a1_scales_ptr + src_idx)
This conditional load is placed inside the main loop iterating over topk. While Triton can handle branching, a conditional memory load inside a performance-critical loop like this may hurt performance or increase register pressure compared to loading outside the loop where possible. Could this load be moved outside the loop, perhaps by pre-calculating the scale for all tokens before the kernel launch when use_per_token_if_dynamic is true?
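One possible shape for that suggestion, sketched on the host side (the function name is illustrative and not part of the PR):

import torch

def precompute_reciprocal_scales(a1_scales: torch.Tensor) -> torch.Tensor:
    # Compute 1 / scale once per token on the host, so the kernel can issue a
    # single unconditional load per token instead of a branch plus a division
    # inside the top-k loop.
    return torch.reciprocal(a1_scales.to(torch.float32)).contiguous()

The kernel would then load the reciprocal directly, trading a small amount of extra host-side work for simpler device code.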
 if a1_scales_ptr is not None:
-    scale = 1.0 / tl.load(a1_scales_ptr + expert_id - start_expert_id)
+    if not use_per_token_if_dynamic:
+        scale = 1.0 / tl.load(a1_scales_ptr + expert_id - start_expert_id)
if use_per_token_if_dynamic:
    scale_a_value = tl.load(scale_a + (m_range_start + offs_am[:, None]))
else:
    scale_a_value = tl.load(scale_a + expert_id)
This conditional load of scale_a_value based on use_per_token_if_dynamic adds branching within the post-GEMM scaling logic. While this block is outside the main accumulation loop, it still adds complexity. Is there a way to unify the scale loading or application logic slightly, perhaps by ensuring scale_a always has the expected shape/layout for the chosen mode before calling the kernel?
if use_per_token_if_dynamic:
    assert (
        scale_a.shape[0] == a.shape[0]
    ), f"scale_a.shape: {scale_a.shape}, a.shape: {a.shape}"
This assertion is a good check, but the logic for handling scale_a strides in the kernel call below (lines 734-735) uses ternary operators based on scale_a.ndim. This suggests the scale_a tensor may have different shapes/dimensions depending on whether per-token or per-expert scaling is used, which adds complexity to the kernel interface and the calling code. Could the scale_a tensor always be reshaped or prepared into a consistent format before calling the kernel, simplifying the kernel's stride logic?
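One way to realize that suggestion, with illustrative names (not the PR's actual code), is to normalize scale_a to a flat, contiguous float32 tensor of a known length before launching the kernel:

import torch

def normalize_scale_a(scale_a: torch.Tensor, num_rows: int, num_experts: int) -> torch.Tensor:
    # Hand the kernel a contiguous 1-D float32 tensor in either mode:
    # length num_rows for per-token scaling, length num_experts for per-expert.
    scale_a = scale_a.reshape(-1).to(torch.float32).contiguous()
    assert scale_a.numel() in (num_rows, num_experts), (
        f"unexpected scale_a size {scale_a.numel()}"
    )
    return scale_a

With a single layout in both modes, the ndim-based stride ternaries in the caller could be dropped.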
 if self.activation_scheme == "dynamic" and not self.use_block_quant:
-    max_value = (
-        torch.max(hidden_states)
-        .repeat(self.num_experts_per_partition)
-        .to(torch.float32)
-    )
-    self.w13_input_scale = max_value / torch.finfo(self.fp8_dtype).max
+    if self.use_per_token_if_dynamic:
+        max_value = torch.max(hidden_states, dim=1).values.to(torch.float32)
+        self.w13_input_scale = max_value / torch.finfo(self.fp8_dtype).max
+    else:
+        max_value = (
+            torch.max(hidden_states)
+            .repeat(self.num_experts_per_partition)
+            .to(torch.float32)
+        )
+        self.w13_input_scale = max_value / torch.finfo(self.fp8_dtype).max
if (
    self.activation_scheme == "dynamic"
    and not self.use_block_quant
    and self.use_per_token_if_dynamic
):
    scale = torch.empty(
        hidden_states_shape[0] * self.top_k,
        device=hidden_states_device,
        dtype=torch.float32,
    )
    scale[src2dst] = (
        self.w13_input_scale.unsqueeze(1)
        .expand(hidden_states_shape[0], self.top_k)
        .reshape(-1)
    )
    self.w13_input_scale = scale
This block adds logic to create and populate the w13_input_scale tensor specifically for the per-token dynamic case after the pre_reorder_triton_kernel. It is necessary to align the per-token scales with the reordered tokens using src2dst, but it adds significant complexity to the forward-pass state management. Could this scale preparation be integrated into the pre_reorder_triton_kernel itself, or perhaps handled earlier?
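As a small illustration of what this scatter does (purely illustrative values, not from the PR):

import torch

top_k = 2
per_token_scale = torch.tensor([0.10, 0.20, 0.30])   # one scale per original token
# src2dst[i] gives the destination slot of the i-th (token, expert) pair
src2dst = torch.tensor([4, 1, 0, 3, 5, 2])           # length num_tokens * top_k

reordered = torch.empty(per_token_scale.numel() * top_k)
reordered[src2dst] = per_token_scale.unsqueeze(1).expand(-1, top_k).reshape(-1)
# Token 0's scale (0.10) now sits at slots 4 and 1, token 1's at 0 and 3, etc.
print(reordered)  # tensor([0.2000, 0.1000, 0.3000, 0.2000, 0.1000, 0.3000])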
if self.activation_scheme == "dynamic" and not self.use_block_quant:
    self.w2_input_scale = None
    down_input = torch.empty(
        gateup_output.shape[0],
        gateup_output.shape[1] // 2,
        device=gateup_output.device,
        dtype=hidden_states_dtype,
    )
else:
    down_input = torch.empty(
        gateup_output.shape[0],
        gateup_output.shape[1] // 2,
        device=gateup_output.device,
        dtype=(
            self.fp8_dtype
            if (self.use_fp8_w8a8 and not self.use_block_quant)
            else hidden_states_dtype
        ),
    )
if self.activation_scheme == "dynamic" and not self.use_block_quant:
    if self.use_per_token_if_dynamic:
        down_input, self.w2_input_scale = sglang_per_token_quant_fp8(down_input)
    else:
        self.w2_input_scale = torch.ones(
            self.num_experts_per_partition,
            dtype=torch.float32,
            device=hidden_states_device,
        )
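For reference, a plain-PyTorch stand-in showing the interface this call provides, i.e. an FP8 tensor plus one scale per row (the real sglang_per_token_quant_fp8 is a fused kernel in sgl-kernel; this sketch only illustrates the returned shapes):

import torch

def per_token_quant_fp8_reference(x: torch.Tensor):
    # Returns (quantized activations, per-row scales); a reference stand-in,
    # not the fused kernel used by the PR.
    fp8 = torch.float8_e4m3fn
    amax = x.abs().amax(dim=-1, keepdim=True).to(torch.float32)
    # Guard against all-zero rows before dividing.
    scale = (amax / torch.finfo(fp8).max).clamp(min=torch.finfo(torch.float32).tiny)
    q = (x / scale).to(fp8)
    return q, scale.squeeze(-1)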
Motivation
Modifications
Checklist