
Add deepseek-r1 1.58bit/2.51bit/4bit quantization model support for various GPU instance types, including g5 (8x, 12x, 16x, 24x, 48x), g6 (8x, 12x, 16x, 24x, 24x), g6e (4x, 8x, 12x, 16x, 24x, 48x) #124

Re-run triggered: March 24, 2025 13:34
Status: Failure
Total duration: 23s
Artifacts: none

Workflow: pre-commit.yml (on: pull_request)
Job: pre-commit (13s)

Annotations: 1 error
pre-commit: Process completed with exit code 1.
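
The failing check is the pre-commit job defined in pre-commit.yml and triggered on pull_request. The repository's actual workflow file is not shown on this page, so the following is only a minimal sketch of a typical setup, assuming the commonly used actions/checkout, actions/setup-python, and pre-commit/action steps:

```yaml
# Hypothetical sketch of pre-commit.yml; the real workflow in the repository may differ.
name: pre-commit

on: pull_request

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      # Check out the pull request's code.
      - uses: actions/checkout@v4
      # Provide a Python interpreter for the pre-commit CLI.
      - uses: actions/setup-python@v5
      # Run all hooks declared in .pre-commit-config.yaml against the repository.
      - uses: pre-commit/action@v3.0.1
```

Exit code 1 from this job usually means one or more hooks failed or modified files; running pre-commit run --all-files locally reproduces the same checks before pushing a fix.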