hofong428/Optimizing-GPU-Kernels
Category: LLM Serving & Inference Optimization
Overall score: 18 / 100 (Experimental)
Warnings: no commits in the last 6 months; no license; stale for 6 months; not published as a package; no known dependents.
Score breakdown:
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 1 / 25
Community: 13 / 25
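The headline number appears to be the plain sum of the four 25-point axes (0 + 4 + 1 + 13 = 18 of 100). A minimal sketch of that arithmetic; the dictionary keys are illustrative names, not fields from the actual API:

```python
# Sub-scores as shown on the page; each axis is scored out of 25.
# Key names are assumptions for illustration only.
subscores = {
    "maintenance": 0,
    "adoption": 4,
    "maturity": 1,
    "community": 13,
}

# The overall score appears to be the sum of the four axes.
overall = sum(subscores.values())
print(overall)  # 18, matching the 18 / 100 shown above
```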
Stars: 8
Forks: 2
Language: —
License: —
Last pushed: Oct 15, 2024
Commits (last 30 days): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/hofong428/Optimizing-GPU-Kernels"
Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
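For scripting against the endpoint, the URL can be assembled from its category/owner/repo path segments. A small helper sketch; the function name and the path-segment quoting are assumptions, only the URL shape itself comes from the page:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL.

    Hypothetical helper: only the URL layout is taken from the page;
    quoting each path segment is a defensive assumption, not
    documented API behavior.
    """
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("llm-tools", "hofong428", "Optimizing-GPU-Kernels"))
```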
Higher-rated alternatives:
vllm-project/vllm-ascend (score 76): Community-maintained hardware plugin for vLLM on Ascend.
SemiAnalysisAI/InferenceX (score 72): Open Source Continuous Inference Benchmarking Qwen3.5, DeepSeek, GPTOSS - GB200 NVL72 vs MI355X...
kvcache-ai/Mooncake (score 72): Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
uccl-project/uccl (score 71): UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache...
sophgo/tpu-mlir (score 71): Machine learning compiler based on MLIR for the Sophgo TPU.