GPUStack vs. xLLM
These tools compete in the LLM serving space: GPUStack provides a platform for deploying, managing, and tuning inference engines such as vLLM and SGLang across GPUs, while xLLM is itself a high-performance inference engine, optimized for a range of AI accelerators.
Scores (out of 25):

              gpustack/gpustack   jd-opensource/xllm
Maintenance   25/25               25/25
Adoption      10/25               10/25
Maturity      16/25               15/25
Community     20/25               22/25
Repository stats:

              gpustack/gpustack   jd-opensource/xllm
Stars         4,630               1,081
Forks         472                 149
Downloads     —                   —
Commits (30d) 95                  136
Language      Python              C++
License       Apache-2.0          —
Package       none published      none published
Dependents    none                none
About gpustack
gpustack/gpustack
Performance-optimized AI inference on your GPUs. Unlock superior throughput by selecting and tuning engines like vLLM or SGLang.
About xllm
jd-opensource/xllm
A high-performance inference engine for LLMs, optimized for diverse AI accelerators.
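Both projects ultimately serve LLM inference over HTTP, typically through an OpenAI-compatible chat-completions endpoint (GPUStack via the engines it manages, xLLM via its own server). As a minimal client-side sketch only, assuming a server is already running locally, with the base URL, port, and model name all illustrative placeholders rather than defaults shipped by either project:

```python
import json
from urllib import request


def build_chat_request(prompt: str, model: str = "my-model") -> dict:
    # Assemble an OpenAI-style chat-completion payload. The model name
    # is a placeholder; use whatever model the server has loaded.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def chat(prompt: str,
         base_url: str = "http://localhost:8080/v1",
         model: str = "my-model") -> str:
    # POST the payload to a locally running server; URL and port are
    # assumptions and depend on how the server was started.
    payload = build_chat_request(prompt, model)
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same client works against either stack, which is what makes benchmarking them head to head straightforward: only the server process behind the port changes.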
Scores updated daily from GitHub, PyPI, and npm data.