defilantech/LLMKube
Kubernetes operator for GPU-accelerated LLM inference - air-gapped, edge-native, production-ready
Overall score: 41 / 100 (Emerging)
No package published · No known dependents
Score breakdown (each out of 25):
- Maintenance: 13 / 25
- Adoption: 7 / 25
- Maturity: 9 / 25
- Community: 12 / 25
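The card does not document its scoring formula, but the four category scores above (each out of 25) sum exactly to the overall /100 score, which suggests a simple additive model. A quick shell check of the arithmetic:

```shell
# Category scores from this card:
# Maintenance 13, Adoption 7, Maturity 9, Community 12 (each out of 25)
total=$((13 + 7 + 9 + 12))
echo "$total"  # prints 41, matching the 41 / 100 overall score
```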
Stars: 29
Forks: 4
Language: Go
License: Apache-2.0
Category: (not listed)
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/defilantech/LLMKube"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Higher-rated alternatives:
- kubeflow/katib (67): Automated Machine Learning on Kubernetes
- beam-cloud/beta9 (65): Ultrafast serverless GPU inference, sandboxes, and background jobs
- sgl-project/rbg (60): A workload for deploying LLM inference services on Kubernetes
- kubeai-project/kubeai (59): AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports...
- scitix/arks (46): Arks is a cloud-native inference framework running on Kubernetes