Strato-Ai/Spectre-ai-inference-loadbalancer
NGINX-based load balancer for compute offload to multiple AI backends with model-aware routing, GPU health monitoring, and multi-platform support (NVIDIA / Apple Silicon / CPU)
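The description mentions model-aware routing with GPU health monitoring across heterogeneous backends. A minimal sketch of how such routing can look in plain NGINX follows; all hostnames, ports, and the `X-Model` header are illustrative assumptions, not taken from the repository.

```nginx
# Illustrative model-aware routing sketch (hosts/ports are assumptions).
upstream llama_pool {
    least_conn;
    # max_fails/fail_timeout give passive health checks in open-source NGINX;
    # active health probes need NGINX Plus or an external checker.
    server 10.0.0.11:8080 max_fails=2 fail_timeout=30s;  # NVIDIA backend
    server 10.0.0.12:8080 max_fails=2 fail_timeout=30s;  # Apple Silicon backend
}

upstream cpu_pool {
    server 10.0.0.20:8080;  # CPU fallback
}

# Assumption: the client names the model in an X-Model request header;
# the real project may instead inspect the JSON request body.
map $http_x_model $target_pool {
    default   cpu_pool;
    ~^llama   llama_pool;
}

server {
    listen 80;
    location /v1/ {
        proxy_pass http://$target_pool;
        proxy_read_timeout 300s;  # allow long generations
    }
}
```

Routing on a header keeps the balancer stateless; body-based routing would require buffering each request before an upstream is chosen.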
Stars: 1
Forks: —
Language: HTML
License: MIT
Category:
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Strato-Ai/Spectre-ai-inference-loadbalancer"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway