adysec/OllamaR
Ollama load-balancing server | A high-performance, easy-to-configure open-source load balancer optimized for Ollama workloads. It helps improve application availability and response times while making efficient use of system resources.
Supports dynamic backend node management without restarts and deliberately blocks dangerous model manipulation endpoints (`/api/delete`, `/api/pull`, `/api/push`, etc.) to prevent unauthorized modifications. Routes requests through a proxy layer that abstracts the native Ollama service, exposing only safe inference APIs (`/api/chat`, `/api/embed`) and metadata endpoints while storing backend server configurations in an embedded database.
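The deny-by-default filtering described above can be sketched in a few lines. This is an illustrative sketch, not OllamaR's actual code: the function name is invented, and the allowlist is an assumption (the description names `/api/chat` and `/api/embed`; which metadata endpoints are exposed may differ, so `/api/tags` here is a guess).

```python
# Sketch of allowlist-based endpoint filtering for an Ollama proxy layer.
# ALLOWED_PATHS is an assumption for illustration; OllamaR's real set may differ.
from urllib.parse import urlparse

ALLOWED_PATHS = {
    "/api/chat",    # safe inference endpoint (named in the description)
    "/api/embed",   # safe inference endpoint (named in the description)
    "/api/tags",    # assumed metadata endpoint
}

def is_allowed(raw_path: str) -> bool:
    """Deny by default: forward only explicitly allowlisted endpoints.

    Because anything not listed is rejected, /api/delete, /api/pull,
    /api/push, and any future model-manipulation endpoints are blocked
    without needing to be enumerated.
    """
    path = urlparse(raw_path).path.rstrip("/")
    return path in ALLOWED_PATHS
```

The allowlist (rather than blocklist) shape is the point: new dangerous endpoints added upstream stay blocked until someone deliberately opts them in.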
Stars: 185
Forks: 160
Language: —
License: GPL-3.0
Category:
Last pushed: Nov 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/adysec/OllamaR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
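For scripted use, the curl call above can be wrapped in a small Python client. The URL pattern is taken from the example on this page, but the response field names (`stars`, `forks`) are assumptions about the JSON shape, not documented fields.

```python
# Minimal client sketch for the quality API shown above.
# URL pattern from this page; response field names are assumed.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint, matching the curl example."""
    return f"{BASE}/{owner}/{repo}"

def summarize(payload: dict) -> str:
    """One-line summary; 'stars' and 'forks' are assumed key names."""
    return f"{payload.get('stars', '?')} stars, {payload.get('forks', '?')} forks"

def fetch_summary(owner: str, repo: str) -> str:
    # Live network call; counts against the 100 requests/day keyless limit.
    with urlopen(quality_url(owner, repo)) as resp:
        return summarize(json.load(resp))
```

Example: `quality_url("adysec", "OllamaR")` reproduces the URL used in the curl command above.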
Related tools
intelligentnode/IntelliNode
Access the latest AI models like ChatGPT, LLaMA, DeepSeek, Diffusion, Hugging Face, and beyond...
majiayu000/litellm-rs
A high-performance AI Gateway written in Rust — call 100+ LLM APIs using OpenAI format
wpydcr/LLM-Kit
🚀 WebUI-integrated platform for the latest LLMs | a full-workflow WebUI toolkit for major language models...
henomis/lingoose
🪿 LinGoose is a Go framework for building awesome AI/LLM applications.
QwertyMcQwertz/monkeys-with-typewriters
The complete AI platform on a $3 microcontroller. Sub-millisecond inference. Zero hallucinations.