muxi-ai/onellm
Unified interface for interacting with various LLMs: hundreds of models, caching, fallback mechanisms, and enhanced reliability.
Provides semantic caching with intelligent deduplication to reduce API costs by 50-80%, connection pooling for sequential call optimization, and automatic fallback/retry mechanisms across 22 cloud and local providers. The library maintains OpenAI API compatibility, allowing single-import migration while exposing consistent `provider/model-name` namespacing for multi-provider workloads. Supports streaming, multi-modal inputs, and local inference via Ollama and llama.cpp alongside cloud APIs.
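The `provider/model-name` namespacing described above can be illustrated with a small helper that splits such an identifier into its provider and model parts. This is a standalone sketch of the naming convention, not OneLLM's actual internal API; the function name `split_model_id` is hypothetical.

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider/model-name' identifier (hypothetical helper).

    The provider is everything before the first '/'; the rest is the
    model name, which may itself contain slashes.
    """
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model-name', got {model_id!r}")
    return provider, model

# Example identifiers in this style (model names are illustrative):
# split_model_id("openai/gpt-4o")      -> ("openai", "gpt-4o")
# split_model_id("ollama/llama3")      -> ("ollama", "llama3")
```

Keeping the provider prefix explicit is what lets a single client object route requests across many backends without per-provider imports.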
Available on PyPI.
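The fallback/retry mechanism mentioned above follows a common pattern: retry a provider a few times on transient errors, then fall through to the next provider in the list. The sketch below shows that generic pattern only, under assumed names (`call_with_fallback`, callables standing in for provider clients); it is not OneLLM's implementation.

```python
import time

def call_with_fallback(providers, prompt, retries=2, backoff=0.0):
    """Try each provider callable in order (generic sketch, not OneLLM's API).

    Each provider is retried `retries` times with exponential backoff on
    failure before moving on to the next one in the list.
    """
    last_err = None
    for call in providers:
        for attempt in range(retries):
            try:
                return call(prompt)
            except RuntimeError as err:  # stand-in for a provider/API error type
                last_err = err
                time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("all providers failed") from last_err
```

In practice the backoff delay and the caught exception type would be provider-specific; the control flow (retry, then fall back) is the part the library's description refers to.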
Stars
44
Forks
3
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 10, 2026
Monthly downloads
994
Commits (30d)
0
Dependencies
10
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/muxi-ai/onellm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
mgonzs13/llama_ros
llama.cpp (GGUF LLMs) and llava.cpp (GGUF VLMs) for ROS 2
Atome-FE/llama-node
Believe in AI democratization. llama for nodejs backed by llama-rs, llama.cpp and rwkv.cpp, work...
docusealco/rllama
Ruby FFI bindings for llama.cpp to run open-source LLMs such as GPT-OSS, Qwen 3, Gemma 3, and...
Rin313/StegLLM
Offline LLM text steganography program.