muxi-ai/onellm

Unified interface for interacting with LLMs across hundreds of models, with caching, fallback mechanisms, and enhanced reliability.

Quality score: 59 / 100 (Established)

Provides semantic caching with intelligent deduplication to reduce API costs by 50-80%, connection pooling for sequential call optimization, and automatic fallback/retry mechanisms across 22 cloud and local providers. The library maintains OpenAI API compatibility, allowing single-import migration while exposing consistent `provider/model-name` namespacing for multi-provider workloads. Supports streaming, multi-modal inputs, and local inference via Ollama and llama.cpp alongside cloud APIs.
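The `provider/model-name` namespacing described above can be sketched as a simple split on the first slash. This is an illustrative, standalone parser to show the convention, not onellm's actual API; the function name and the default-provider fallback are assumptions:

```python
def parse_model_id(model_id: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split a 'provider/model-name' identifier into (provider, model).

    Hypothetical helper illustrating the namespacing convention;
    not part of the onellm package itself.
    """
    if "/" in model_id:
        # Split only on the first slash so model names may contain slashes.
        provider, model = model_id.split("/", 1)
        return provider, model
    # Bare model names fall back to an assumed default provider.
    return default_provider, model_id


# Example identifiers in the provider/model-name style:
print(parse_model_id("anthropic/claude-3-haiku"))  # ('anthropic', 'claude-3-haiku')
print(parse_model_id("ollama/llama3:8b"))          # ('ollama', 'llama3:8b')
print(parse_model_id("gpt-4o"))                    # ('openai', 'gpt-4o')
```

A router built this way can dispatch the same OpenAI-style request to cloud APIs or to local backends such as Ollama, depending only on the prefix.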

Available on PyPI.

Maintenance: 13 / 25
Adoption: 15 / 25
Maturity: 24 / 25
Community: 7 / 25


Stars: 44
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Mar 10, 2026
Monthly downloads: 994
Commits (30d): 0
Dependencies: 10

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/muxi-ai/onellm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
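The curl call above can also be made from Python with only the standard library. A minimal sketch; the response schema is not documented here, so the code just decodes whatever JSON the endpoint returns:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(repo_path: str) -> str:
    """Build the quality-endpoint URL for a repo path, as shown above."""
    return f"{API_BASE}/{repo_path}"


def fetch_quality(repo_path: str) -> dict:
    """GET the quality data for a repo and decode the JSON body."""
    with urllib.request.urlopen(quality_url(repo_path)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Uses the same path as the curl example (network access required).
    data = fetch_quality("transformers/muxi-ai/onellm")
    print(json.dumps(data, indent=2))
```

Unauthenticated requests are rate-limited as noted above, so anything beyond occasional lookups should use a key.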