litellm and LLM-API-Key-Proxy
These are **competitors** — both provide an OpenAI-compatible gateway abstraction over multiple LLM providers with load balancing. LiteLLM is the mature, battle-tested option (38k+ GitHub stars, 95M+ downloads), while LLM-API-Key-Proxy is an early-stage alternative that has not yet gained comparable adoption.
About litellm
BerriAI/litellm
Python SDK and proxy server (AI gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, vLLM, NVIDIA NIM]
Supports A2A agent protocols (LangGraph, Vertex AI, Bedrock, Pydantic AI) and MCP tool servers, allowing seamless integration of agentic workflows and tool ecosystems into any LLM-backed application. Implements a unified request/response format across all providers, automatically translating native API schemas to OpenAI-compatible endpoints for interoperability. Available as both a Python SDK and a stateless proxy server that can be deployed independently or containerized as a central gateway for multi-tenant LLM access.
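To illustrate the kind of schema translation such a gateway performs, here is a minimal sketch that maps an Anthropic-style native response into the OpenAI chat-completion shape. This is not LiteLLM's actual implementation — field names follow the public OpenAI and Anthropic response formats, but the mapping logic is a simplified assumption.

```python
def anthropic_to_openai(native: dict) -> dict:
    """Sketch: translate an Anthropic-style message response into the
    OpenAI chat-completion response shape (simplified, text-only)."""
    # Anthropic returns content as a list of typed blocks; join the text ones.
    text = "".join(
        block["text"] for block in native["content"] if block["type"] == "text"
    )
    prompt_toks = native["usage"]["input_tokens"]
    completion_toks = native["usage"]["output_tokens"]
    return {
        "id": native["id"],
        "object": "chat.completion",
        "model": native["model"],
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": text},
                # Anthropic's "end_turn" corresponds to OpenAI's "stop".
                "finish_reason": "stop"
                if native["stop_reason"] == "end_turn"
                else native["stop_reason"],
            }
        ],
        "usage": {
            "prompt_tokens": prompt_toks,
            "completion_tokens": completion_toks,
            "total_tokens": prompt_toks + completion_toks,
        },
    }


# Example with a fabricated Anthropic-style payload:
native_response = {
    "id": "msg_123",
    "model": "claude-3-haiku",
    "content": [{"type": "text", "text": "Hello!"}],
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 10, "output_tokens": 3},
}
openai_response = anthropic_to_openai(native_response)
```

Callers can then consume every provider through one response shape, which is the interoperability property the unified format provides.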
About LLM-API-Key-Proxy
Mirrowel/LLM-API-Key-Proxy
Universal LLM Gateway: One API, every LLM. OpenAI/Anthropic-compatible endpoints with multi-provider translation and intelligent load-balancing.
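The "intelligent load-balancing" claim can be sketched in its simplest form: rotating requests across a pool of provider API keys. The key names below are hypothetical, and real gateways layer on richer policies (weights, health checks, cooldowns after rate-limit errors); this only shows the core round-robin idea.

```python
import itertools


class KeyRotator:
    """Round-robin rotation over a pool of API keys (simplified sketch)."""

    def __init__(self, keys: list[str]):
        # itertools.cycle yields keys in order, wrapping around forever.
        self._cycle = itertools.cycle(keys)

    def next_key(self) -> str:
        return next(self._cycle)


# Hypothetical key pool for one provider:
pool = KeyRotator(["key-a", "key-b", "key-c"])
picks = [pool.next_key() for _ in range(4)]
# Wraps around after the third key: key-a, key-b, key-c, key-a
```

Spreading traffic this way lets a proxy stay under per-key rate limits while presenting a single endpoint to clients.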