litellm and lm-proxy

LiteLLM is a mature, feature-rich production gateway that supports 100+ providers and offers enterprise capabilities (cost tracking, guardrails, load balancing), while lm-proxy is a lightweight, minimal OpenAI-compatible wrapper. The two are direct competitors for the same use case (a multi-provider LLM gateway) but target different scales and complexity requirements.

|               | litellm         | lm-proxy         |
| ------------- | --------------- | ---------------- |
| Score         | 98 (Verified)   | 62 (Established) |
| Maintenance   | 25/25           | 10/25            |
| Adoption      | 25/25           | 16/25            |
| Maturity      | 25/25           | 24/25            |
| Community     | 23/25           | 12/25            |
| Stars         | 38,910          | 92               |
| Forks         | 6,381           | 10               |
| Downloads     | 95,199,449      | 1,044            |
| Commits (30d) | 2000            | 0                |
| Language      | Python          | Python           |
| License       |                 | MIT              |
| Risk flags    | None            | None             |

About litellm

BerriAI/litellm

Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]

Supports A2A agent protocols (LangGraph, Vertex AI, Bedrock, Pydantic AI) and MCP tool servers, allowing seamless integration of agentic workflows and tool ecosystems into any LLM. Implements a unified request/response format across all providers, automatically translating native API schemas to OpenAI-compatible endpoints for interoperability. Available as both a Python SDK and stateless proxy server that can be deployed independently or containerized as a central gateway for multi-tenant LLM access.
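The unified request/response idea can be illustrated with a small sketch in plain Python. This is not LiteLLM's actual internals; the "native" field names below are hypothetical stand-ins for a provider-specific response being translated into the OpenAI chat-completion shape that the gateway exposes for every provider.

```python
# Sketch of provider-schema translation, the kind an OpenAI-compatible
# gateway performs internally. The input field names ("text",
# "stop_reason", "input_tokens", ...) are illustrative, not any
# provider's real wire format.

def to_openai_format(native: dict, model: str) -> dict:
    """Map a native provider response onto the OpenAI chat-completion shape."""
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": native["text"]},
                "finish_reason": native.get("stop_reason", "stop"),
            }
        ],
        "usage": {
            "prompt_tokens": native.get("input_tokens", 0),
            "completion_tokens": native.get("output_tokens", 0),
        },
    }

native = {"text": "Hello!", "stop_reason": "stop",
          "input_tokens": 5, "output_tokens": 2}
resp = to_openai_format(native, "anthropic/claude-3-haiku")
print(resp["choices"][0]["message"]["content"])  # Hello!
```

Because every backend is normalized to this one shape, existing OpenAI client code can switch providers by changing only the `model` string.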

About lm-proxy

Nayjest/lm-proxy

OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI—use as library or standalone service.
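The multi-provider routing such a gateway performs can be sketched in plain Python. This is a hypothetical dispatch, not lm-proxy's actual code: it assumes a "provider/model-name" convention in the OpenAI-style `model` field, which selects the backend to forward the request to.

```python
# Hypothetical model-prefix routing for an OpenAI-compatible gateway.
# The provider set and the "provider/model" naming convention are
# illustrative assumptions, not lm-proxy's real configuration.

PROVIDERS = {"openai", "anthropic", "google"}

def route(model: str) -> tuple[str, str]:
    """Split 'provider/model-name' and validate the provider part."""
    provider, _, name = model.partition("/")
    if provider not in PROVIDERS or not name:
        raise ValueError(f"unknown provider in model string: {model!r}")
    return provider, name

print(route("anthropic/claude-3-haiku"))  # ('anthropic', 'claude-3-haiku')
```

In a FastAPI service of this kind, a function like this would sit behind the `/v1/chat/completions` endpoint, choosing which upstream client receives the forwarded request.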

Scores updated daily from GitHub, PyPI, and npm data.