KenyonY/flexllm
High-performance LLM client for production batch processing, with checkpoint recovery, response caching, load balancing, and cost tracking
2 stars and 1,896 monthly downloads. Available on PyPI.
Stars: 2
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Mar 17, 2026
Monthly downloads: 1,896
Commits (30d): 0
Dependencies: 14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/KenyonY/flexllm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...
coaidev/coai
🚀 Next Generation Multi-tenant AI One-Stop Solution. Builtin Admin & Billing System....
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.