ghoshsoham71/tokentaxi
tokentaxi is a lightweight Python library that sits between your application and your LLM providers. It routes every request based on real-time provider health, rate-limit headroom, latency, and request priority, adding zero extra network hops and requiring no external dependencies beyond an optional Redis connection.
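The description above can be illustrated with a minimal routing sketch: filter out unhealthy providers, then score the rest on rate-limit headroom and latency, weighting headroom more heavily for high-priority requests. Note that the `Provider` class, field names, and scoring formula here are invented for illustration; they are not tokentaxi's actual API.

```python
# Illustrative priority/health-aware provider routing, in the spirit of
# the description above. NOT tokentaxi's real interface.
from dataclasses import dataclass


@dataclass
class Provider:
    name: str
    healthy: bool               # result of a recent health/error-rate check
    rate_limit_headroom: float  # fraction of quota remaining, 0.0-1.0
    latency_ms: float           # rolling average response latency


def pick_provider(providers, priority: str = "normal") -> Provider:
    """Choose the best provider: skip unhealthy ones, then score the rest.

    High-priority requests weight headroom more heavily, so they are less
    likely to hit a rate limit mid-burst.
    """
    candidates = [p for p in providers if p.healthy]
    if not candidates:
        raise RuntimeError("no healthy providers available")
    headroom_weight = 2.0 if priority == "high" else 1.0
    # Higher headroom raises the score; higher latency lowers it.
    return max(
        candidates,
        key=lambda p: headroom_weight * p.rate_limit_headroom - p.latency_ms / 1000.0,
    )


providers = [
    Provider("openai", healthy=True, rate_limit_headroom=0.2, latency_ms=300.0),
    Provider("anthropic", healthy=True, rate_limit_headroom=0.9, latency_ms=450.0),
    Provider("offline", healthy=False, rate_limit_headroom=1.0, latency_ms=100.0),
]
print(pick_provider(providers).name)  # → anthropic
```

Because the scoring is a pure in-process function over cached health stats, routing adds no network round trip, which matches the "zero extra network hops" claim.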
Stars: —
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Mar 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ghoshsoham71/tokentaxi"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
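The same endpoint shown in the curl command can be called from Python with the standard library alone. Only the URL comes from the page above; the response schema and any authentication parameters are not documented here, so the JSON is returned as an untyped dict.

```python
# Fetch a repository's quality report from the pt-edge API.
# Endpoint URL taken from the curl example above; response fields unknown.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON quality report for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)


print(quality_url("ghoshsoham71", "tokentaxi"))
```

At the free tier this is limited to 100 requests/day, so cache responses rather than refetching per page view.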
Higher-rated alternatives
AgentOps-AI/tokencost: Easy token price estimates for 400+ LLMs. TokenOps.
jmuncor/tokentap: Intercept LLM API traffic and visualize token usage in a real-time terminal dashboard. Track...
Merit-Systems/echo: The User Pays AI SDK
Ruthwik000/tokenfirewall: Scalable LLM cost enforcement middleware for Node.js with budget protection and multi-provider support
azat-io/token-limit: 🛰 Monitor how many tokens your code and configs consume in AI tools. Set budgets and get alerts...