jmuncor/tokentap
Intercept LLM API traffic and visualize token usage in a real-time terminal dashboard. Track costs, debug prompts, and monitor context window usage across your AI development sessions.
Acts as a local HTTP proxy that intercepts API calls by redirecting provider base URLs (e.g., `ANTHROPIC_BASE_URL=localhost:8080`), enabling zero-configuration interception without certificate installation. Supports Claude Code, OpenAI Codex, and Gemini CLI, automatically archiving each request as markdown and JSON for prompt debugging and analysis.
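The redirection described above is just an environment variable, so enabling interception for a session looks roughly like this. This is a sketch based on the `ANTHROPIC_BASE_URL=localhost:8080` example in the description; the exact launch command and port are whatever the project's README specifies.

```shell
# With the tokentap proxy already running on localhost:8080 (port taken
# from the example above), redirect Claude Code's API traffic through it:
export ANTHROPIC_BASE_URL=http://localhost:8080

# From here on, every Anthropic API call made in this shell session passes
# through the local proxy, which archives each request as markdown/JSON
# and updates the terminal dashboard. Unset the variable to go direct again:
#   unset ANTHROPIC_BASE_URL
```

Because the interception point is the base URL rather than a TLS intercept, no certificate needs to be installed and the client tooling is otherwise unmodified.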
761 stars and 220 monthly downloads. Available on PyPI.
Stars: 761
Forks: 36
Language: Python
License: MIT
Category:
Last pushed: Feb 02, 2026
Monthly downloads: 220
Commits (30d): 0
Dependencies: 4
Get this data via API
`curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jmuncor/tokentap"`
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
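The same endpoint can be queried from Python with the standard library alone. The URL shape is taken from the curl example above; the response schema is not documented here, so the sketch just parses the JSON and leaves field access to the caller.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo.

    Path shape mirrors the curl example:
    /api/v1/quality/<category>/<owner>/<name>
    """
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    # No API key needed for up to 100 requests/day; with a free key,
    # the limit is 1,000/day (per the access note above).
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Usage (makes a live network request):
#   data = fetch_quality("llm-tools", "jmuncor/tokentap")
#   print(json.dumps(data, indent=2))
```

Inspect the returned dictionary to discover the actual field names; nothing in the listing specifies them.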
Related tools
AgentOps-AI/tokencost
Easy token price estimates for 400+ LLMs. TokenOps.
Merit-Systems/echo
The User Pays AI SDK
Ruthwik000/tokenfirewall
Scalable LLM cost enforcement middleware for Node.js with budget protection and multi-provider support
azat-io/token-limit
🛰 Monitor how many tokens your code and configs consume in AI tools. Set budgets and get alerts...
yagil/tokmon
CLI to monitor your program's OpenAI API token usage.