rtk-ai/rtk
CLI proxy that reduces LLM token consumption by 60-90% on common dev commands. Single Rust binary, zero dependencies.
Filters and compresses output from 100+ dev commands (git, cargo, npm, pytest, docker, etc.) using strategies such as deduplication, grouping, and truncation before it reaches the LLM's context. Integrates transparently via shell hooks for Claude Code, Copilot, Gemini CLI, and other AI agents, automatically rewriting command invocations without the model being aware. Includes specialized formatters for test runners, linters, and build systems that maximize token efficiency while preserving actionable error and status information.
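Two of the strategies named above, deduplication and truncation, can be sketched in a few lines of Rust. This is a minimal illustration under stated assumptions, not rtk's actual implementation; the function names, the "[repeated Nx]" marker, and the head/tail truncation policy are all hypothetical choices for the example.

```rust
// Sketch of two output-compression strategies: collapsing consecutive
// duplicate lines, and truncating long output while keeping head and tail.
// (Illustrative only; not taken from the rtk codebase.)

/// Collapse runs of identical consecutive lines into one line plus a count marker.
fn dedup_lines(output: &str) -> String {
    let mut result: Vec<String> = Vec::new();
    let mut last: Option<&str> = None;
    let mut count = 0usize;
    for line in output.lines() {
        if Some(line) == last {
            count += 1;
        } else {
            if count > 1 {
                result.push(format!("  [repeated {count}x]"));
            }
            result.push(line.to_string());
            last = Some(line);
            count = 1;
        }
    }
    if count > 1 {
        result.push(format!("  [repeated {count}x]"));
    }
    result.join("\n")
}

/// Keep the first `head` and last `tail` lines, replacing the middle
/// with an omission marker when the output is long enough to matter.
fn truncate_lines(output: &str, head: usize, tail: usize) -> String {
    let lines: Vec<&str> = output.lines().collect();
    if lines.len() <= head + tail {
        return output.to_string();
    }
    let omitted = lines.len() - head - tail;
    let mut kept: Vec<String> = lines[..head].iter().map(|s| s.to_string()).collect();
    kept.push(format!("... [{omitted} lines omitted]"));
    kept.extend(lines[lines.len() - tail..].iter().map(|s| s.to_string()));
    kept.join("\n")
}

fn main() {
    let noisy = "warning: unused import\nwarning: unused import\nwarning: unused import\nerror: mismatched types";
    println!("{}", dedup_lines(noisy));
}
```

Applied in sequence to a compiler or test-runner log, the two passes preserve the distinct error lines an agent needs while discarding the bulk of repetitive output.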
6,644 stars. Actively maintained with 271 commits in the last 30 days.
Stars: 6,644
Forks: 367
Language: Rust
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 271
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rtk-ai/rtk"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
Related tools
jnsahaj/lumen
Beautiful git diff viewer, generate commits with AI, get summary of changes, all from the CLI
jkawamoto/ctranslate2-rs
Rust bindings for OpenNMT/CTranslate2
Topos-Labs/infiniloom
High-performance repository context generator for LLMs - Transform codebases into optimized...
mpecan/tokf
Config-driven CLI tool that compresses command output before it reaches an LLM context
mohsen1/yek
A fast Rust based tool to serialize text-based files in a repository or directory for LLM consumption