rtk-ai/rtk

CLI proxy that reduces LLM token consumption by 60-90% on common dev commands. Single Rust binary, zero dependencies.

Quality score: 64 / 100 (Established)

Filters and compresses output from 100+ dev commands (git, cargo, npm, pytest, docker, etc.) using strategies such as deduplication, grouping, and truncation before the output reaches the LLM context. Integrates transparently via shell hooks with Claude Code, Copilot, Gemini CLI, and other AI agents, automatically rewriting command invocations without the model being aware of it. Includes specialized formatters for test runners, linters, and build systems that maximize token efficiency while preserving actionable error and status information.
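As a rough illustration of the deduplication and truncation strategies described above, here is a minimal Rust sketch (not rtk's actual implementation; the function name `compress` and the head/tail truncation heuristic are assumptions for illustration):

```rust
// Sketch of rtk-style output compression: collapse consecutive duplicate
// lines into one line with a repeat count, then truncate long output
// while keeping the head and tail (errors often cluster at the end).
// Hypothetical example, not rtk's real code.

fn compress(output: &str, max_lines: usize) -> String {
    let mut result: Vec<String> = Vec::new();
    let mut last: Option<(&str, usize)> = None;

    // Pass 1: deduplicate consecutive identical lines.
    for line in output.lines() {
        match last {
            Some((prev, count)) if prev == line => last = Some((prev, count + 1)),
            Some((prev, count)) => {
                result.push(if count > 1 {
                    format!("{prev} (x{count})")
                } else {
                    prev.to_string()
                });
                last = Some((line, 1));
            }
            None => last = Some((line, 1)),
        }
    }
    if let Some((prev, count)) = last {
        result.push(if count > 1 {
            format!("{prev} (x{count})")
        } else {
            prev.to_string()
        });
    }

    // Pass 2: truncate the middle if the result is still too long.
    if result.len() > max_lines {
        let keep = max_lines / 2;
        let omitted = result.len() - 2 * keep;
        let tail = result.split_off(result.len() - keep);
        result.truncate(keep);
        result.push(format!("... {omitted} lines omitted ..."));
        result.extend(tail);
    }
    result.join("\n")
}

fn main() {
    let raw = "warning: unused import\nwarning: unused import\nwarning: unused import\nerror: mismatched types";
    println!("{}", compress(raw, 10));
}
```

Running `main` collapses the three repeated warnings into a single counted line while the error survives verbatim, which is the general trade-off the project's formatters aim for: fewer tokens, same actionable signal.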

6,644 stars. Actively maintained with 271 commits in the last 30 days.

No package published; no dependents.
Maintenance: 25 / 25
Adoption: 10 / 25
Maturity: 11 / 25
Community: 18 / 25


Stars: 6,644
Forks: 367
Language: Rust
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 271

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rtk-ai/rtk"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.