chatde/tokenshrink
Same AI, fewer tokens. Free forever. — tokenshrink.com
Performs multi-phase text compression (phrase removal, word abbreviation, pattern detection), with every change verified against the cl100k_base tokenizer to guarantee real token savings; no LLM calls and no added latency. Includes a "Rosetta Stone" decoder header so compressed prompts remain understandable to any model, plus domain-specific strategies (code, medical, legal, business) and pluggable tokenizer support for exact counts across different LLMs.
Available on npm.
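The core mechanism described above is a verification loop: apply a compression pass, re-count tokens with the target tokenizer, and keep a change only if it actually shrinks the encoding. A minimal sketch of that loop, assuming the js-tiktoken npm package for cl100k_base counts; the phrase lists and function names here are illustrative, not tokenshrink's actual API:

// Sketch of the verify-against-the-tokenizer idea; illustrative only, not tokenshrink's real API.
// Assumes the js-tiktoken npm package, which bundles the cl100k_base encoding.
import { getEncoding } from "js-tiktoken";

const enc = getEncoding("cl100k_base");
const countTokens = (text) => enc.encode(text).length;

// Phase 1: remove filler phrases. Phase 2: abbreviate common long words.
// Both lists are made up for this example.
const FILLER_PHRASES = [/\bplease note that\b/gi, /\bin order to\b/gi];
const ABBREVIATIONS = [
  [/\binformation\b/gi, "info"],
  [/\bconfiguration\b/gi, "config"],
];

function compress(text) {
  let out = text;
  for (const re of FILLER_PHRASES) out = out.replace(re, "");
  for (const [re, short] of ABBREVIATIONS) out = out.replace(re, short);
  return out.replace(/\s+/g, " ").trim();
}

// Verification step: keep the compressed form only if it really costs fewer tokens.
function shrink(text) {
  const compressed = compress(text);
  return countTokens(compressed) < countTokens(text) ? compressed : text;
}

const prompt = "Please note that in order to update the configuration, read the information below.";
console.log(shrink(prompt)); // "update the config, read the info below."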
Stars
7
Forks
—
Language
JavaScript
License
MIT
Category
Prompt engineering
Last pushed
Mar 10, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/chatde/tokenshrink"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
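The same endpoint can also be called programmatically; a minimal sketch using the built-in fetch in Node 18+ (the response schema is not documented here, so it just pretty-prints whatever JSON comes back):

// Node 18+ ESM (.mjs): global fetch and top-level await are available.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/chatde/tokenshrink";

const res = await fetch(url);
if (!res.ok) throw new Error(`Request failed: ${res.status} ${res.statusText}`);
console.log(JSON.stringify(await res.json(), null, 2));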
Higher-rated alternatives
connectaman/LoPace
LoPace is a bi-directional encoding framework designed to reduce the storage footprint of...
roli-lpci/lintlang
Static linter for AI agent tool descriptions, system prompts, and configs. Catches vague...
LakshmiN5/promptqc
ESLint for your system prompts — catch contradictions, anti-patterns, injection vulnerabilities,...
sbsaga/toon
TOON — Laravel AI package for compact, human-readable, token-efficient data format with JSON ⇄...
nooscraft/tokuin
CLI tool – estimates LLM tokens/costs and runs provider-aware load tests for OpenAI, Anthropic,...