chopratejas/headroom

The Context Optimization Layer for LLM Applications

Score: 64 / 100 (Established)

Automatically compresses boilerplate from tool outputs, database queries, RAG retrievals, and file reads (typically 70-95% of context) before it is sent to the LLM, cutting token usage while preserving accuracy. Available as a Python/TypeScript SDK function, a transparent HTTP proxy, or framework integrations for LangChain, LiteLLM, Agno, and coding agents (Claude Code, Cursor, Aider). Statistical anomaly detection and adaptive sampling preserve critical information: production workloads show 87-92% token savings with no accuracy loss.
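For illustration, a minimal sketch of the "SDK function" pattern described above. The import path and the compress() call are assumptions made for this sketch, not headroom's confirmed API; check the repository's README for the real entry points.

# Hypothetical sketch: compact a boilerplate-heavy tool output before
# it enters the prompt. `compress` is an assumed name, not confirmed API.
from headroom import compress  # hypothetical entry point

# 59 identical heartbeat lines plus one anomaly the compressor must keep.
lines = [f"2026-03-13T00:00:{i:02d} INFO heartbeat ok" for i in range(59)]
lines.append("2026-03-13T00:00:59 ERROR disk full on /dev/sda1")
raw_tool_output = "\n".join(lines)

# Anomaly detection should retain the ERROR line while collapsing
# the repetitive INFO lines.
compacted = compress(raw_tool_output)
print(len(raw_tool_output), "->", len(compacted), "chars")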

724 stars. Actively maintained with 160 commits in the last 30 days.

No package. No dependents.
Maintenance: 25 / 25
Adoption: 10 / 25
Maturity: 11 / 25
Community: 18 / 25

Stars: 724
Forks: 72
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 160

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/chopratejas/headroom"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
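For programmatic access, here is a minimal Python equivalent of the curl call above. The response schema is not documented on this card, so the sketch simply pretty-prints whatever JSON the endpoint returns:

import json
import urllib.request

# Same endpoint as the curl example; open tier, no API key required.
url = "https://pt-edge.onrender.com/api/v1/quality/rag/chopratejas/headroom"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Field names are not documented here, so print the full payload.
print(json.dumps(data, indent=2))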