Huzaifa785/context-compressor
AI-powered text compression library for RAG systems and API calls. Reduces token usage by 50-60% while preserving semantic meaning through advanced compression strategies.
Implements four distinct compression strategies (extractive, abstractive, semantic, hybrid) leveraging transformer models like BERT and BART, with query-aware optimization to prioritize relevant content. Provides comprehensive quality evaluation via ROUGE scores, semantic similarity, and entity preservation metrics alongside token counting. Integrates directly with LangChain pipelines, OpenAI/Anthropic APIs, and includes a production-ready FastAPI service with Docker deployment support.
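To make the query-aware extractive strategy concrete, here is a minimal, dependency-free sketch of the underlying idea: score each sentence by its overlap with the query, keep the top fraction, and preserve document order. This is an illustration of the technique only, not context-compressor's actual API; the library scores relevance with transformer models rather than simple term overlap.

```python
# Illustrative query-aware extractive compression (NOT the library's API).
# Real implementations use embedding similarity; term overlap stands in here.
import re

def compress(text: str, query: str, keep_ratio: float = 0.5) -> str:
    """Keep the sentences most relevant to `query`, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    query_terms = set(query.lower().split())
    # Score each sentence by how many query terms it contains.
    scored = [
        (sum(term in s.lower() for term in query_terms), i, s)
        for i, s in enumerate(sentences)
    ]
    n_keep = max(1, round(len(sentences) * keep_ratio))
    top = sorted(scored, reverse=True)[:n_keep]
    # Restore document order so the compressed text stays coherent.
    return " ".join(s for _, i, s in sorted(top, key=lambda t: t[1]))
```

At a 0.5 keep ratio this drops roughly half the sentences, which is where the 50-60% token-reduction figures for extractive compression come from; quality metrics like ROUGE then measure how much meaning the kept sentences retain.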
Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Stars: 80
Forks: 13
Language: Python
License: MIT
Category:
Last pushed: Aug 16, 2025
Commits (30d): 0
Dependencies: 26
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Huzaifa785/context-compressor"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Related tools
chopratejas/headroom
The Context Optimization Layer for LLM Applications
Meirtz/Awesome-Context-Engineering
🔥 Comprehensive survey on Context Engineering: from prompt engineering to production-grade AI...
puppyone-ai/puppyone
The context file system for agents. Connect, govern, and share context across all agents.
redleaves/context-keeper
🧠 LLM-Driven Intelligent Memory & Context Management System (AI memory management and intelligent context-awareness platform) AI memory management platform |...
kriasoft/srcpack
Zero-config CLI that bundles your codebase into LLM-optimized context files. Create semantic,...