connectaman/LoPace
LoPace is a bi-directional encoding framework designed to reduce the storage footprint of long-form LLM system prompts by up to 80% without any bit-level information loss, enabling efficient database indexing and rapid context retrieval.
Implements three compression strategies—Zstd dictionary-based compression, BPE tokenization with binary packing, and a hybrid approach combining both—each tuned for a different trade-off between compression ratio and speed. The framework integrates with `tiktoken` for LLM-compatible tokenization and `zstandard` for general-purpose compression, so it slots into LLM applications and database backends. Throughput reaches 50-200 MB/s for compression and 100-500 MB/s for decompression with under 10 MB of memory overhead, making it suitable for production-scale, multi-user prompt caching.
Available on PyPI.
Stars
3
Forks
2
Language
Python
License
MIT
Category
Last pushed
Feb 17, 2026
Monthly downloads
213
Commits (30d)
0
Dependencies
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/connectaman/LoPace"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
roli-lpci/lintlang
Static linter for AI agent tool descriptions, system prompts, and configs. Catches vague...
LakshmiN5/promptqc
ESLint for your system prompts — catch contradictions, anti-patterns, injection vulnerabilities,...
sbsaga/toon
TOON — Laravel AI package for compact, human-readable, token-efficient data format with JSON ⇄...
nooscraft/tokuin
CLI tool – estimates LLM tokens/costs and runs provider-aware load tests for OpenAI, Anthropic,...
thesupermegabuff/megabuff-cli
🤖 CLI for Better prompts, transparent costs, & zero vendor lock-in. Optimize AI prompts across...