connectaman/LoPace

LoPace is a lossless, bi-directional encoding framework designed to reduce the storage footprint of long-form LLM system prompts by up to 80%, enabling efficient database indexing and rapid context retrieval.

Quality score: 49 / 100 (Emerging)

Implements three distinct compression strategies—Zstd dictionary-based compression, BPE tokenization with binary packing, and a hybrid approach combining both—with compression ratios that vary by strategy and input. The framework integrates with `tiktoken` for LLM-compatible tokenization and `zstandard` for general-purpose compression, enabling straightforward integration into LLM applications and database backends. Reported throughput is 50-200 MB/s for compression and 100-500 MB/s for decompression with sub-10 MB memory overhead, making it suitable for production-scale, multi-user prompt caching.
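The hybrid idea—tokenize the prompt, pack token ids into fixed-width binary, then run a general-purpose compressor over the packed bytes—can be sketched with the standard library alone. This is a toy illustration, not LoPace's implementation: a whitespace vocabulary stands in for `tiktoken`'s BPE, and `zlib` stands in for `zstandard`.

```python
import struct
import zlib

def pack_tokens(token_ids):
    """Pack token ids into 16-bit little-endian integers.

    Real BPE vocabularies (e.g. tiktoken's cl100k_base, ~100k entries)
    need wider ids; this toy packer assumes every id fits in 16 bits.
    """
    return struct.pack(f"<{len(token_ids)}H", *token_ids)

def hybrid_compress(text, vocab):
    # Stand-in for BPE tokenization: map whitespace-split words to ids,
    # growing the vocabulary as new words appear.
    ids = [vocab.setdefault(w, len(vocab)) for w in text.split()]
    # Stand-in for zstandard: zlib at its highest compression level.
    return zlib.compress(pack_tokens(ids), level=9)

def hybrid_decompress(blob, vocab):
    inv = {i: w for w, i in vocab.items()}
    packed = zlib.decompress(blob)
    ids = struct.unpack(f"<{len(packed) // 2}H", packed)
    return " ".join(inv[i] for i in ids)

vocab = {}
prompt = "you are a helpful assistant you are concise you are a helpful assistant"
blob = hybrid_compress(prompt, vocab)
assert hybrid_decompress(blob, vocab) == prompt  # lossless round trip
```

Packing before compressing helps because repeated multi-byte tokens collapse to repeated fixed-width integers, which the entropy coder then exploits; the real framework gains further from Zstd's trained dictionaries on prompt-like corpora.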

Available on PyPI.

Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 18 / 25
Community: 13 / 25


Stars: 3
Forks: 2
Language: Python
License: MIT
Last pushed: Feb 17, 2026
Monthly downloads: 213
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/connectaman/LoPace"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.