luckystar-pear/llm-compress
Compress context data to optimize memory and performance in C++ large language model applications within the llm-cpp toolkit.
Score: 22 / 100
Experimental · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 9 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: C++
License: MIT
Category: —
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/luckystar-pear/llm-compress"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
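The endpoint above can be called from any HTTP client. As a minimal sketch, the helper below builds the same URL as the curl example and tallies a per-category breakdown; note that the response field names used here (`score`, `breakdown`) are assumptions for illustration, since the API's actual JSON schema is not documented on this page.

```python
import json
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    # Build the endpoint URL following the path layout in the curl example:
    # /api/v1/quality/<registry>/<owner>/<repo>
    return f"{API_BASE}/{quote(registry)}/{quote(repo, safe='/')}"

def total_from_breakdown(breakdown: dict) -> int:
    # Sum the four 0-25 sub-scores (Maintenance, Adoption, Maturity, Community)
    # to recover the overall 0-100 score.
    return sum(breakdown.values())

# Hypothetical response body -- field names are an assumption, not a documented schema.
sample = json.loads(
    '{"score": 22, "breakdown":'
    ' {"maintenance": 13, "adoption": 0, "maturity": 9, "community": 0}}'
)

url = quality_url("llm-tools", "luckystar-pear/llm-compress")
print(url)
print(total_from_breakdown(sample["breakdown"]))  # 13 + 0 + 9 + 0 = 22
```

The sub-scores shown on this card (13 + 0 + 9 + 0) do sum to the overall score of 22, which is what `total_from_breakdown` illustrates.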
Higher-rated alternatives
ggml-org/ggml (68): Tensor library for machine learning
onnx/ir-py (55): Efficient in-memory representation for ONNX, in Python
SandAI-org/MagiCompiler (49): A plug-and-play compiler that delivers free-lunch optimizations for both inference and training
bytedance/lightseq (46): LightSeq, a high-performance library for sequence processing and generation
R-D-BioTech-Alaska/Qelm (45): Quantum Enhanced Language Model