robtacconelli/Nacrith-GPU
Nacrith — Lossless text compression via ensemble neural arithmetic coding. Combines the SmolLM2-135M language model with context mixing, adaptive prediction, and high-precision CDF coding. Achieves 3.1× better compression than gzip and outperforms CMIX, ts_zip, LLMZip, and FineZip. Fully lossless. GPU accelerated.
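The core idea behind neural arithmetic coding is to let a probability model assign each symbol a sub-interval of [0, 1) proportional to its predicted probability, so likely symbols cost few bits. The toy sketch below uses a static frequency model and exact rational arithmetic for clarity; Nacrith itself uses SmolLM2-135M's per-token predictions and fixed-precision integer CDFs, so treat this as an illustration of the principle, not the project's implementation.

```python
from collections import Counter
from fractions import Fraction

def build_cdf(text):
    """Map each symbol to a [lo, hi) sub-interval of [0, 1) by frequency."""
    counts = Counter(text)
    total = sum(counts.values())
    cdf, lo = {}, Fraction(0)
    for s in sorted(counts):
        hi = lo + Fraction(counts[s], total)
        cdf[s] = (lo, hi)
        lo = hi
    return cdf

def encode(symbols, cdf):
    """Narrow [low, low+width) once per symbol; return a point inside it."""
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        lo, hi = cdf[s]
        low += width * lo
        width *= hi - lo
    return low + width / 2  # any value in the final interval decodes correctly

def decode(value, cdf, n):
    """Invert the narrowing: find which sub-interval contains the value."""
    out, low, width = [], Fraction(0), Fraction(1)
    for _ in range(n):
        t = (value - low) / width
        for s, (lo, hi) in cdf.items():
            if lo <= t < hi:
                out.append(s)
                low += width * lo
                width *= hi - lo
                break
    return "".join(out)

msg = "abracadabra"
cdf = build_cdf(msg)
code = encode(msg, cdf)
assert decode(code, cdf, len(msg)) == msg  # lossless round trip
```

Swapping the static `build_cdf` model for an adaptive, LM-driven one changes only where the intervals come from; the encode/decode narrowing stays the same, which is why better predictions translate directly into shorter codes.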
Stars: 17
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Mar 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/robtacconelli/Nacrith-GPU"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Tencent/AngelSlim
Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency.
nebuly-ai/optimate
A collection of libraries to optimise AI model performances
kyo-takano/chinchilla
A toolkit for scaling law research ⚖
liyucheng09/Selective_Context
Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40%...
antgroup/glake
GLake: optimizing GPU memory management and IO transmission.