LessUp/tiny-llm
Lightweight LLM Inference Engine (CUDA C++17): W8A16 quantized inference, KV cache management, and multiple sampling strategies
Overall score: 22 / 100 (Experimental)
No Package · No Dependents

Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 9 / 25
Community: 0 / 25

Stars: —
Forks: —
Language: CUDA
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/LessUp/tiny-llm"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
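The same endpoint can be called from code. A minimal Python sketch, assuming the endpoint returns JSON (the response schema is not documented here, so inspect the real payload before relying on any field names):

```python
# Minimal sketch: build the quality-API URL for a repo and fetch it.
# The endpoint path mirrors the curl command above; the JSON schema
# of the response is an assumption, not documented in this page.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API endpoint URL for a given repo."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example: the same repo as the curl command above.
print(quality_url("llm-tools", "LessUp", "tiny-llm"))
```

The unauthenticated limit of 100 requests/day makes this suitable for ad-hoc lookups; batch consumers should use a key.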
Higher-rated alternatives
ggml-org/ggml (68/100): Tensor library for machine learning
onnx/ir-py (55/100): Efficient in-memory representation for ONNX, in Python
SandAI-org/MagiCompiler (49/100): A plug-and-play compiler that delivers free-lunch optimizations for both inference and training
bytedance/lightseq (46/100): A high-performance library for sequence processing and generation
R-D-BioTech-Alaska/Qelm (45/100): Quantum Enhanced Language Model