kevbuh/bitnet
Pure PyTorch implementation of Microsoft's BitNet b1.58 2B4T
This is a specialized Large Language Model (LLM) designed for environments where memory and energy are tightly constrained. It generates human-like text much as larger LLMs do, but with far lower computational demands. It is intended for AI engineers and researchers deploying capable language models on constrained devices or in energy-efficient systems.
No commits in the last 6 months.
Use this if you need to deploy a capable LLM for text generation on devices with limited memory, power, or compute, such as edge devices or mobile applications.
Not ideal if you need maximum precision or state-of-the-art performance on complex language tasks and resource constraints are not a concern.
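BitNet b1.58's low resource footprint comes from ternary weights: each weight is one of {-1, 0, +1} (about 1.58 bits) plus a per-tensor scale. A minimal sketch of the absmean quantization scheme described in the BitNet b1.58 paper, in PyTorch to match this repo's language (function name and signature are illustrative, not taken from the repo):

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a weight tensor to {-1, 0, +1} with a per-tensor scale.

    Sketch of the absmean scheme from the BitNet b1.58 paper; names here
    are illustrative and not taken from kevbuh/bitnet.
    """
    scale = w.abs().mean().clamp(min=eps)   # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)  # round-and-clip to ternary values
    return w_q, scale

# Dequantized weights are approximately w_q * scale; a BitLinear-style layer
# runs the matmul on the ternary tensor and applies the scale afterwards.
w = torch.randn(64, 64)
w_q, scale = absmean_ternary_quantize(w)
```

Because the ternary matmul reduces to additions and subtractions, this is what lets the model run with far less memory and energy than a full-precision LLM of the same size.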
Stars: 24
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jul 30, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kevbuh/bitnet"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
intel/neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model...
dropbox/hqq
Official implementation of Half-Quadratic Quantization (HQQ)
OpenGVLab/OmniQuant
[ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
Hsu1023/DuQuant
[NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger...