huawei-csl/SINQ
Welcome to the official repository of SINQ, a novel, fast, high-quality quantization method designed to make any large language model smaller while preserving accuracy.
SINQ employs Sinkhorn-normalized dual-scaling (separate row and column scale factors) to mitigate outlier sensitivity and distribute quantization error more evenly across weight matrices, enabling high accuracy at ultra-low bit-widths (3-4 bits). It is calibration-free and model-agnostic, supports symmetric/asymmetric quantization and NF4, integrates natively into Hugging Face Transformers via `SinqConfig`, and offers pre-quantized GGUF models through a dedicated Hugging Face collection.
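The dual-scaling idea above can be illustrated with a small NumPy sketch: alternately rescale rows and columns (Sinkhorn-style) so that no single outlier row or column dominates the quantization grid, then round-to-nearest quantize the balanced matrix. This is a toy illustration under our own assumptions — the function names, iteration scheme, and scale update are invented here and are not the actual SINQ implementation or its API.

```python
import numpy as np

def dual_scale_quantize(W, bits=4, iters=10):
    """Toy sketch of Sinkhorn-style dual scaling (NOT the real SINQ code):
    alternately absorb the row spread and the column spread into separate
    scale vectors, then round-to-nearest quantize the balanced matrix."""
    W = W.astype(np.float64)
    r = np.ones((W.shape[0], 1))  # per-row scales
    c = np.ones((1, W.shape[1]))  # per-column scales
    eps = 1e-8                    # guard against zero spread
    for _ in range(iters):
        B = W / (r * c)
        r *= np.std(B, axis=1, keepdims=True) + eps  # absorb row spread
        B = W / (r * c)
        c *= np.std(B, axis=0, keepdims=True) + eps  # absorb column spread
    B = W / (r * c)                       # balanced matrix
    qmax = 2 ** (bits - 1) - 1            # symmetric integer grid, e.g. [-8, 7]
    s = np.abs(B).max() / qmax            # single step size for the balanced matrix
    Q = np.clip(np.round(B / s), -qmax - 1, qmax)
    return Q, s, r, c

def dequantize(Q, s, r, c):
    """Reapply the step size and both scale vectors."""
    return (Q * s) * (r * c)
```

Because the row and column scales soak up outliers before rounding, the integer grid is spent on the well-behaved bulk of the matrix instead of being stretched by a few extreme entries.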
602 stars and 251 monthly downloads. Available on PyPI.
- Stars: 602
- Forks: 50
- Language: Python
- License: Apache-2.0
- Category: —
- Last pushed: Feb 23, 2026
- Monthly downloads: 251
- Commits (30d): 0
- Dependencies: 14
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/huawei-csl/SINQ"
```

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related tools
- stackblogger/bitnet.js: BitNet.js, a Node.js implementation of Microsoft's bitnet.cpp inference framework.
- SILX-LABS/QUASAR-SUBNET: QUASAR is a long-context foundation model and decentralized evaluation subnet built on Bittensor.
- AnswerDotAI/cold-compress: Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking...
- FMInference/H2O: [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.
- m96-chan/0xBitNet: Run BitNet b1.58 ternary LLMs with WebGPU, in browsers and native apps.