huawei-csl/SINQ

Welcome to the official repository of SINQ, a novel, fast, high-quality quantization method designed to make any Large Language Model smaller while preserving accuracy.

Quality score: 66 / 100 (Established)

SINQ employs Sinkhorn-normalized dual-scaling (separate row and column scale factors) to mitigate outlier sensitivity and distribute quantization error more evenly across weight matrices, enabling high accuracy at ultra-low bit-widths (3-4 bits). It is calibration-free and model-agnostic, supports symmetric/asymmetric quantization and NF4, integrates natively into Hugging Face Transformers via `SinqConfig`, and offers pre-quantized GGUF models through a dedicated Hugging Face collection.
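The dual-scaling idea above can be illustrated with a small numpy sketch. This is not SINQ's actual implementation; it is a minimal, assumed toy version of the concept: alternately normalize row and column statistics (a Sinkhorn-style iteration) so an outlier row no longer dominates the quantization range, then quantize the normalized matrix and dequantize with both scale factors. The function names and the 4-bit scheme are illustrative choices.

```python
import numpy as np

def sinkhorn_dual_scale(W, n_iter=10, eps=1e-8):
    """Toy Sinkhorn-style dual scaling: find row scales r and column
    scales c so that W / (r * c) has balanced per-row and per-column
    spread, limiting the influence of outlier rows/columns."""
    r = np.ones((W.shape[0], 1))
    c = np.ones((1, W.shape[1]))
    for _ in range(n_iter):
        # Normalize row standard deviations.
        Wn = W / (r * c + eps)
        r = r * (Wn.std(axis=1, keepdims=True) + eps)
        # Normalize column standard deviations.
        Wn = W / (r * c + eps)
        c = c * (Wn.std(axis=0, keepdims=True) + eps)
    return W / (r * c + eps), r, c

def quantize_int4(Wn):
    # Symmetric round-to-nearest 4-bit quantization of the
    # already-normalized matrix (levels in [-8, 7]).
    scale = np.abs(Wn).max() / 7.0
    q = np.clip(np.round(Wn / scale), -8, 7)
    return q, scale

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W[3, :] *= 50.0  # inject an outlier row that would wreck a single global scale

Wn, r, c = sinkhorn_dual_scale(W)
q, scale = quantize_int4(Wn)
W_hat = (q * scale) * (r * c)  # dequantize using both scale factors
err = np.abs(W - W_hat).mean()
```

Because the outlier row's magnitude is absorbed into its row scale, the normalized matrix quantizes with a far smaller reconstruction error than naive single-scale 4-bit rounding of `W` would give.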

602 stars and 251 monthly downloads. Available on PyPI.

Maintenance 10 / 25
Adoption 16 / 25
Maturity 24 / 25
Community 16 / 25


Stars: 602
Forks: 50
Language: Python
License: Apache-2.0
Last pushed: Feb 23, 2026
Monthly downloads: 251
Commits (30d): 0
Dependencies: 14

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/huawei-csl/SINQ"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.