Artessay/ArtQuantization
ArtQuantization is a toolkit for quantizing Large Language Models, focused on reducing memory usage while preserving performance. The repository provides experimental results from quantizing models such as Qwen2.5 with algorithms like AWQ and GPTQ, and documents the memory requirements under various graphics-card configurations.
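To make the memory claim concrete, here is a minimal sketch of what weight-only quantization does at its core. This is not the repository's code: AWQ and GPTQ are more sophisticated (activation-aware scaling and Hessian-based error compensation, respectively), but both ultimately store low-bit integer codes plus per-group scales, which is where the memory savings come from.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Symmetric round-to-nearest int4 quantization, one scale per row."""
    # 4-bit signed range is [-8, 7]; scale maps the row's abs-max onto 7.
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    codes = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return codes, scales

def dequantize(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return codes.astype(np.float32) * scales

def weight_memory_gib(n_params: float, bits: int) -> float:
    """Rough weight-storage footprint in GiB, ignoring scales and activations."""
    return n_params * bits / 8 / 1024**3

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
codes, scales = quantize_int4(w)
err = np.abs(dequantize(codes, scales) - w).max()

print(f"max abs reconstruction error: {err:.3f}")
# A 7B-parameter model needs ~13 GiB of weights at fp16 but ~3.3 GiB at
# int4, which is why 4-bit checkpoints fit on a single consumer GPU.
print(f"7B weights, fp16: {weight_memory_gib(7e9, 16):.1f} GiB")
print(f"7B weights, int4: {weight_memory_gib(7e9, 4):.1f} GiB")
```

The per-row rounding error is bounded by half the scale, so rows with large outliers quantize poorly; techniques like AWQ address exactly that by rescaling salient channels before rounding.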
Stars: 1
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Oct 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Artessay/ArtQuantization"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
huawei-csl/SINQ
Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method...
stackblogger/bitnet.js
BitNet.Js - A Node.js implementation of the Microsoft bitnet.cpp inference framework.
SILX-LABS/QUASAR-SUBNET
QUASAR is a long-context foundation model and decentralized evaluation subnet built on Bittensor,
AnswerDotAI/cold-compress
Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking...
FMInference/H2O
[NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.