GPUforLLM/llm-vram-calculator

Accurate VRAM calculator for local LLMs (Llama 4, DeepSeek V3, Qwen 2.5). Accounts for GGUF quantization, GQA context overhead, and offloading limits.
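The listing does not show the calculator's formula, but a VRAM estimate of this kind typically sums the quantized weight footprint and a GQA-aware KV cache. A minimal sketch of that arithmetic; the function name, bits-per-weight figure, and all model dimensions below are illustrative, not taken from the repository:

    # Sketch of the usual VRAM arithmetic for a quantized LLM with GQA.
    # All values here are illustrative, not read from llm-vram-calculator.

    def estimate_vram_gb(
        n_params_b: float,       # model size in billions of parameters
        bits_per_weight: float,  # e.g. ~4.5 for a Q4_K_M GGUF quant (approx.)
        n_layers: int,
        n_kv_heads: int,         # GQA: fewer KV heads than attention heads
        head_dim: int,
        context_len: int,
        kv_bytes: int = 2,       # fp16 KV cache
    ) -> float:
        weights = n_params_b * 1e9 * bits_per_weight / 8
        # KV cache: K and V tensors per layer, n_kv_heads * head_dim per token
        kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes
        return (weights + kv_cache) / 1024**3

    # Illustrative example: an 8B model with 8 KV heads at 8k context
    print(f"{estimate_vram_gb(8, 4.5, 32, 8, 128, 8192):.1f} GiB")

Because GQA shares KV heads across query heads, the KV-cache term grows with the (smaller) KV head count rather than the full attention head count, which is why long-context overhead depends on the architecture and not just the parameter count.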

Overall score: 17 / 100 (Experimental)
No package published; no dependents
Maintenance: 6 / 25
Adoption: 2 / 25
Maturity: 9 / 25
Community: 0 / 25

Stars: 2
Forks:
Language: HTML
License: MIT
Last pushed: Nov 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/GPUforLLM/llm-vram-calculator"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
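A minimal sketch of calling the same endpoint from Python, assuming it returns JSON; the response schema is not documented here, so the example simply prints whatever comes back:

    # Fetch the quality data for this repository (no key needed within the
    # free daily limit). The response fields are not documented on this page,
    # so the JSON is printed as-is.
    import requests

    url = ("https://pt-edge.onrender.com/api/v1/quality/"
           "transformers/GPUforLLM/llm-vram-calculator")
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    print(resp.json())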