G-B-KEVIN-ARJUN/size-precision-slm-bench
Is it better to run a tiny model (2B-4B) at high precision (FP16/INT8), or a large model (8B+) at low precision (INT4)? This benchmark framework lets developers choose the best model for resource-constrained environments (consumer GPUs, laptops, edge devices) scientifically, by measuring the trade-off between speed and intelligence.
Stars: 1
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/G-B-KEVIN-ARJUN/size-precision-slm-bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Sinapsis-AI/sinapsis
Modular and Universal AI
eseckel/ai-for-grant-writing
A curated list of resources for using LLMs to develop more competitive grant applications.
amruthaa08/Generative_AI_LLMs
Generative AI with Large Language Models on Coursera offered by Deeplearning.AI and AWS.
panyatan/mergekit
🛠️ Merge pre-trained language models efficiently with `mergekit`, using minimal resources and...
futuroptimist/token.place
Peer-to-peer generative-AI platform that matches LLM users with volunteers donating spare compute.