marcosgarciadata/llm-performance-benchmarker
Standardized benchmarking suite for evaluating Large Language Model latency, throughput, and accuracy.
Score: 22 / 100
Experimental · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 9 / 25
Community: 0 / 25
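The four sub-scores appear to sum to the overall figure: 13 + 0 + 9 + 0 = 22 out of a possible 100 (4 × 25).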
Stars: —
Forks: —
Language: JavaScript
License: MIT
Category:
Last pushed: Mar 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/marcosgarciadata/llm-performance-benchmarker"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
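For programmatic access, a minimal TypeScript sketch that fetches the same endpoint is shown below. It assumes the endpoint returns JSON; the response fields are not documented on this page, so the body is simply logged as-is.

// Minimal sketch: fetch this repository's quality data from the public API.
// Assumption: the endpoint returns JSON; response field names are not documented here.
const endpoint =
  "https://pt-edge.onrender.com/api/v1/quality/transformers/marcosgarciadata/llm-performance-benchmarker";

async function fetchQuality(): Promise<unknown> {
  const res = await fetch(endpoint); // global fetch is available in Node 18+ and modern browsers
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

fetchQuality().then((data) => console.log(data)).catch(console.error);

Without an API key, calls made this way are subject to the 100 requests/day limit noted above.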
Higher-rated alternatives
stanfordnlp/axbench (score 54): Stanford NLP Python library for benchmarking the utility of LLM interpretability methods
aidatatools/ollama-benchmark (score 53): LLM throughput benchmark via Ollama (local LLMs)
LarHope/ollama-benchmark (score 53): Ollama-based benchmark reporting detailed I/O tokens per second; Python, with a DeepSeek R1 example
qcri/LLMeBench (score 47): Benchmarking Large Language Models
THUDM/LongBench (score 45): LongBench v2 and LongBench (ACL '25 & '24)