dippatel1994/Large-Language-Models-Evaluation-Benchmarks-Collection
This repository contains a list of benchmarks used by major organizations to evaluate their LLMs.
Overall score: 19 / 100 (Experimental)
Stale: no commits in the last 6 months.
No package; no dependents.
Maintenance: 0 / 25
Adoption: 3 / 25
Maturity: 16 / 25
Community: 0 / 25
Stars: 4
Forks: —
Language: —
License: MIT
Category: —
Last pushed: Feb 26, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dippatel1994/Large-Language-Models-Evaluation-Benchmarks-Collection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
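For programmatic use, here is a minimal Python sketch of the same request. It assumes only that the endpoint shown above returns JSON; the use of the requests library and the pretty-printing of the full payload are illustrative choices, since the response fields are not documented on this page.

# Minimal sketch: fetch the quality report and print the JSON payload.
import json
import requests

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"
REPO = "dippatel1994/Large-Language-Models-Evaluation-Benchmarks-Collection"

def fetch_quality(repo: str) -> dict:
    """Request the quality report for a repository and return the parsed JSON."""
    resp = requests.get(f"{BASE}/{repo}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    report = fetch_quality(REPO)
    # Print the whole payload rather than guessing at undocumented field names.
    print(json.dumps(report, indent=2))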
Higher-rated alternatives
stanfordnlp/axbench (score 54): Stanford NLP Python library for benchmarking the utility of LLM interpretability methods
aidatatools/ollama-benchmark (score 53): LLM benchmark for throughput via Ollama (local LLMs)
LarHope/ollama-benchmark (score 53): Ollama-based benchmark with detailed I/O tokens-per-second reporting, written in Python, with a DeepSeek R1 example
qcri/LLMeBench (score 47): Benchmarking Large Language Models
THUDM/LongBench (score 45): LongBench v2 and LongBench (ACL '25 & '24)