RedHatResearch/conext24-NetConfEval
Benchmark for evaluating LLMs in network configuration problems.
Overall score: 33 / 100 (Emerging)
Stale (6m): no commits in the last 6 months.
No package published; no dependents.
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 9 / 25
Community: 17 / 25
Stars: 34
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Mar 30, 2025
Commits (30d): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RedHatResearch/conext24-NetConfEval"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
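For programmatic use, here is a minimal Python sketch of calling that endpoint. Only the URL comes from this page; the use of the requests library, the Authorization header for keyed access, the assumption that the endpoint returns JSON, and the field names in the final print are all illustrative assumptions, since the response schema and authentication scheme are not documented here.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/RedHatResearch/conext24-NetConfEval")

def fetch_quality(api_key=None):
    """Fetch the quality report as a dict; pass a key for the higher rate limit."""
    headers = {}
    if api_key is not None:
        # Hypothetical header; check the service docs for the real key mechanism.
        headers["Authorization"] = "Bearer " + api_key
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx errors, e.g. hitting the rate limit
    return resp.json()       # assumes the endpoint returns JSON

if __name__ == "__main__":
    data = fetch_quality()
    # Field names below are guesses at the response shape, not documented keys.
    print(data.get("score"), data.get("maintenance"), data.get("community"))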
Higher-rated alternatives
stanfordnlp/axbench (score 50): Stanford NLP Python library for benchmarking the utility of LLM interpretability methods.
LarHope/ollama-benchmark (score 46): Ollama-based benchmark with detailed I/O tokens-per-second reporting; Python, with a DeepSeek R1 example.
aidatatools/ollama-benchmark (score 46): LLM throughput benchmark via Ollama (local LLMs).
qcri/LLMeBench (score 43): Benchmarking large language models.
microsoft/LLF-Bench (score 38): A benchmark for evaluating learning agents based on just language feedback.