yonahgraphics/openevalkit
Production-grade Python framework for evaluating LLM and agentic systems with traditional scorers, LLM judges (OpenAI, Anthropic, Ollama, 100+ models via LiteLLM), ensemble aggregation, and smart caching for cost-effective testing.
Available on PyPI.
Stars: 3
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 02, 2026
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/yonahgraphics/openevalkit"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
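The curl call above can also be made from Python. A minimal sketch using only the standard library; the endpoint URL comes from this page, but the shape of the JSON response is an assumption, not documented here:

```python
# Sketch: fetch repo quality data from the pt-edge API.
# The endpoint is taken from the curl example above; the JSON
# field names in the response are NOT documented on this page.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Example: the repository described on this page.
    print(quality_url("yonahgraphics", "openevalkit"))
```

Swapping in a different `owner`/`repo` pair queries any other listed repository, such as the alternatives below.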
Higher-rated alternatives
radlab-dev-group/llm-router
LLM Router is a service that can be deployed on‑premises or in the cloud. It adds a layer...
Aryan-202/cookbooks
An intelligent optimization engine that dynamically adjusts LLM selection, context size, and...
squishai/squish
🤖🗜️⚡️ Compress local LLMs once, run them forever at sub-second load times. OpenAI + Ollama...
wesleyscholl/squish
🤖🗜️⚡️ Compress local LLMs once, run them forever at sub-second load times. OpenAI + Ollama...
Yu-amd/Multiverse
Lightweight model inference playground