QinnniQ/infernomics
LLM inference cost–latency–quality optimization dashboard with RAG + LLM-as-judge evaluation
Overall score: 11 / 100
Status: Experimental · No License · No Package · No Dependents

Score breakdown (each category out of 25):
Maintenance: 10 / 25
Adoption: 0 / 25
Maturity: 1 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 22, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/QinnniQ/infernomics"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
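The curl call above can also be issued from Python. A minimal sketch follows; the endpoint path is taken from the example, but the `X-API-Key` header name used to pass a key is an assumption, since the page does not document how a key is supplied.

```python
import json
import urllib.request
from typing import Optional

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL seen in the curl example."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch the quality report as JSON.

    The X-API-Key header name is hypothetical; check the service's key
    documentation before relying on it.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Without a key this stays within the 100 requests/day anonymous quota; passing `api_key` is only useful once you have registered for the 1,000/day tier.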
Higher-rated alternatives:

modelscope/evalscope (score 90): A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation...

Kareem-Rashed/rubric-eval (score 52): Independent framework to test, benchmark, and evaluate LLMs & AI agents locally.

izam-mohammed/ragrank (score 52): 🎯 Your free LLM evaluation toolkit helps you assess the accuracy of facts, how well it...

justplus/llm-eval (score 45): An LLM evaluation platform supporting multiple benchmarks, custom datasets, and performance testing, including RAG evaluation based on custom datasets.

relari-ai/continuous-eval (score 41): Data-Driven Evaluation for LLM-Powered Applications