mit-ll-ai-technology/llm-sandbox
Large language model evaluation framework for logic and open-ended Q&A, supporting a variety of RAG and other contextual information sources.
No commits in the last 6 months.
Stars
1
Forks
4
Language
Jupyter Notebook
License
MIT
Last pushed
Oct 07, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/mit-ll-ai-technology/llm-sandbox"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
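The same endpoint can be queried from Python instead of curl. Only the URL pattern comes from the snippet above; the response schema is not documented here, so this sketch simply decodes whatever JSON the server returns:

```python
"""Minimal client sketch for the pt-edge quality API shown above.

Assumptions: the endpoint returns a JSON body; field names in that
body are unknown, so nothing beyond decoding is attempted here.
"""
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def api_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL from the documented pattern."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body.

    No API key is required for up to 100 requests/day; a free key
    raises the limit to 1,000/day (per the note above).
    """
    with urllib.request.urlopen(api_url(owner, repo)) as resp:
        return json.load(resp)
```

Calling `fetch_quality("mit-ll-ai-technology", "llm-sandbox")` performs the same request as the curl command above and returns the parsed response as a dict.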
Higher-rated alternatives
amazon-science/auto-rag-eval
Code repo for the ICML 2024 paper "Automated Evaluation of Retrieval-Augmented Language Models...
ibm-self-serve-assets/JudgeIt-LLM-as-a-Judge
Automation framework using LLM-as-a-judge to evaluate Agentic AI, RAG, and Text2SQL at scale;...
explore-de/rage4j
Evaluate your LLM based Java Apps
nl4opt/ORQA
[AAAI 2025] ORQA is a new QA benchmark designed to assess the reasoning capabilities of LLMs in...