mit-ll-ai-technology/llm-sandbox
Large language model evaluation framework for logic and open-ended Q&A, with a variety of RAG and other contextual information sources.
No commits in the last 6 months.
Stars: 1
Forks: 4
Language: Jupyter Notebook
License: MIT
Category: llm-tools
Last pushed: Oct 07, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mit-ll-ai-technology/llm-sandbox"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
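
If you prefer to call the endpoint from code, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON; the response schema is not documented here, so the example simply prints whatever comes back rather than assuming any field names.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/mit-ll-ai-technology/llm-sandbox")

# Fetch the quality record for this repository (no API key needed
# at the free tier of 100 requests/day).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Pretty-print the raw response to inspect the available fields.
print(json.dumps(data, indent=2))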
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
evalplus/evalplus
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents
EuroEval/EuroEval
The robust European language model benchmark.