Amir-Mohseni/AI-Response-Evaluation
A comprehensive framework to evaluate the quality of AI-generated responses, comparing different models (GPT and Gemini) based on relevance, completeness, and helpfulness using predefined prompts and automated scoring.
No commits in the last 6 months.
Stars: 3
Forks: 1
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Jun 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Amir-Mohseni/AI-Response-Evaluation"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
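For scripted access, the same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the endpoint returns JSON (the curl example suggests it does); the payload's field names are not documented here, so the sketch simply pretty-prints whatever comes back:

import json
import urllib.request

# Endpoint taken from the curl example above; no key is needed
# within the 100 requests/day tier.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
    "Amir-Mohseni/AI-Response-Evaluation"
)

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumption: response body is JSON

# Inspect the actual structure before relying on specific fields.
print(json.dumps(data, indent=2))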
Higher-rated alternatives
open-compass/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral,...
IBM/unitxt
🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the...
lean-dojo/LeanDojo
Tool for extracting data from and interacting with Lean programmatically.
GoodStartLabs/AI_Diplomacy
Frontier Models playing the board game Diplomacy.
MigoXLab/LMeterX
A general-purpose API load testing platform that supports LLM services and business HTTP...