rahatmoktadir03/llm-evaluation-platform
A full-stack web application for comparing and analyzing the performance of large language models (LLMs). Features include side-by-side prompt evaluation, performance metrics visualization, and an analytics dashboard. Built with React, Tailwind CSS, Node.js, and MongoDB.
No commits in the last 6 months.
Stars: 1
Forks: —
Language: TypeScript
License: MIT
Category:
Last pushed: Jan 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/rahatmoktadir03/llm-evaluation-platform"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
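For programmatic use, here is a minimal TypeScript sketch of the same request. The endpoint URL is taken from the curl example above; the response shape and the Authorization header name for keyed access are assumptions, since neither is documented on this page.

// Fetch quality data for this repo from the public API.
// Works in Node 18+ or any browser with the global fetch API.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/rahatmoktadir03/llm-evaluation-platform";

async function fetchQuality(apiKey?: string): Promise<unknown> {
  const res = await fetch(url, {
    // Hypothetical header name: the page only says a free key raises the
    // rate limit, not how the key is sent.
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  // Assumed to be a JSON body containing the stats shown above.
  return res.json();
}

fetchQuality().then((data) => console.log(data));

Without a key this stays within the 100 requests/day anonymous limit; pass a key to fetchQuality once you have one.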
Higher-rated alternatives
- langfuse/langfuse: 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management,...
- Arize-ai/phoenix: AI Observability & Evaluation
- Mirascope/mirascope: The LLM Anti-Framework
- Helicone/helicone: 🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
- Agenta-AI/agenta: The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM...