ysskrishna/llm-text-evaluation-framework
Production-ready Streamlit app for LLM response evaluation & benchmarking, scoring outputs across Relevance, Accuracy, Completeness, Coherence, Creativity, Tone, and Intent Alignment. Includes interactive analytics, history tracking, and Docker deployment.
No commits in the last 6 months.
Stars: —
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Aug 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ysskrishna/llm-text-evaluation-framework"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
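The same endpoint can be called from Python instead of curl. A minimal standard-library sketch is below; the response schema is not documented on this page, so the result is returned as a raw dict, and the `quality_url` helper is an illustrative name, not part of the API.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record for a repository and parse it as JSON.

    No API key is attached here; the free tier (100 requests/day)
    works without one.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    url = quality_url("ysskrishna", "llm-text-evaluation-framework")
    print(url)
```

Running the script prints the same URL used in the curl example above.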
Higher-rated alternatives
sidphbot/Auto-Research
Generate custom detailed survey paper with topic clustered sections and proper citations, from...
neuml/txtai.py
Python client for txtai
pvhuwung/AIRST-research-paper-summarization
This AI tool app built on Streamlit library provides a powerful and user-friendly tool through...
vk22006/ai-research-synthesis-visualization
AI-powered system that analyzes scientific papers, generates automated literature reviews,...
PhongKiemThu/deep-reading-analyst-skill
📚 Elevate your reading with the Deep Reading Analyst skill, using 10+ frameworks for systematic...