alexandrughinea/lm-tiny-prompt-evaluation-framework
This project provides a tiny framework for testing different prompt versions against various AI models. It includes tools for evaluating the performance of prompt-model combinations, correlating results, and visualizing the analysis.
No commits in the last 6 months.
Stars: 1
Forks: —
Language: JavaScript
License: —
Category: —
Last pushed: Sep 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/alexandrughinea/lm-tiny-prompt-evaluation-framework"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
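For programmatic access, here is a minimal TypeScript sketch of calling the same endpoint, assuming the global fetch available in Node 18+; the response schema is not documented on this page, so the JSON is logged as-is.

// Fetch repo quality data from the pt-edge API (unauthenticated, 100 requests/day).
// The exact response schema is not documented here, so the JSON is printed verbatim.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/alexandrughinea/lm-tiny-prompt-evaluation-framework";

async function fetchRepoData(): Promise<unknown> {
  const res = await fetch(url);
  if (!res.ok) {
    // A 429 status would indicate the daily rate limit was hit.
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

fetchRepoData()
  .then((data) => console.log(JSON.stringify(data, null, 2)))
  .catch((err) => console.error(err));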
Higher-rated alternatives
ExpertiseModel/MuTAP
MuTAP: a prompt-based learning technique to automatically generate test cases with Large Language Models
INPVLSA/probefish
A web-based LLM prompt and endpoint testing platform. Organize, version, test, and validate...
thabit-ai/thabit
Thabit is a platform to evaluate prompts on multiple LLMs to determine the best one for your data
nicolay-r/llm-prompt-checking
Toolset for checking differences in recognising semantic relation presence by: (1) large...