psunlpgroup/ReaLMistake
This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses".
No commits in the last 6 months.
Stars: 31
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Aug 18, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/psunlpgroup/ReaLMistake"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
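The same request can be made from Python instead of curl. This is a minimal sketch: the endpoint path is taken from the curl example above, but the helper names (`quality_url`, `fetch_quality`) and the assumption that the API returns JSON are ours, not documented by the service.

```python
# Sketch of querying the quality endpoint from Python.
# Assumptions: the URL shape from the curl example above, and a JSON response body.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL (shape taken from the curl example)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the payload; assumes the API responds with JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(fetch_quality("nlp", "psunlpgroup", "ReaLMistake"))
```

Unauthenticated calls are limited to 100 requests/day, so any batch use of `fetch_quality` should cache responses or supply a key.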
Higher-rated alternatives
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerabilities descriptions from @Vulnerability-Lookup.
DataScienceUIBK/HintEval
HintEval: A Comprehensive Framework for Hint Generation and Evaluation for Questions
microsoft/LMChallenge
A library & tools to evaluate predictive language models.