cloudguruab/modsysML
Reinforcement learning from human feedback (RLHF) framework for AI models. Evaluate and compare LLM outputs, test quality, catch regressions, and automate.
Stars: 36
Forks: 5
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cloudguruab/modsysML"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
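The same request can be made programmatically. A minimal sketch in Python, using only the standard library and assuming the endpoint returns JSON (the `quality_url` and `fetch_quality` helper names are illustrative, not part of the API):

```python
# Minimal sketch: query the public quality endpoint shown in the curl
# example above (no key needed, subject to the 100 requests/day limit).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the repo's quality data, assuming a JSON response body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example: fetch_quality("cloudguruab", "modsysML")
```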
Higher-rated alternatives
allenai/RL4LMs
A modular RL library to fine-tune language models to human preferences
modal-labs/stopwatch
A tool for benchmarking LLMs on Modal
Mya-Mya/CBF-LLM
"CBF-LLM: Safe Control for LLM Alignment"
Adora-Foundation/llm-energy-lab
Web application for benchmarking and comparing LLM behaviour, energy and emissions on cloud and...
mrconter1/PullRequestBenchmark
Evaluating LLMs' performance in PR reviews as an indicator of their capability in creating PRs.