DanielSc4/RewardLM
Reward a Language Model with pancakes 🥞
This project helps machine learning engineers and researchers refine the behavior of large language models. It takes a pre-trained generative language model and task-specific datasets as input and supports fine-tuning or reinforcement learning to steer the model toward desired outputs. It can also score the toxicity of the model's generated responses, providing metrics for understanding and improving safety.
No commits in the last 6 months.
Use this if you need to adapt a generative language model to perform specific tasks or adhere to certain content guidelines, without extensive human feedback loops, and want to measure its output toxicity.
Not ideal if you are looking for a pre-packaged, ready-to-deploy solution for end-users, or if you don't have experience with language model training workflows.
Stars: 12
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Sep 28, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DanielSc4/RewardLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
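The same lookup can be scripted instead of typed as a curl command. A minimal Python sketch, assuming only the endpoint shown above (the helper names and the JSON response shape are illustrative, not part of the documented API):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    # Build the endpoint URL for a given ecosystem and an owner/name repo slug.
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    # Anonymous access is rate-limited (100 requests/day per the note above);
    # the response is assumed to be a JSON object.
    with urllib.request.urlopen(quality_url(ecosystem, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("transformers", "DanielSc4/RewardLM"))
```

With an API key (1,000 requests/day), the request would presumably carry it as a header or query parameter; the listing above does not specify which, so that detail is left out here.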
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.