OpenRL-Lab/openrl
Unified Reinforcement Learning Framework
Supports single-agent, multi-agent, offline RL, self-play, and natural language tasks through a unified PyTorch-based interface with automatic environment abstraction. Integrates with Gymnasium, PettingZoo, DeepSpeed, and Hugging Face for model/dataset loading, while providing implementations of PPO, MAPPO, SAC, GAIL, and other algorithms across diverse environments from MuJoCo to StarCraft II. Includes training acceleration via mixed precision, callback hooks for custom logging/early-stopping, and Arena for competitive agent evaluation.
822 stars and 138 monthly downloads. No commits in the last 6 months. Available on PyPI.
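For context, here is a minimal training sketch in the style of OpenRL's documented quickstart; module paths such as openrl.envs.common.make and openrl.runners.common.PPOAgent follow the project's published examples and may vary between versions:

from openrl.envs.common import make
from openrl.modules.common import PPONet as Net
from openrl.runners.common import PPOAgent as Agent

# Create 9 parallel CartPole environments, wrap them with a PPO network,
# and train the agent for a fixed number of environment steps.
env = make("CartPole-v1", env_num=9)
net = Net(env)
agent = Agent(net)
agent.train(total_time_steps=20000)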
Stars: 822
Forks: 80
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 06, 2024
Monthly downloads: 138
Commits (30d): 0
Dependencies: 17
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenRL-Lab/openrl"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
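A minimal Python sketch for the same endpoint, assuming it returns JSON; the response fields are not shown here because the schema is not documented in this listing:

import requests

# Query the quality endpoint for this repository
# (no API key required up to 100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenRL-Lab/openrl"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # exact fields depend on the API's schema
print(data)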
Related tools
hud-evals/hud-python
OSS RL environment + evals toolkit
hiyouga/EasyR1
EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL
opendilab/awesome-RLHF
A curated list of reinforcement learning with human feedback resources (continually updated)
sail-sg/oat
🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning,...
NVlabs/GDPO
Official implementation of GDPO: Group reward-Decoupled Normalization Policy Optimization for...