OpenRL-Lab/openrl

Unified Reinforcement Learning Framework

Score: 51 / 100 (Established)

Supports single-agent, multi-agent, offline RL, self-play, and natural language tasks through a unified PyTorch-based interface with automatic environment abstraction. Integrates with Gymnasium, PettingZoo, DeepSpeed, and Hugging Face for model/dataset loading, while providing implementations of PPO, MAPPO, SAC, GAIL, and other algorithms across diverse environments from MuJoCo to StarCraft II. Includes training acceleration via mixed precision, callback hooks for custom logging/early-stopping, and Arena for competitive agent evaluation.

822 stars and 138 monthly downloads. No commits in the last 6 months. Available on PyPI.

Stale: 6 months
Maintenance 0 / 25
Adoption 15 / 25
Maturity 18 / 25
Community 18 / 25
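The overall score appears to be the simple sum of the four subscores, each out of 25 (this is an inference from the numbers shown, not a documented formula):

```python
# Subscores from the scorecard above (each out of 25)
subscores = {"Maintenance": 0, "Adoption": 15, "Maturity": 18, "Community": 18}

total = sum(subscores.values())
print(total)  # 51, matching the 51 / 100 overall score
```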


Stars: 822
Forks: 80
Language: Python
License: Apache-2.0
Last pushed: Sep 06, 2024
Monthly downloads: 138
Commits (30d): 0
Dependencies: 17

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenRL-Lab/openrl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.