stable-baselines3 and stable-baselines3-contrib

The contrib package extends the main library with experimental RL algorithms and features; the two are complementary packages designed to be used together, not alternatives.

Metric          stable-baselines3    stable-baselines3-contrib
Overall score   76 (Verified)        64 (Established)
Maintenance     13/25                13/25
Adoption        15/25                10/25
Maturity        25/25                16/25
Community       23/25                25/25
Stars           12,878               693
Forks           2,081                232
Downloads
Commits (30d)   3                    5
Language        Python               Python
License         MIT                  MIT
No risk flags

About stable-baselines3

DLR-RM/stable-baselines3

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

Implements canonical on-policy (PPO, A2C) and off-policy (DQN, SAC, TD3) algorithms with a unified sklearn-like API, supporting Dict observation spaces and custom policies via modular network architectures. Integrates with Gymnasium for environment interaction, TensorBoard for experiment tracking, and Weights & Biases/Hugging Face for model management and sharing. Includes companion tools like RL Zoo for hyperparameter tuning and SB3-Contrib for experimental features (masked action support, recurrent policies).

About stable-baselines3-contrib

Stable-Baselines-Team/stable-baselines3-contrib

Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code

Scores updated daily from GitHub, PyPI, and npm data.