dreamerv2 and dreamerv3

DreamerV3 is the successor to DreamerV2, extending its discrete world models beyond Atari to diverse continuous control domains. For new projects, V3 largely supersedes V2, though both remain available implementations of the same algorithmic lineage.

                 dreamerv2                  dreamerv3
Score            60 (Established)           60 (Established)
Maintenance      0/25                       2/25
Adoption         10/25                      10/25
Maturity         25/25                      25/25
Community        25/25                      23/25
Stars            1,012                      2,917
Forks            210                        484
Downloads        —                          —
Commits (30d)    0                          0
Language         Python                     Python
License          MIT                        MIT
Flags            Stale 6m, No Dependents    Stale 6m, No Dependents

About dreamerv2

danijar/dreamerv2

Mastering Atari with Discrete World Models

This project helps reinforcement learning researchers and practitioners train agents that can master complex tasks, particularly in simulated environments like Atari games or robotic control. You provide the environment's visual observations, and it outputs a highly skilled agent capable of achieving human-level or better performance. It's designed for those developing or evaluating advanced AI agents.

reinforcement-learning game-AI robotics-simulation AI-research agent-training
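The workflow the description sketches (collect experience in the environment, fit a world model to it, then choose actions by rolling the model forward in "imagination") can be illustrated with a toy tabular stand-in. This is not DreamerV2's actual implementation, which learns discrete latents with neural networks and trains an actor-critic on imagined trajectories; every name below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain environment standing in for pixel observations: states 0..4,
# action 0 steps left, action 1 steps right, reward only in state 4.
N_STATES, N_ACTIONS, HORIZON = 5, 2, 10

def env_step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == N_STATES - 1)

# "World model": transition counts and running-mean reward per (state, action).
trans = np.zeros((N_STATES, N_ACTIONS, N_STATES))
rew = np.zeros((N_STATES, N_ACTIONS))

# 1) Collect real experience with a random policy and fit the model.
for _ in range(500):
    s = 0
    for _ in range(HORIZON):
        a = int(rng.integers(N_ACTIONS))
        s2, r = env_step(s, a)
        trans[s, a, s2] += 1
        rew[s, a] += (r - rew[s, a]) / trans[s, a].sum()  # running mean
        s = s2

# 2) Evaluate actions purely by imagined rollouts inside the model.
def imagine_return(s, a, depth=5, discount=0.9):
    if depth == 0 or trans[s, a].sum() == 0:
        return 0.0
    s2 = int(np.argmax(trans[s, a]))  # most likely imagined next state
    best = max(imagine_return(s2, a2, depth - 1, discount)
               for a2 in range(N_ACTIONS))
    return rew[s, a] + discount * best

# 3) Act greedily on imagined returns in the real environment.
s, total = 0, 0.0
for _ in range(HORIZON):
    a = int(np.argmax([imagine_return(s, a2) for a2 in range(N_ACTIONS)]))
    s, r = env_step(s, a)
    total += r
```

DreamerV2 itself replaces these count tables with a recurrent state-space model over categorical latents, but the division of labor is the same: the environment is only used to gather data, while policy improvement happens inside the learned model.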

About dreamerv3

danijar/dreamerv3

Mastering Diverse Domains through World Models

This project offers a reinforcement learning algorithm that helps train AI agents to master a wide array of complex control tasks, from playing games to robot navigation. You provide data from various simulated or real-world interactions, and the system outputs a highly optimized policy for the agent's behavior. This is ideal for AI researchers and engineers working on autonomous systems or generalized AI.

reinforcement-learning robotics game-AI autonomous-systems AI-research
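One documented ingredient behind that cross-domain robustness is DreamerV3's symlog/symexp transform, which squashes prediction targets of wildly different magnitudes onto a comparable scale so a single set of hyperparameters can serve all domains. A minimal sketch:

```python
import numpy as np

def symlog(x):
    """Symmetric log squashing DreamerV3 applies to prediction targets."""
    return np.sign(x) * np.log1p(np.abs(x))

def symexp(x):
    """Exact inverse of symlog, used to decode predictions back."""
    return np.sign(x) * np.expm1(np.abs(x))

# Rewards spanning six orders of magnitude land in a narrow band...
targets = np.array([-1000.0, -1.0, 0.0, 0.001, 1.0, 1000.0])
squashed = symlog(targets)      # roughly within [-6.9, 6.9]
recovered = symexp(squashed)    # ...and decode back to the original scale.
```

Because the transform is invertible and near-linear around zero, small rewards keep their resolution while huge ones no longer dominate the loss.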

Scores updated daily from GitHub, PyPI, and npm data.