alpha-zero-general and alphazero-general

The projects are competitors: alpha-zero-general is a well-established, feature-rich implementation of AlphaZero with extensive tutorials, while alphazero-general is a more recent, less popular, but potentially faster PyTorch-based alternative.

                    alpha-zero-general    alphazero-general
Overall score       51 (Established)      46 (Emerging)
Maintenance         0/25                  0/25
Adoption            10/25                 9/25
Maturity            16/25                 16/25
Community           25/25                 21/25
Stars               4,388                 87
Forks               1,147                 34
Downloads
Commits (30d)       0                     0
Language            Jupyter Notebook      Python
License             MIT                   MIT
Flags (both): Stale 6m, No Package, No Dependents

About alpha-zero-general

suragnair/alpha-zero-general

A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more

Implements self-play reinforcement learning via Monte Carlo Tree Search (MCTS) combined with neural network training in a modular architecture where games and frameworks are pluggable through subclassing `Game.py` and `NeuralNet.py`. The core training loop (`Coach.py`) alternates between self-play episodes guided by MCTS and neural network optimization, supporting PyTorch and Keras backends with configurable hyperparameters for simulation depth, batch size, and learning rates. Includes pretrained models and enables direct evaluation against baseline opponents through the pit interface.
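The pluggable-Game pattern described above can be sketched as follows. The method names (`getInitBoard`, `getNextState`, `getValidMoves`, `getGameEnded`) follow the subclassing interface the project documents for `Game.py`, but the stand-in base class, the toy `NimGame`, and the `play_episode` helper are illustrative assumptions, not code from the repository:

```python
import numpy as np

# Stand-in base class mirroring the repo's subclassing pattern. The exact
# method names are assumptions based on the documented Game.py interface.
class Game:
    def getInitBoard(self): raise NotImplementedError
    def getActionSize(self): raise NotImplementedError
    def getNextState(self, board, player, action): raise NotImplementedError
    def getValidMoves(self, board, player): raise NotImplementedError
    def getGameEnded(self, board, player): raise NotImplementedError


class NimGame(Game):
    """Toy Nim variant: 7 sticks; each turn a player removes 1-3 sticks,
    and whoever takes the last stick wins."""

    def getInitBoard(self):
        return np.array([7])

    def getActionSize(self):
        return 3  # action k removes k + 1 sticks

    def getNextState(self, board, player, action):
        # Apply the move and hand the turn to the other player (+1 / -1).
        return board - (action + 1), -player

    def getValidMoves(self, board, player):
        # A move is valid only if enough sticks remain to take.
        return np.array([1 if board[0] >= k + 1 else 0 for k in range(3)])

    def getGameEnded(self, board, player):
        # 0 while the game is running; -1 means the player to move has lost
        # (the opponent took the last stick).
        return -1 if board[0] == 0 else 0


def play_episode(game):
    """Shape of one self-play episode. The real Coach.py replaces the
    random policy below with MCTS-guided action probabilities."""
    board, player = game.getInitBoard(), 1
    history = []
    while game.getGameEnded(board, player) == 0:
        valids = game.getValidMoves(board, player)
        action = int(np.random.choice(np.flatnonzero(valids)))
        history.append((board.copy(), player, action))
        board, player = game.getNextState(board, player, action)
    return history, game.getGameEnded(board, player)
```

Because the training loop only touches the `Game` interface, swapping in a new game is a matter of implementing these few methods; `history, result = play_episode(NimGame())` runs one complete episode.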

About alphazero-general

kevaday/alphazero-general

A fast, generalized, and modified implementation of DeepMind's distinguished AlphaZero in PyTorch.

Scores are updated daily from GitHub, PyPI, and npm data.