alpha-zero-general and alphazero-general
The two projects are alternatives: alpha-zero-general is a well-established, feature-rich implementation of AlphaZero with extensive tutorials, while alphazero-general is a newer, less popular, but potentially faster PyTorch-based alternative.
About alpha-zero-general
suragnair/alpha-zero-general
A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
Implements self-play reinforcement learning via Monte Carlo Tree Search (MCTS) combined with neural network training. The architecture is modular: games and frameworks are pluggable by subclassing `Game.py` and `NeuralNet.py`. The core training loop (`Coach.py`) alternates between MCTS-guided self-play episodes and neural network optimization, supports PyTorch and Keras backends, and exposes hyperparameters for simulation count, batch size, and learning rate. Pretrained models are included, and the pit interface enables direct evaluation against baseline opponents.
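As a minimal sketch of the pluggable-game idea, the class below implements a few tic-tac-toe methods in the style of the repo's `Game.py` interface. The method names (`getInitBoard`, `getNextState`, `getValidMoves`, and so on) are reproduced from memory of the project and should be treated as assumptions, not an authoritative copy of the API.

```python
import numpy as np

# Hypothetical sketch of plugging a new game into the framework;
# method names mirror alpha-zero-general's Game.py interface as an
# assumption, not a verified copy of the real signatures.
class TicTacToeGame:
    def getInitBoard(self):
        # 3x3 board: 0 = empty, 1 = player one, -1 = player two
        return np.zeros((3, 3), dtype=int)

    def getBoardSize(self):
        return (3, 3)

    def getActionSize(self):
        # one action per square
        return 9

    def getNextState(self, board, player, action):
        # apply the move, then hand the turn to the other player
        b = board.copy()
        b[action // 3, action % 3] = player
        return b, -player

    def getValidMoves(self, board, player):
        # binary mask over actions: 1 where the square is empty
        return (board.reshape(-1) == 0).astype(int)


game = TicTacToeGame()
board = game.getInitBoard()
board, player = game.getNextState(board, 1, 4)  # player 1 takes the center
```

The training loop (`Coach.py`) only ever touches the game through such methods, which is what lets Othello, Gobang, Connect4, and other games share one self-play pipeline.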
About alphazero-general
kevaday/alphazero-general
A fast, generalized, and modified implementation of DeepMind's distinguished AlphaZero in PyTorch.