DeepLearningFlappyBird and RL-FlappyBird

|                | DeepLearningFlappyBird               | RL-FlappyBird                        |
|----------------|--------------------------------------|--------------------------------------|
| Score          | 51 (Established)                     | 45 (Emerging)                        |
| Maintenance    | 0/25                                 | 0/25                                 |
| Adoption       | 10/25                                | 9/25                                 |
| Maturity       | 16/25                                | 16/25                                |
| Community      | 25/25                                | 20/25                                |
| Stars          | 6,792                                | 82                                   |
| Forks          | 2,065                                | 27                                   |
| Downloads      |                                      |                                      |
| Commits (30d)  | 0                                    | 0                                    |
| Language       | Python                               | Java                                 |
| License        | MIT                                  | MIT                                  |
| Flags          | Stale 6m, No Package, No Dependents  | Stale 6m, No Package, No Dependents  |

About DeepLearningFlappyBird

yenchenlin/DeepLearningFlappyBird

Flappy Bird hack using Deep Reinforcement Learning (Deep Q-learning).

Implements a convolutional neural network trained with experience replay and ε-greedy exploration, processing raw 80×80×4 grayscale frame stacks as input to output Q-values for discrete actions. The architecture uses three convolutional layers with max pooling followed by a 256-unit fully connected layer, optimized via Adam on minibatches sampled from a 500k-capacity replay buffer. Built on TensorFlow 0.7 and pygame, with custom preprocessing (background removal, frame stacking) tuned specifically for Flappy Bird's fast action cadence.
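The experience-replay and ε-greedy pieces described above can be sketched in a few lines of plain Python. This is a minimal illustration, not code from the repository: the names `ReplayBuffer` and `select_action`, the batch size of 32, and the two-action space (flap / do nothing) are assumptions; only the 500k buffer capacity and the ε-greedy rule come from the description.

```python
import random
from collections import deque

REPLAY_CAPACITY = 500_000  # 500k transitions, per the description above
BATCH_SIZE = 32            # assumed minibatch size, typical for DQN
NUM_ACTIONS = 2            # Flappy Bird: flap or do nothing

class ReplayBuffer:
    """Fixed-capacity buffer; oldest transitions are evicted FIFO."""
    def __init__(self, capacity=REPLAY_CAPACITY):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, terminal):
        self.buffer.append((state, action, reward, next_state, terminal))

    def sample(self, batch_size=BATCH_SIZE):
        # Uniform random minibatch for the Q-network update
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def select_action(q_values, epsilon):
    """Epsilon-greedy: random action with probability epsilon, else argmax Q."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    return max(range(NUM_ACTIONS), key=lambda a: q_values[a])
```

In the full training loop, each step would store the latest (state, action, reward, next state, terminal) tuple in the buffer and fit the CNN's Q-values to the Bellman targets computed over a sampled minibatch, with ε annealed toward a small floor over time.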

About RL-FlappyBird

kingyuluk/RL-FlappyBird

Using reinforcement learning to train FlappyBird.

Scores updated daily from GitHub, PyPI, and npm data.