yenchenlin/DeepLearningFlappyBird
Flappy Bird hack using Deep Reinforcement Learning (Deep Q-learning).
Implements a convolutional neural network trained with experience replay and ε-greedy exploration, processing raw 80×80×4 grayscale frame stacks as input to output Q-values for discrete actions. The architecture uses three convolutional layers with max pooling followed by a 256-unit fully connected layer, optimized via Adam on minibatches sampled from a 500k-capacity replay buffer. Built on TensorFlow 0.7 and pygame, with custom preprocessing (background removal, frame stacking) tuned specifically for Flappy Bird's fast action cadence.
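The frame-stacking, experience-replay, and ε-greedy pieces described above can be sketched in plain NumPy. This is an illustrative outline, not the repo's actual code: the class names (`FrameStack`, `ReplayBuffer`) and helper `epsilon_greedy` are hypothetical, and the network itself is omitted.

```python
import random
from collections import deque

import numpy as np

ACTIONS = 2                # flap / do nothing
STACK = 4                  # frames stacked along the channel axis
REPLAY_CAPACITY = 500_000  # capacity quoted in the description above


class FrameStack:
    """Keeps the last STACK preprocessed 80x80 frames as an 80x80xSTACK state."""

    def __init__(self, first_frame):
        # Initialize by repeating the first frame across all channels.
        self.state = np.stack([first_frame] * STACK, axis=2)

    def push(self, frame):
        # Drop the oldest channel, append the newest frame.
        self.state = np.append(self.state[:, :, 1:], frame[:, :, None], axis=2)
        return self.state


class ReplayBuffer:
    """Fixed-capacity experience replay storing (s, a, r, s_next, done)."""

    def __init__(self, capacity=REPLAY_CAPACITY):
        self.buf = deque(maxlen=capacity)  # old transitions are evicted automatically

    def add(self, *transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        # Uniform minibatch sampling, as in standard DQN.
        return random.sample(self.buf, batch_size)


def epsilon_greedy(q_values, epsilon):
    """With probability epsilon act randomly, else take the argmax-Q action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))
```

In training, each emulator step would preprocess the raw frame, call `push` to form the new state, select an action with `epsilon_greedy` on the network's Q-output, store the transition in the buffer, and fit the network on a sampled minibatch.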
6,792 stars. No commits in the last 6 months.
Stars: 6,792
Forks: 2,065
Language: Python
License: MIT
Category:
Last pushed: Aug 07, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/yenchenlin/DeepLearningFlappyBird"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Related frameworks
ChenglongChen/pytorch-DRL
PyTorch implementations of various Deep Reinforcement Learning (DRL) algorithms for both single...
vladfi1/phillip
The SSBM "Phillip" AI.
nikp06/subwAI
Scripts for training an AI to play the endless runner Subway Surfers using a supervised machine...
kingyuluk/RL-FlappyBird
Using reinforcement learning to train FlappyBird.
vita-epfl/social-nce
[ICCV] Social NCE: Contrastive Learning of Socially-aware Motion Representations