Sign-Language and Sign-Language-Recognition
These projects are competitors: both implement CNN-based hand-gesture classification for sign language. They differ primarily in scope (full ASL gestures versus fingerspelling) and maturity, so a user would choose one based on the specific recognition task rather than combining them.
About Sign-Language
EvilPort2/Sign-Language
A very simple CNN project to recognize gestures made in American Sign Language
Implements a complete pipeline for capturing custom ASL gesture datasets (1,200 grayscale images per gesture at 50×50 resolution), training CNN models via TensorFlow or Keras with an MNIST-like architecture, and performing real-time inference on video streams using skin-color histogram segmentation. Includes utilities for data augmentation (vertical flipping) and model evaluation with confusion matrices and precision/recall metrics, and integrates pyttsx3 for audio feedback during gesture recognition.
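The skin-color histogram segmentation step mentioned above can be illustrated with a minimal sketch: build a 2D hue-saturation histogram from sample skin pixels, then backproject it onto an image so each pixel gets a skin-likelihood score. This is a NumPy-only illustration of the general technique (in practice the repo uses OpenCV on live video); the function names, bin counts, and toy data here are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def skin_histogram(hs_pixels, bins=(16, 16)):
    """Build a normalized 2D histogram over hue (0-179) and saturation (0-255).

    hs_pixels: (N, 2) int array of sampled skin-region (hue, saturation) pairs.
    Returns a (bins[0], bins[1]) array scaled so the most common bin is 1.0.
    """
    hist, _, _ = np.histogram2d(
        hs_pixels[:, 0], hs_pixels[:, 1],
        bins=bins, range=[[0, 180], [0, 256]])
    return hist / hist.max()

def backproject(hist, hs_image, bins=(16, 16)):
    """Look up each pixel's (hue, sat) bin to get a skin-likelihood map."""
    h_idx = hs_image[..., 0] * bins[0] // 180   # hue -> histogram row
    s_idx = hs_image[..., 1] * bins[1] // 256   # saturation -> histogram column
    return hist[h_idx, s_idx]

# Toy usage with hypothetical values: skin samples cluster around hue≈10, sat≈150.
skin_samples = np.array([[9, 150], [10, 146], [10, 155], [11, 152]] * 50)
hist = skin_histogram(skin_samples)

img = np.zeros((4, 4, 2), dtype=int)
img[:2, :2] = [10, 150]    # "skin" patch: same color family as the samples
img[2:, 2:] = [100, 30]    # background: a very different hue/saturation
likelihood = backproject(hist, img)
mask = likelihood > 0.5    # threshold into a binary hand mask
```

A real pipeline would backproject onto each HSV-converted video frame, threshold, and crop the masked hand region before feeding it to the CNN; OpenCV's `cv2.calcBackProject` performs the same lookup efficiently.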
About Sign-Language-Recognition
CodingSamrat/Sign-Language-Recognition
A machine-learning model that classifies the various hand gestures used for fingerspelling in sign language