Sign-To-Speech-Conversion and Sign-Language-Translator

Sign-To-Speech-Conversion
Maintenance 0/25 · Adoption 10/25 · Maturity 16/25 · Community 24/25
Stars: 137 · Forks: 101 · Downloads: — · Commits (30d): 0
Language: Jupyter Notebook · License: MIT
Stale 6m · No Package · No Dependents

Sign-Language-Translator
Maintenance 0/25 · Adoption 9/25 · Maturity 16/25 · Community 19/25
Stars: 90 · Forks: 21 · Downloads: — · Commits (30d): 0
Language: Python · License: MIT
Stale 6m · No Package · No Dependents

About Sign-To-Speech-Conversion

beingaryan/Sign-To-Speech-Conversion

Sign language detection system based on computer vision and deep learning, built with OpenCV and the TensorFlow/Keras frameworks.

This project helps people communicate using American Sign Language (ASL) by converting hand gestures into spoken words in real time. It takes live video of ASL signs as input and outputs audible speech. The tool is designed for ASL users who want to communicate with hearing people, as well as for those who interact with ASL users.

ASL · communication accessibility · speech generation · video interpretation · inclusive communication
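A real-time pipeline like the one described above typically classifies each video frame independently, so raw per-frame predictions are noisy and must be stabilized before a word is spoken aloud. The following is a minimal, stdlib-only sketch of one common stabilization technique — a sliding-window majority vote; the label set and thresholds are hypothetical, not taken from the repository.

```python
from collections import Counter, deque

def smooth_predictions(frame_preds, window=5, min_agreement=0.6):
    """Majority-vote over a sliding window of per-frame labels.

    Emits the winning label for each frame once enough of the recent
    window agrees on it, or None while the window is still ambiguous.
    Only stable (non-None) labels would be handed to a TTS engine.
    """
    history = deque(maxlen=window)  # keeps only the last `window` frames
    stable = []
    for pred in frame_preds:
        history.append(pred)
        label, count = Counter(history).most_common(1)[0]
        stable.append(label if count / len(history) >= min_agreement else None)
    return stable

# Noisy per-frame outputs from a hypothetical ASL classifier:
frames = ["yes", "hello", "yes", "hello", "hello"]
print(smooth_predictions(frames))
```

Ambiguous stretches yield None, so a single misclassified frame never triggers speech; raising `window` or `min_agreement` trades latency for fewer false utterances.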

About Sign-Language-Translator

dgovor/Sign-Language-Translator

A neural network that can translate any sign language into text.

This project helps individuals build a customized translation tool that turns sign language gestures into written text. You provide video recordings of your specific hand signs, the system learns to recognize them, and it then outputs real-time predictions of signed sentences, complete with grammar correction. It is designed for anyone who needs to translate a particular sign language into text.

sign-language-interpretation assistive-communication human-computer-interaction custom-gesture-recognition
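Turning a stream of per-sign predictions into a readable sentence involves at least two steps: collapsing the run of repeated labels produced while one sign is held, and applying light surface formatting. The repository's own grammar-correction step is not shown here; this is a generic, stdlib-only sketch with hypothetical sign labels.

```python
from itertools import groupby

def assemble_sentence(predictions):
    """Collapse consecutive duplicate sign predictions into words,
    then format them as a sentence (capitalize the first word,
    append a period)."""
    # groupby merges runs of identical labels: ["i","i","need"] -> ["i","need"]
    words = [word for word, _ in groupby(predictions)]
    if not words:
        return ""
    sentence = " ".join(words)
    return sentence[0].upper() + sentence[1:] + "."

# A sign held for several frames repeats in the prediction stream:
preds = ["i", "i", "i", "need", "need", "help", "help", "help"]
print(assemble_sentence(preds))
```

Deduplicating only *consecutive* repeats (rather than all repeats) matters: a sentence such as "very very good" would otherwise lose a legitimately repeated sign.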

Scores updated daily from GitHub, PyPI, and npm data. How scores work