Multimodal-Emotion-Recognition and MULTIMODAL-EMOTION-RECOGNITION

These two repositories are competing implementations of multimodal emotion recognition: maelfabien/Multimodal-Emotion-Recognition provides a web-application interface, while ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION focuses on dataset-based model training. They are alternative approaches to the same problem rather than tools designed to work together.

maelfabien/Multimodal-Emotion-Recognition
Maintenance: 0/25 | Adoption: 10/25 | Maturity: 16/25 | Community: 25/25
Stars: 1,072 | Forks: 318 | Commits (30d): 0 | Downloads:
Language: Jupyter Notebook | License: Apache-2.0
Stale 6m | No Package | No Dependents

ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION
Maintenance: 0/25 | Adoption: 9/25 | Maturity: 16/25 | Community: 20/25
Stars: 110 | Forks: 26 | Commits (30d): 0 | Downloads:
Language: Jupyter Notebook | License: GPL-3.0
Stale 6m | No Package | No Dependents

About Multimodal-Emotion-Recognition

maelfabien/Multimodal-Emotion-Recognition

A real time Multimodal Emotion Recognition web app for text, sound and video inputs

Combines separate deep learning models for facial expressions (FER2013), speech prosody (RAVDESS), and text sentiment (Stream-of-consciousness dataset) into an ensemble classifier that fuses predictions across modalities. Deployed as a Flask web application enabling real-time inference on job interview recordings, with pre-trained weights and processed datasets publicly available for reproducibility.
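The fusion step described above can be sketched as a simple late-fusion ensemble: each modality's model emits a class-probability vector, and the vectors are combined by weighted averaging. This is a minimal illustration, not the repository's actual code; the function name, the uniform default weights, and the three-class example values are all assumptions.

```python
import numpy as np

def late_fusion(probs_by_modality, weights=None):
    """Fuse per-modality class-probability vectors by weighted averaging.

    probs_by_modality: dict mapping modality name -> class-probability array.
    weights: optional dict of per-modality weights (defaults to uniform).
    """
    names = list(probs_by_modality)
    if weights is None:
        weights = {name: 1.0 for name in names}  # assumed uniform weighting
    total = sum(weights[n] for n in names)
    stacked = np.stack(
        [np.asarray(probs_by_modality[n], dtype=float) * (weights[n] / total)
         for n in names]
    )
    fused = stacked.sum(axis=0)
    return fused / fused.sum()  # renormalize to a valid distribution

# Hypothetical per-modality outputs over three emotion classes
fused = late_fusion({
    "text":  [0.6, 0.3, 0.1],
    "audio": [0.5, 0.4, 0.1],
    "video": [0.2, 0.5, 0.3],
})
```

With uniform weights this reduces to an element-wise mean of the three probability vectors; per-modality weights let a stronger model (e.g. the facial-expression classifier) dominate the ensemble.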

About MULTIMODAL-EMOTION-RECOGNITION

ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION

Human emotion understanding using a multimodal dataset.

Scores updated daily from GitHub, PyPI, and npm data.