Multimodal-Emotion-Recognition and MULTIMODAL-EMOTION-RECOGNITION
These are competing implementations of multimodal emotion recognition rather than tools designed to work together: maelfabien/Multimodal-Emotion-Recognition provides a web application interface, while ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION focuses on dataset-based model training. They are alternative approaches to the same problem.
About Multimodal-Emotion-Recognition
maelfabien/Multimodal-Emotion-Recognition
A real-time multimodal emotion recognition web app for text, sound, and video inputs
Combines separate deep learning models for facial expressions (FER2013), speech prosody (RAVDESS), and text sentiment (Stream-of-consciousness dataset) into an ensemble classifier that fuses predictions across modalities. Deployed as a Flask web application enabling real-time inference on job interview recordings, with pre-trained weights and processed datasets publicly available for reproducibility.
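The ensemble step described above can be illustrated with a minimal late-fusion sketch: each modality model outputs a probability distribution over the same emotion labels, and the ensemble combines them into one prediction. The label set, function names, and equal weights here are assumptions for illustration, not taken from the repository.

```python
# Hypothetical late-fusion sketch: average per-modality probability
# vectors over a shared emotion label set. Labels and weights are
# assumptions, not the repository's actual configuration.
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]

def fuse_predictions(face_probs, speech_probs, text_probs,
                     weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted average of the three modality probability vectors."""
    stacked = np.stack([face_probs, speech_probs, text_probs])
    fused = np.average(stacked, axis=0, weights=weights)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: video strongly suggests "happy"; audio and text are less sure.
face = np.array([0.05, 0.70, 0.15, 0.05, 0.05])
speech = np.array([0.10, 0.40, 0.30, 0.10, 0.10])
text = np.array([0.10, 0.30, 0.40, 0.10, 0.10])
label, fused = fuse_predictions(face, speech, text)
```

Averaging probabilities (soft voting) rather than taking a majority over hard labels lets a confident modality outweigh two uncertain ones, which is the usual motivation for this style of fusion.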
About MULTIMODAL-EMOTION-RECOGNITION
ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION
Human emotion understanding using a multimodal dataset.