soujanyaporia/multimodal-sentiment-analysis
Attention-based multimodal fusion for sentiment analysis
Implements hierarchical LSTM networks with multi-level attention mechanisms for fusion of text, audio, and visual modalities across utterance and video levels. Evaluates on MOSI, MOSEI, and IEMOCAP datasets with configurable unimodal pre-training and concatenation-based or attention-based fusion strategies. Supports variable classification tasks (2/3/6-way sentiment) and speaker-independent train/test splits on padded utterance sequences.
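The attention-based fusion strategy mentioned above can be sketched in a few lines: each modality's utterance-level embedding is scored against a shared context vector, the scores are softmax-normalized, and the fused representation is the attention-weighted sum of the modality embeddings. The NumPy sketch below illustrates that general idea only, not the repository's actual implementation; the projection matrix `W`, context vector, and all dimensions are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(text, audio, visual, W, context):
    """Fuse three modality embeddings with a simple soft-attention score.

    text/audio/visual: (d,) utterance-level embeddings (illustrative)
    W: (d, d) shared projection; context: (d,) context vector --
    both hypothetical stand-ins for trained parameters.
    """
    M = np.stack([text, audio, visual])   # (3, d) modality matrix
    scores = np.tanh(M @ W) @ context     # one scalar score per modality
    alpha = softmax(scores)               # attention weights, sum to 1
    return alpha @ M, alpha               # (d,) fused vector, (3,) weights

rng = np.random.default_rng(0)
d = 8
fused, alpha = attention_fusion(
    rng.normal(size=d), rng.normal(size=d), rng.normal(size=d),
    rng.normal(size=(d, d)), rng.normal(size=d))
print(fused.shape, alpha)
```

The concatenation-based alternative the description mentions would simply replace the weighted sum with `np.concatenate([text, audio, visual])`, trading a fixed-size fused vector for a larger one with no learned weighting.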
366 stars. No commits in the last 6 months.
Stars
366
Forks
74
Language
Python
License
MIT
Last pushed
Apr 08, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/soujanyaporia/multimodal-sentiment-analysis"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
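The same request can be made from Python with the standard library. This is a minimal sketch: the URL matches the curl example above, but the header name used to pass an API key (`X-API-Key`) is an assumption, not documented behavior of this service.

```python
import json
import urllib.request
from typing import Optional

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch repository quality data as a dict (JSON response assumed)."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Hypothetical header name -- check the service docs for the real one.
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(quality_url("soujanyaporia", "multimodal-sentiment-analysis"))
```

Without a key the anonymous limit of 100 requests/day applies; passing a key raises it to 1,000/day.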
Higher-rated alternatives
Cyberbolt/Cemotion
A Chinese NLP library based on BERT for sentiment analysis and general-purpose Chinese word...
firojalam/multimodal_social_media
multimodal social media content (text, image) classification
ahmedbesbes/multi-label-sentiment-classifier
How to build a multi-label sentiment classifier with Tez and PyTorch
juliusberner/emotion_transformer
Contextual Emotion Detection in Text (DoubleDistilBert Model)
faezesarlakifar/text-emotion-recognition
Persian text emotion recognition by fine tuning the XLM-RoBERTa Model + Bidirectional GRU layer.