soujanyaporia/multimodal-sentiment-analysis

Attention-based multimodal fusion for sentiment analysis

Score: 49 / 100 (Emerging)

Implements hierarchical LSTM networks with multi-level attention mechanisms to fuse text, audio, and visual modalities at the utterance and video levels. Evaluates on the MOSI, MOSEI, and IEMOCAP datasets with configurable unimodal pre-training and either concatenation-based or attention-based fusion strategies. Supports configurable classification tasks (2-, 3-, or 6-way sentiment) and speaker-independent train/test splits on padded utterance sequences.
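The attention-based fusion strategy mentioned above can be sketched as a softmax-weighted combination of per-modality feature vectors. This is a minimal illustrative sketch in NumPy; the function name, the learned score vector `w`, and the overall shape of the computation are assumptions for exposition, not the repository's actual code:

```python
import numpy as np

def attention_fusion(modalities, w):
    """Fuse per-modality feature vectors via softmax attention.

    modalities: list of 1-D arrays, one per modality (e.g. text,
                audio, visual), each of dimension d.
    w:          a (hypothetical) learned scoring vector of dimension d.

    Returns the attention-weighted sum of the modality vectors.
    """
    M = np.stack(modalities)            # (num_modalities, d)
    scores = M @ w                      # one relevance score per modality
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                # softmax attention weights
    return alpha @ M                    # weighted sum, shape (d,)
```

With a zero scoring vector the weights are uniform and the fusion reduces to a plain average of the modalities, which makes the behavior easy to check; concatenation-based fusion would instead stack the vectors end to end and let a downstream classifier weight them.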

366 stars. No commits in the last 6 months.

Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 366
Forks: 74
Language: Python
License: MIT
Last pushed: Apr 08, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/soujanyaporia/multimodal-sentiment-analysis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.