hmohebbi/SentimentAnalysis

(BOW, TF-IDF, Word2Vec, BERT) Word Embeddings + (SVM, Naive Bayes, Decision Tree, Random Forest) Base Classifiers + Pre-trained BERT on Tensorflow Hub + 1-D CNN and Bi-Directional LSTM on IMDB Movie Reviews Dataset

Score: 36 / 100 (Emerging)

Implements a comparative benchmark across shallow and deep learning approaches, aggregating sentence-level BERT embeddings via mean pooling for traditional classifiers while using TensorFlow Hub's pre-trained BERT directly with bidirectional LSTMs for end-to-end fine-tuning. Evaluates trade-offs between classical ML pipelines (SVM with BERT embeddings achieving 90.35% accuracy) and neural architectures (Bi-LSTM with BERT reaching 91.34%), providing empirical performance metrics across embedding strategies and classifier combinations on IMDB reviews.
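The mean-pooling step described above — collapsing BERT's token-level embeddings into a single fixed-length sentence vector for the classical classifiers — can be sketched as below. The token embeddings here are random stand-ins for real BERT outputs; the function name, shapes, and mask handling are illustrative assumptions, not code from the repository:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token-level embeddings into one sentence vector,
    ignoring padded positions flagged by the attention mask."""
    mask = attention_mask[..., None].astype(float)    # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)    # (batch, hidden)
    counts = mask.sum(axis=1).clip(min=1e-9)          # avoid divide-by-zero on empty rows
    return summed / counts

# Stand-in for BERT output: 2 reviews, 4 token positions, hidden size 8
tokens = np.random.rand(2, 4, 8)
mask = np.array([[1, 1, 1, 0],   # first review: 3 real tokens, 1 pad
                 [1, 1, 0, 0]])  # second review: 2 real tokens, 2 pads
sentence_vecs = mean_pool(tokens, mask)
print(sentence_vecs.shape)  # (2, 8): one fixed-length vector per review
```

The resulting fixed-length vectors can then be fed to any traditional classifier, e.g. scikit-learn's `SVC`, which is how an SVM can consume contextual BERT features without fine-tuning the transformer itself.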

No commits in the last 6 months.

No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 19 / 25


Stars: 74
Forks: 17
Language: Jupyter Notebook
License: none
Last pushed: Nov 30, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/hmohebbi/SentimentAnalysis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.