kaushalshetty/Structured-Self-Attention

A Structured Self-attentive Sentence Embedding

Score: 50 / 100 (Established)

Implements multi-hop self-attention with Frobenius norm regularization to generate sentence embeddings for classification tasks, supporting both binary and multiclass problems on IMDB and Reuters datasets. The architecture enables visualization of attention weights across sentence tokens via interactive heatmaps, with configurable parameters for attention hops, gradient clipping, and optional GloVe word embeddings. Achieves 90.2% accuracy on test sets while providing interpretability through attention weight analysis.
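The mechanism described above follows the "A Structured Self-attentive Sentence Embedding" formulation: attention A = softmax(W_s2 tanh(W_s1 H^T)) over the encoder states, plus a Frobenius-norm penalty ||AA^T - I||_F^2 that pushes the hops to attend to different tokens. The snippet below is a minimal PyTorch-style sketch of that idea, not the repository's exact modules; class, parameter, and default values (d_a, hops) are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredSelfAttention(nn.Module):
    """Multi-hop self-attention over encoder hidden states (illustrative sketch)."""

    def __init__(self, hidden_dim, d_a=350, hops=30):
        super().__init__()
        self.w_s1 = nn.Linear(hidden_dim, d_a, bias=False)  # W_s1
        self.w_s2 = nn.Linear(d_a, hops, bias=False)         # W_s2

    def forward(self, h):
        # h: (batch, seq_len, hidden_dim) -- e.g. BiLSTM outputs
        a = F.softmax(self.w_s2(torch.tanh(self.w_s1(h))), dim=1)  # softmax over tokens
        a = a.transpose(1, 2)        # (batch, hops, seq_len) attention matrix A
        m = a @ h                    # (batch, hops, hidden_dim) sentence embedding M
        return m, a

def frobenius_penalty(a):
    # ||A A^T - I||_F^2: encourages each attention hop to focus on different tokens
    aat = a @ a.transpose(1, 2)
    identity = torch.eye(a.size(1), device=a.device).unsqueeze(0)
    return ((aat - identity) ** 2).sum(dim=(1, 2)).mean()

The returned attention matrix A is what the repository's heatmap visualizations plot per token; the penalty is added to the classification loss with a small weighting coefficient.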

494 stars. No commits in the last 6 months.

Status flags: Stale (6 months), No package, No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 24 / 25


Stars: 494
Forks: 103
Language: Python
License: MIT
Last pushed: Sep 22, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/kaushalshetty/Structured-Self-Attention"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
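For scripted access, here is a minimal Python sketch that calls the endpoint shown in the curl command above. It assumes the endpoint returns a JSON body; the response's field names are not documented here, so the example simply prints whatever comes back.

import json
import urllib.request

# Quality-score endpoint from the curl example; no API key needed for up to 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/embeddings/"
       "kaushalshetty/Structured-Self-Attention")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumed JSON response

print(json.dumps(data, indent=2))  # inspect the returned fields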