kaushalshetty/Structured-Self-Attention
A Structured Self-attentive Sentence Embedding
Implements multi-hop self-attention with Frobenius norm regularization to generate sentence embeddings for classification, supporting binary and multiclass tasks on the IMDB and Reuters datasets. The architecture exposes attention weights over sentence tokens via interactive heatmaps, with configurable parameters for the number of attention hops, gradient clipping, and optional pretrained GloVe word embeddings. The repository reports 90.2% test accuracy while providing interpretability through attention weight analysis.
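The core idea from the paper this repo implements is a matrix of attention distributions, A = softmax(W2 · tanh(W1 · Hᵀ)), where each of the r rows ("hops") attends over the tokens, plus a penalty ‖AAᵀ − I‖²_F that pushes the hops to focus on different parts of the sentence. A minimal NumPy sketch of that computation (all names, shapes, and the random inputs here are illustrative, not the repo's actual code):

```python
import numpy as np

def structured_self_attention(H, W1, W2):
    """H: (n, d) token hidden states; returns attention A (r, n)
    and the (r, d) sentence embedding matrix M = A @ H."""
    scores = W2 @ np.tanh(W1 @ H.T)                    # (r, n) raw scores
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = e / e.sum(axis=1, keepdims=True)               # row-wise softmax
    M = A @ H                                          # multi-hop embedding
    return A, M

def frobenius_penalty(A):
    """Regularizer ||A A^T - I||_F^2 encouraging diverse hops."""
    r = A.shape[0]
    diff = A @ A.T - np.eye(r)
    return float(np.sum(diff ** 2))

# Toy dimensions: 6 tokens, hidden size 8, attention dim 4, 2 hops.
rng = np.random.default_rng(0)
n, d, da, r = 6, 8, 4, 2
H = rng.standard_normal((n, d))
W1 = rng.standard_normal((da, d))
W2 = rng.standard_normal((r, da))

A, M = structured_self_attention(H, W1, W2)
penalty = frobenius_penalty(A)
```

In training, `penalty` would be scaled by a coefficient and added to the classification loss; the repo exposes the hop count and related knobs through its config.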
494 stars. No commits in the last 6 months.
Stars: 494
Forks: 103
Language: Python
License: MIT
Category:
Last pushed: Sep 22, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/kaushalshetty/Structured-Self-Attention"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.