RatnaKaturi/Analyzing-Attention-Head-Specialization-in-Transformer-Language-Models

Performed head-level interpretability analysis on Transformer language models using attention-head masking experiments. Evaluated each attention head's contribution using accuracy- and logit-based metrics (91% baseline accuracy).
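As a rough illustration of the kind of masking experiment the description refers to, below is a minimal sketch using the `head_mask` argument of Hugging Face `transformers`. The choice of model (`gpt2`), the specific head ablated, and the logit-drop metric are assumptions for illustration, not details taken from this repository.

```python
# Minimal head-masking sketch, assuming a GPT-2 style model from
# Hugging Face transformers. Not the repository's actual code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")

# head_mask has shape (num_layers, num_heads); 1.0 keeps a head,
# 0.0 zeroes out that head's attention weights (ablation).
head_mask = torch.ones(model.config.n_layer, model.config.n_head)
head_mask[0, 3] = 0.0  # ablate head 3 in layer 0 (illustrative choice)

with torch.no_grad():
    baseline = model(**inputs)
    masked = model(**inputs, head_mask=head_mask)

# Logit-based contribution metric: how much does the masked head's
# removal lower the logit of the model's top next-token prediction?
baseline_logits = baseline.logits[0, -1]
masked_logits = masked.logits[0, -1]
top_token = baseline_logits.argmax()
print("logit drop for top token:",
      (baseline_logits[top_token] - masked_logits[top_token]).item())
```

Repeating this over every (layer, head) pair and tracking the change in task accuracy or logits is one standard way to score head-level specialization.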

Score: 12 / 100 (Experimental)
Badges: No License · No Package · No Dependents

Maintenance: 10 / 25
Adoption: 1 / 25
Maturity: 1 / 25
Community: 0 / 25


Stars: 1
Forks:
Language: Jupyter Notebook
License: None
Last pushed: Feb 21, 2026
Commits (30d): 0

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/RatnaKaturi/Analyzing-Attention-Head-Specialization-in-Transformer-Language-Models"

The endpoint is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
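For script-based access, the same endpoint can be queried from Python. This sketch assumes only that the endpoint returns JSON; the response schema is not documented here, so no field names are assumed.

```python
# Fetch the quality data for this repository; inspect the payload to
# discover the available fields (schema is an assumption, not documented).
import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/nlp/"
    "RatnaKaturi/Analyzing-Attention-Head-Specialization-in-Transformer-Language-Models"
)
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())
```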