IAAR-Shanghai/Awesome-Attention-Heads

An awesome repository and a comprehensive survey on the interpretability of LLM attention heads.

Quality score: 28 / 100 (Experimental)

Provides a curated research platform centered on a peer-reviewed survey paper, accepted by *Patterns* (Cell Press), that organizes attention-head studies using a four-stage cognitive framework (Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation) to systematize mechanistic interpretability research. Includes a structured paper taxonomy with experimental-methodology classifications and causal-analysis techniques (path patching, attribution heads, mediation analysis) for isolating functional circuits within transformer attention mechanisms across diverse LLM tasks.

400 stars. No commits in the last 6 months.

Flags: No License · Stale (6 months) · No Package · No Dependents

Score breakdown:
- Maintenance: 0 / 25
- Adoption: 10 / 25
- Maturity: 8 / 25
- Community: 10 / 25

Repository stats:
- Stars: 400
- Forks: 12
- Language: TeX
- License: none
- Last pushed: Mar 02, 2025
- Commits (last 30 days): 0

Get this data via the API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/IAAR-Shanghai/Awesome-Attention-Heads"
```

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.