adityakamat24/triton-fast-mha
A high-performance Triton kernel implementation of multi-head attention, focused on minimizing memory overhead and maximizing throughput in large-scale transformer layers. Includes clean tensor layouts, head-grouping optimizations, and ready-to-benchmark code you can plug into custom models.
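For context, the sketch below is a plain PyTorch reference (not this repository's API) of the multi-head attention computation such a fused kernel replaces. It assumes the usual (batch, heads, seq_len, head_dim) layout and compares against torch.nn.functional.scaled_dot_product_attention as a sanity check.

import math
import torch
import torch.nn.functional as F

def reference_mha(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim). Plain PyTorch reference;
    # it materializes the full (seq_len x seq_len) attention matrix that a
    # fused kernel avoids writing to global memory.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    return torch.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(2, 8, 128, 64) for _ in range(3))
out = reference_mha(q, k, v)

# Compare against PyTorch's built-in fused attention.
expected = F.scaled_dot_product_attention(q, k, v)
print("max abs diff vs scaled_dot_product_attention:",
      (out - expected).abs().max().item())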
No commits in the last 6 months.
Stars: —
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Aug 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/adityakamat24/triton-fast-mha"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
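The same request in Python using only the standard library; a minimal sketch that hits the documented endpoint. The response schema isn't documented here, so the example just pretty-prints whatever JSON comes back.

import json
import urllib.request

# Same endpoint as the curl example above; unauthenticated calls are
# limited to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/adityakamat24/triton-fast-mha"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))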
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/locoformer
LocoFormer - Generalist Locomotion via Long-Context Adaptation