robflynnyh/hydra-linear-attention

Implementation of: Hydra Attention: Efficient Attention with Many Heads (https://arxiv.org/abs/2209.07484)

Score: 13 / 100 (Experimental)

This library is aimed at researchers and practitioners working with machine-learning models that use attention mechanisms. It implements Hydra attention, a variant of multi-head attention that uses as many heads as there are feature dimensions, making the operation linear in both tokens and features rather than quadratic in sequence length. Use it if you are building or optimizing deep-learning models that currently use standard attention layers and need better speed.
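For a sense of the mechanism, here is a minimal PyTorch sketch of the Hydra attention operation as described in the linked paper: L2-normalize queries and keys (the cosine-similarity kernel), aggregate keys and values globally, then gate each query with the aggregate. This is a sketch of the paper's idea, not this repository's API; the function name and tensor shapes are illustrative.

import torch
import torch.nn.functional as F

def hydra_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """q, k, v: (batch, tokens, dim). Cost is O(tokens * dim)."""
    q = F.normalize(q, dim=-1)              # phi(q): L2-normalize along features
    k = F.normalize(k, dim=-1)              # phi(k)
    kv = (k * v).sum(dim=1, keepdim=True)   # global aggregate: sum_t phi(k_t) * v_t
    return q * kv                           # y_t = phi(q_t) * kv, broadcast over tokens

# Example: 2 sequences of 128 tokens with 64 features
x = torch.randn(2, 128, 64)
out = hydra_attention(x, x, x)  # shape (2, 128, 64)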

No commits in the last 6 months.

Use this if you are a deep learning engineer or researcher looking to improve the computational efficiency of attention mechanisms in your neural networks.

Not ideal if you are not working directly with deep learning model architecture or do not have performance bottlenecks related to attention layers.

deep-learning neural-networks model-optimization computational-efficiency machine-learning-research
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 0 / 25

The four category scores sum to the overall score of 13 / 100.

Stars: 14
Forks: —
Language: Python
License: none
Last pushed: Jan 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/robflynnyh/hydra-linear-attention"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
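For scripted access, the same endpoint can be queried from Python. A minimal sketch, assuming the endpoint returns JSON (the response format is not documented on this page):

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/robflynnyh/hydra-linear-attention")
resp = requests.get(url, timeout=10)
resp.raise_for_status()        # raises on HTTP errors (e.g. rate limiting)
print(resp.json())             # assumption: body is JSON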