softmax1/Flash-Attention-Softmax-N

CUDA and Triton implementations of Flash Attention with SoftmaxN.

Score: 42 / 100 (Emerging)

No commits in the last 6 months. Available on PyPI.

Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 25 / 25
Community: 8 / 25


Stars: 73
Forks: 5
Language: Python
License: GPL-3.0
Last pushed: May 26, 2024
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/softmax1/Flash-Attention-Softmax-N"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
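
For scripted access, here is a minimal Python sketch using only the standard library. The endpoint is the one shown in the curl example above; the structure of the JSON response is not documented on this page, so the example simply prints whatever top-level fields the API returns.

import json
import urllib.request

# Quality endpoint, taken from the curl example above.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/softmax1/Flash-Attention-Softmax-N"
)

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# The response schema is an assumption: print whatever
# top-level fields come back (e.g. score components).
for key, value in data.items():
    print(f"{key}: {value}")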