tomaarsen/attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
736 stars and 53 monthly downloads. No commits in the last 6 months. Available on PyPI.
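The library is designed as a drop-in replacement for the `transformers` Auto classes. Below is a minimal usage sketch based on the project's README-style API; the keyword arguments (`attention_sink_size`, `attention_sink_window_size`) and their values are illustrative and may differ across versions, so verify against the repository.

```python
# Minimal sketch: attention_sinks mirrors the transformers Auto* classes,
# so loading a model with a sink-augmented KV cache is a one-line swap.
from transformers import AutoTokenizer
from attention_sinks import AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-hf"  # any supported causal LM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    attention_sink_size=4,            # tokens pinned at the start of the KV cache
    attention_sink_window_size=1020,  # sliding window of recent tokens
)

# Generate as usual; memory stays constant because the KV cache keeps
# only the sink tokens plus the most recent window.
inputs = tokenizer("The attention mechanism", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```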
Stars: 736
Forks: 45
Language: Python
License: Apache-2.0
Category: transformers
Last pushed: Apr 10, 2024
Monthly downloads: 53
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tomaarsen/attention_sinks"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
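For programmatic use, here is a hedged sketch of calling the same endpoint from Python. Only the URL comes from the snippet above; the JSON field names (`stars`, `monthly_downloads`) and the `X-API-Key` header are assumptions, not a documented schema.

```python
# Hypothetical sketch of querying the quality endpoint. Field names and
# the API-key header are assumptions; only the URL is from the docs above.
import requests

BASE = "https://pt-edge.onrender.com/api/v1/quality"
resp = requests.get(
    f"{BASE}/transformers/tomaarsen/attention_sinks",
    # headers={"X-API-Key": "..."},  # assumed header name for the keyed tier
    timeout=10,
)
resp.raise_for_status()
data = resp.json()
# Field names below are guesses mirroring the stats shown on this page.
print(data.get("stars"), data.get("monthly_downloads"))
```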
Related projects
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/locoformer
LocoFormer - Generalist Locomotion via Long-Context Adaptation