lucidrains/x-transformers

A concise but complete full-attention transformer with a set of promising experimental features from various papers

Quality score: 79 / 100 (Verified)
Supports encoder-decoder, decoder-only (GPT-style), and encoder-only (BERT-style) architectures, along with vision transformers for image classification and multimodal tasks such as image captioning and vision-language modeling. Implements experimental attention features including Flash Attention for memory-efficient training, persistent memory augmentation, and memory tokens, and offers fine-grained control over dropout strategies such as stochastic depth and layer-wise dropout. Built as a PyTorch library with modular components (`TransformerWrapper`, `Encoder`, `Decoder`, `ViTransformerWrapper`) that compose flexibly for tasks ranging from language modeling to vision-language understanding.
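As a sketch of that modular composition, the snippet below builds a decoder-only (GPT-style) model following the repository's documented pattern; the hyperparameters are illustrative, and the `attn_flash` and `num_memory_tokens` keyword arguments correspond to the Flash Attention and memory-token features mentioned above.

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# Decoder-only (GPT-style) language model composed from the library's
# modular pieces: TransformerWrapper handles token embedding/unembedding,
# Decoder supplies the causal attention stack.
model = TransformerWrapper(
    num_tokens = 20000,          # vocabulary size (illustrative)
    max_seq_len = 1024,
    num_memory_tokens = 20,      # memory-token augmentation
    attn_layers = Decoder(
        dim = 512,
        depth = 6,
        heads = 8,
        attn_flash = True        # memory-efficient Flash Attention
    )
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)           # shape: (1, 1024, 20000)
```

Swapping `Decoder` for `Encoder` (or pairing both) yields the BERT-style and encoder-decoder configurations described above.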

5,808 stars. Used by 6 other packages. Actively maintained with 9 commits in the last 30 days. Available on PyPI.

Maintenance: 20 / 25
Adoption: 15 / 25
Maturity: 25 / 25
Community: 19 / 25


Stars: 5,808
Forks: 507
Language: Python
License: MIT
Last pushed: Mar 27, 2026
Commits (30d): 9
Dependencies: 8
Reverse dependents: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lucidrains/x-transformers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
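For programmatic access, a minimal Python sketch is below; it assumes the endpoint returns JSON and that unauthenticated requests are accepted within the free daily limit (the response schema is not documented here, so the result is printed as-is).

```python
import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/lucidrains/x-transformers"
)

# Unauthenticated request; stays within the 100 requests/day free tier.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()

data = resp.json()  # assumed JSON object with the score and stats above
print(data)
```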