x-transformers and Fast-Transformer

These are ecosystem siblings: x-transformers provides a general-purpose transformer implementation framework, while Fast-Transformer offers a specialized alternative attention mechanism (additive attention) that could be integrated into x-transformers' modular architecture or benchmarked against it.

| Metric | x-transformers | Fast-Transformer |
| --- | --- | --- |
| Overall score | 79 (Verified) | 51 (Established) |
| Maintenance | 20/25 | 0/25 |
| Adoption | 15/25 | 10/25 |
| Maturity | 25/25 | 25/25 |
| Community | 19/25 | 16/25 |
| Stars | 5,808 | 148 |
| Forks | 507 | 22 |
| Downloads | | |
| Commits (30d) | 9 | 0 |
| Language | Python | Jupyter Notebook |
| License | MIT | Apache-2.0 |
| Risk flags | None | Stale (6 months) |

Each subscore is out of 25, and the four subscores sum to the overall score.

About x-transformers

lucidrains/x-transformers

A concise but complete full-attention transformer with a set of promising experimental features from various papers

Supports encoder-decoder, decoder-only (GPT), and encoder-only (BERT) architectures alongside vision transformers for image classification and multimodal tasks like image captioning and vision-language modeling. Implements experimental features such as Flash Attention for memory-efficient training, persistent memory augmentation, and memory tokens, while offering fine-grained control over dropout strategies such as stochastic depth and layer-wise dropout. Built as a PyTorch library with modular components (`TransformerWrapper`, `Encoder`, `Decoder`, `ViTransformerWrapper`) enabling flexible composition for tasks ranging from language modeling to vision-language understanding.
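As a minimal sketch of that composition, the decoder-only (GPT-style) setup from the project's README wires a `Decoder` stack into a `TransformerWrapper`; the parameter values below are illustrative:

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# GPT-style decoder-only language model composed from the
# library's modular pieces: token embedding plus attention stack.
model = TransformerWrapper(
    num_tokens = 20000,      # vocabulary size
    max_seq_len = 1024,      # maximum sequence length
    attn_layers = Decoder(
        dim = 512,           # model width
        depth = 6,           # number of transformer blocks
        heads = 8            # attention heads per block
    )
)

tokens = torch.randint(0, 20000, (1, 1024))  # dummy token ids
logits = model(tokens)                       # shape: (1, 1024, 20000)
```

Swapping `Decoder` for `Encoder` yields a BERT-style model, and `ViTransformerWrapper` plays the analogous role for image inputs.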

About Fast-Transformer

Rishit-dagli/Fast-Transformer

An implementation of Additive Attention

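The underlying technique is additive attention in the style of the Fastformer paper: rather than forming an n × n attention matrix, each layer pools its queries and keys into single global vectors using learned scalar importance scores, reducing complexity to linear in sequence length. The repository itself targets TensorFlow; the following is a simplified single-head PyTorch sketch of the mechanism (class and layer names are illustrative, not the repository's API):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Single-head sketch of Fastformer-style additive attention.

    Queries and keys are pooled into single global vectors via
    learned scalar scores, so cost is O(n) in sequence length
    instead of the O(n^2) of full attention. Illustrative only;
    not the repository's actual implementation.
    """

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.q_score = nn.Linear(dim, 1)  # per-position query importance
        self.k_score = nn.Linear(dim, 1)  # per-position key importance
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, n, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # Pool queries into one global query vector, O(n).
        alpha = F.softmax(self.q_score(q).squeeze(-1), dim=-1)  # (b, n)
        global_q = torch.einsum('bn,bnd->bd', alpha, q)         # (b, d)

        # Mix the global query into every key elementwise, then pool.
        p = k * global_q.unsqueeze(1)                           # (b, n, d)
        beta = F.softmax(self.k_score(p).squeeze(-1), dim=-1)
        global_k = torch.einsum('bn,bnd->bd', beta, p)          # (b, d)

        # Modulate values by the global key and project, keeping the
        # query residual used in the Fastformer paper.
        u = v * global_k.unsqueeze(1)
        return self.to_out(u) + q
```

The trade-off is that the global pooling summarizes the whole sequence into fixed-size vectors, trading full pairwise interaction for linear cost, which is why it is positioned as an alternative to the full attention x-transformers implements.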
Scores updated daily from GitHub, PyPI, and npm data.