facebookresearch/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Built on PyTorch, fairseq implements diverse sequence modeling architectures—Transformers, CNNs, LSTMs, and non-autoregressive variants—with modular components for efficient distributed training via fully-sharded data parallelism. Beyond text generation, it extends to speech processing (wav2vec, speech-to-speech translation) and multimodal tasks (VideoCLIP), using Hydra for reproducible configuration management and integrating with xFormers for optimized attention mechanisms.
32,190 stars. No commits in the last 6 months.
Stars: 32,190
Forks: 6,676
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Sep 30, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/facebookresearch/fairseq"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
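For programmatic use, the curl call above can be wrapped in a small Python helper. This is a minimal sketch: the endpoint path comes from the example above, but the `X-Api-Key` header name used for authenticated access is an assumption, so check the API docs for the real scheme.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_request(category, owner, repo, api_key=None):
    """Build a request for a repo's quality data.

    The path layout (/{category}/{owner}/{repo}) mirrors the curl
    example; the "X-Api-Key" header name is a hypothetical choice
    for passing the optional free key.
    """
    url = f"{API_BASE}/{category}/{owner}/{repo}"
    headers = {"X-Api-Key": api_key} if api_key else {}
    return urllib.request.Request(url, headers=headers)


# Anonymous access (100 requests/day):
req = build_request("ml-frameworks", "facebookresearch", "fairseq")
# data = json.load(urllib.request.urlopen(req))  # uncomment to fetch live data
```

Keeping the request construction separate from the fetch makes the helper easy to test offline and to extend with retries or caching later.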
Related frameworks
facebookresearch/fairseq2
FAIR Sequence Modeling Toolkit 2
OpenNMT/OpenNMT-tf
Neural machine translation and sequence learning using TensorFlow
lhotse-speech/lhotse
Tools for handling multimodal data in machine learning projects.
awslabs/sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
google/sequence-layers
A neural network layer API and library for sequence modeling, designed for easy creation of...