fairseq2 and fairseq
fairseq2 is the modernized successor to the original fairseq, offering a redesigned architecture and improved performance while remaining conceptually compatible with its predecessor's sequence-to-sequence modeling framework.
About fairseq2
facebookresearch/fairseq2
FAIR Sequence Modeling Toolkit 2
About fairseq
facebookresearch/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Built on PyTorch, fairseq implements diverse sequence modeling architectures—Transformers, CNNs, LSTMs, and non-autoregressive variants—with modular components and support for efficient distributed training via fully sharded data parallelism (FSDP). Beyond text generation, it extends to speech processing (wav2vec, speech-to-speech translation) and multimodal tasks (VideoCLIP), using Hydra for reproducible configuration management and integrating with xFormers for optimized attention mechanisms.