optimum and optimum-transformers

optimum-transformers builds specialized NLP inference pipelines on top of Optimum's optimization framework, making them complements rather than competitors: optimum-transformers leverages Optimum's hardware optimization tools as a dependency to deliver pre-built use cases.

                 optimum            optimum-transformers
Score            90 (Verified)      48 (Emerging)
Maintenance      16/25              0/25
Adoption         25/25              13/25
Maturity         25/25              25/25
Community        24/25              10/25
Stars            3,325              126
Forks            624                8
Downloads        1,613,657          27
Commits (30d)    4                  0
Language         Python             Python
License          Apache-2.0         GPL-3.0
Risk flags       None               Stale 6m

About optimum

huggingface/optimum

🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools

Supports hardware-specific backends including ONNX Runtime, OpenVINO, TensorRT-LLM, AWS Neuron, and Intel Gaudi through modular installations, enabling optimized inference across diverse accelerators. Provides unified APIs for model export, quantization, and graph optimization while maintaining compatibility with PyTorch, enabling deployment from research to production without refactoring model code.
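As a minimal sketch of the modular installation and export workflow described above: each backend is pulled in as a pip extra, and `optimum-cli` handles model export. The model name below is an illustrative example; extra names and CLI flags should be checked against the Optimum documentation for your backend.

```shell
# Install Optimum with a specific backend via pip extras
# (one extra per accelerator; install only what you deploy on)
pip install "optimum[onnxruntime]"   # ONNX Runtime backend
pip install "optimum[openvino]"      # Intel OpenVINO backend

# Export a PyTorch Transformers checkpoint to ONNX with the bundled CLI.
# The model ID here is just an example checkpoint from the Hub.
optimum-cli export onnx \
  --model distilbert-base-uncased-finetuned-sst-2-english \
  onnx_output/
```

The exported model can then be loaded through Optimum's backend-specific model classes (e.g. the `ORTModel*` family for ONNX Runtime) with the same `from_pretrained` API as Transformers, which is what enables deployment without refactoring model code.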

About optimum-transformers

AlekseyKorshuk/optimum-transformers

Accelerated NLP pipelines for fast inference on CPU and GPU. Built with Transformers, Optimum and ONNX Runtime.

Scores updated daily from GitHub, PyPI, and npm data.