optimum and optimum-intel

Optimum Intel is a specialized backend/extension within the broader Optimum ecosystem that provides Intel-specific optimization implementations (such as OpenVINO and Intel Neural Compressor support) for the general-purpose Optimum library, making the two complements designed to be used together.

|                | optimum       | optimum-intel             |
|----------------|---------------|---------------------------|
| Score          | 90 (Verified) | 64 (Established)          |
| Maintenance    | 16/25         | 20/25                     |
| Adoption       | 25/25         | 10/25                     |
| Maturity       | 25/25         | 9/25                      |
| Community      | 24/25         | 25/25                     |
| Stars          | 3,325         | 548                       |
| Forks          | 624           | 205                       |
| Downloads      | 1,613,657     |                           |
| Commits (30d)  | 4             | 14                        |
| Language       | Python        | Jupyter Notebook          |
| License        | Apache-2.0    | Apache-2.0                |
| Risk flags     | None          | No Package, No Dependents |

About optimum

huggingface/optimum

🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools

Optimum supports hardware-specific backends, including ONNX Runtime, OpenVINO, TensorRT-LLM, AWS Neuron, and Intel Gaudi, through modular installations, enabling optimized inference across diverse accelerators. It provides unified APIs for model export, quantization, and graph optimization while maintaining compatibility with PyTorch, so models can move from research to production without refactoring model code.
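As a brief illustration of that unified API, here is a minimal sketch that exports a Transformers checkpoint to ONNX Runtime and runs it through the familiar pipeline interface. The checkpoint name is only an example, and the snippet assumes optimum was installed with the ONNX Runtime extra (e.g. `pip install optimum[onnxruntime]`).

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# export=True converts the PyTorch checkpoint to ONNX on the fly,
# then loads it with the ONNX Runtime backend.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum keeps the Transformers API while swapping the runtime."))
```

The point of the sketch is that only the model class changes; tokenization, pipelines, and the rest of the Transformers workflow stay the same.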

About optimum-intel

huggingface/optimum-intel

🤗 Optimum Intel: Accelerate inference with Intel optimization tools
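A comparable sketch on the Intel side, assuming the OpenVINO extra is installed (e.g. `pip install optimum[openvino]`): the same `from_pretrained` pattern, but with the OpenVINO model class that optimum-intel provides. The checkpoint name is again only illustrative.

```python
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# export=True converts the checkpoint to OpenVINO IR and loads it for
# inference on Intel hardware; the Transformers-style API is unchanged.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("optimum-intel plugs OpenVINO into the same Optimum workflow."))
```

This is what makes the two libraries complements: optimum defines the shared export and inference interfaces, while optimum-intel supplies the Intel-specific backends behind them.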

Scores updated daily from GitHub, PyPI, and npm data.