onnxruntime and onnx

ONNX Runtime is the execution engine that runs models serialized in the standard format defined by ONNX; the two are complementary and typically used together in a deployment pipeline.

Metric          onnxruntime       onnx
Score           100 (Verified)    98 (Verified)
Maintenance     25/25             23/25
Adoption        25/25             25/25
Maturity        25/25             25/25
Community       25/25             25/25
Stars           19,534            20,477
Forks           3,759             3,896
Downloads       76,261,123        16,425,577
Commits (30d)   160               42
Language        C++               Python
License         MIT               Apache-2.0
Risk flags      None              None

About onnxruntime

microsoft/onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Runs models exported from diverse ML frameworks (PyTorch, TensorFlow, scikit-learn, XGBoost, LightGBM) via the ONNX standard, applying graph optimizations and hardware acceleration (CPUs, GPUs, NPUs) to improve performance. Training acceleration targets PyTorch transformer models on multi-GPU setups with minimal code changes. It operates as a portable runtime layer that abstracts hardware and framework differences across Windows, Linux, and macOS.

About onnx

onnx/onnx

Open standard for machine learning interoperability

Defines an extensible computation graph IR with built-in operators and standard data types, enabling model serialization and inference across PyTorch, TensorFlow, scikit-learn, and other frameworks. Provides shape/type inference, graph optimization, and opset version conversion utilities for seamless model portability from research to production deployment.

Scores updated daily from GitHub, PyPI, and npm data.