onnxruntime and onnx
ONNX Runtime is the execution engine that runs models serialized in the standard format defined by ONNX; the two are complements, typically used together in a deployment pipeline.
About onnxruntime
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Runs models exported from diverse ML frameworks (PyTorch, TensorFlow, scikit-learn, XGBoost, LightGBM) via the ONNX standard, applying graph optimizations and hardware acceleration (CPUs, GPUs, NPUs) for optimal performance. Training acceleration targets PyTorch transformer models on multi-GPU setups with minimal code changes. Operates as a portable runtime layer that abstracts hardware and framework differences across Windows, Linux, and macOS.
About onnx
onnx/onnx
Open standard for machine learning interoperability
Defines an extensible computation graph IR with built-in operators and standard data types, enabling model serialization and inference across PyTorch, TensorFlow, scikit-learn, and other frameworks. Provides shape/type inference, graph optimization, and opset version conversion utilities for seamless model portability from research to production deployment.