microsoft/onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
Runs inference for models exported from diverse ML frameworks (PyTorch, TensorFlow, scikit-learn, XGBoost, LightGBM) via the ONNX standard, applying graph optimizations and hardware acceleration (CPUs, GPUs, NPUs) for optimal performance. Training acceleration targets PyTorch transformer models on multi-GPU setups with minimal code changes. It operates as a portable runtime layer that abstracts hardware and framework differences across Windows, Linux, and macOS.
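The inference path described above boils down to loading an exported `.onnx` model into an `InferenceSession` and feeding it named inputs. A minimal sketch of that pattern, assuming a local `model.onnx` file exists and the `onnxruntime` package is installed (`pip install onnxruntime`):

```python
# Minimal ONNX Runtime inference sketch. The model path and input shape
# below are placeholder assumptions, not part of the page above.
import numpy as np

def run_model(model_path, input_array):
    """Load an ONNX model and run one inference pass on CPU."""
    import onnxruntime as ort  # imported lazily so the sketch imports without ORT installed
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    # run() returns a list of output arrays; None requests all model outputs.
    return sess.run(None, {input_name: input_array})

if __name__ == "__main__":
    # Hypothetical image-classifier input: batch of 1, 3x224x224 float32.
    outputs = run_model("model.onnx", np.random.rand(1, 3, 224, 224).astype(np.float32))
    print([o.shape for o in outputs])
```

Swapping `providers` for `["CUDAExecutionProvider"]` (or another execution provider) is how the same script targets different hardware without touching the model.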
19,534 stars and 76,261,123 monthly downloads. Used by 150 other packages. Actively maintained with 160 commits in the last 30 days. Available on PyPI and npm.
Stars: 19,534
Forks: 3,759
Language: C++
License: MIT
Category:
Last pushed: Mar 13, 2026
Monthly downloads: 76,261,123
Commits (30d): 160
Dependencies: 6
Reverse dependents: 150
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/microsoft/onnxruntime"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
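The same endpoint can be queried without curl using only the Python standard library. A hedged sketch follows; the `X-Api-Key` header name is an assumption (the page does not document how a key is sent), so check the API docs before relying on it:

```python
# Fetch repo quality data from the pt-edge API using only the stdlib.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category, owner, repo):
    """Construct the quality-data URL for one repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """GET the endpoint and decode the JSON body."""
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        req.add_header("X-Api-Key", api_key)  # assumed header name, not confirmed
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_quality("ml-frameworks", "microsoft", "onnxruntime"))
```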
Related frameworks
onnx/onnx
Open standard for machine learning interoperability
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs.
PINTO0309/onnx2tf
Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC).
microsoft/onnxconverter-common
Common utilities for ONNX converters
NVIDIA/DALI
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data preprocessing.