onnxruntime and onnx-tensorrt

ONNX Runtime is a general-purpose inference engine that supports multiple backends, including TensorRT. ONNX-TensorRT is the TensorRT backend that lets ONNX Runtime hand models off to NVIDIA's optimized inference engine. The two projects are therefore complements that work together rather than alternatives.
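ONNX Runtime surfaces TensorRT through its execution-provider mechanism: each graph node is assigned to the highest-priority provider that supports its operator, and anything TensorRT cannot handle falls back to a lower-priority provider such as CPU. A conceptual sketch of that partitioning (pure Python, not onnxruntime's actual code; the operator coverage sets are illustrative):

```python
# Conceptual sketch (not onnxruntime's real implementation): assign each
# graph node to the first execution provider in the priority list that
# supports its operator type, so unsupported ops fall back down the list.
SUPPORTED = {  # illustrative coverage, not real EP capability lists
    "TensorrtExecutionProvider": {"Conv", "Relu", "MatMul"},
    "CPUExecutionProvider": {"Conv", "Relu", "MatMul", "NonMaxSuppression"},
}

def partition(graph_ops, provider_priority):
    """Map each op to the highest-priority provider that can run it."""
    assignment = {}
    for op in graph_ops:
        for provider in provider_priority:
            if op in SUPPORTED[provider]:
                assignment[op] = provider
                break
        else:
            raise ValueError(f"no provider supports {op}")
    return assignment

priority = ["TensorrtExecutionProvider", "CPUExecutionProvider"]
print(partition(["Conv", "Relu", "NonMaxSuppression"], priority))
```

In the real API the same priority ordering is expressed when creating a session, e.g. `ort.InferenceSession(path, providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"])`.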

                 onnxruntime         onnx-tensorrt
Score            100 (Verified)      63 (Established)
Maintenance      25/25               13/25
Adoption         25/25               10/25
Maturity         25/25               16/25
Community        25/25               24/25
Stars            19,534              3,194
Forks            3,759               547
Downloads        76,261,123          n/a
Commits (30d)    160                 1
Language         C++                 C++
License          MIT                 Apache-2.0
Risk flags       None                No package, no dependents

About onnxruntime

microsoft/onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Supports inference across diverse ML frameworks (PyTorch, TensorFlow, scikit-learn, XGBoost, LightGBM) through the ONNX standard, applying graph optimizations and hardware acceleration (CPUs, GPUs, NPUs) for optimal performance. Training acceleration targets PyTorch transformer models on multi-GPU setups with minimal code changes. Operates as a portable runtime layer abstracting hardware and framework differences across Windows, Linux, and macOS.
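One of the graph optimizations mentioned above can be sketched in miniature: constant folding, where a node whose inputs are all compile-time constants is evaluated once and removed from the runtime graph. The node/graph representation below is a toy illustration, not the ONNX IR:

```python
# Toy sketch of constant folding, one of the graph optimizations an
# engine like ONNX Runtime applies. Nodes are (name, op, inputs) tuples;
# this is an illustration, not the ONNX intermediate representation.
def constant_fold(nodes, constants):
    """Replace ops whose inputs are all known constants with their value."""
    remaining = []
    for name, op, inputs in nodes:
        if op == "Add" and all(i in constants for i in inputs):
            # Evaluate at optimization time; the node disappears.
            constants[name] = constants[inputs[0]] + constants[inputs[1]]
        else:
            remaining.append((name, op, inputs))
    return remaining, constants

nodes = [
    ("c", "Add", ("a", "b")),     # a and b are constants -> foldable
    ("y", "MatMul", ("x", "c")),  # x is a runtime input -> node is kept
]
remaining, consts = constant_fold(nodes, {"a": 2.0, "b": 3.0})
print(remaining, consts["c"])
```

After folding, only the `MatMul` node remains and the precomputed value of `c` is stored alongside the other constants.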

About onnx-tensorrt

onnx/onnx-tensorrt

ONNX-TensorRT: TensorRT backend for ONNX

Scores updated daily from GitHub, PyPI, and npm data.