onnx and onnx-tensorrt
ONNX-TensorRT is a backend that executes ONNX models on NVIDIA TensorRT. The two projects are complementary and typically used together: ONNX provides the interchange format, and ONNX-TensorRT maps it onto TensorRT for optimized inference on NVIDIA hardware.
Scores (out of 25)    onnx          onnx-tensorrt
Maintenance           23            13
Adoption              25            10
Maturity              25            16
Community             25            24

Stats                 onnx          onnx-tensorrt
Stars                 20,477        3,194
Forks                 3,896         547
Downloads             16,425,577    —
Commits (30d)         42            1
Language              Python        C++
License               Apache-2.0    Apache-2.0
Flags: No risk flags · No Package · No Dependents
About onnx
onnx/onnx
Open standard for machine learning interoperability
Defines an extensible computation graph IR with built-in operators and standard data types, enabling model serialization and inference across PyTorch, TensorFlow, scikit-learn, and other frameworks. Provides shape/type inference, graph optimization, and opset version conversion utilities for seamless model portability from research to production deployment.
About onnx-tensorrt
onnx/onnx-tensorrt
ONNX-TensorRT: TensorRT backend for ONNX
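Usage follows the ONNX backend convention: load a model with onnx, then hand it to the onnx_tensorrt backend. A minimal sketch, assuming a CUDA device and a TensorRT installation are available; the imports are deferred into the function because the backend cannot be loaded without NVIDIA hardware, and the model path and input array are placeholders.

```python
def run_with_tensorrt(model_path, input_array):
    """Run an ONNX model through the onnx-tensorrt backend.

    Requires an NVIDIA GPU, TensorRT, and the onnx_tensorrt package,
    so the imports are deferred until the function is actually called.
    """
    import onnx
    import onnx_tensorrt.backend as backend

    model = onnx.load(model_path)                    # deserialize the ONNX model
    engine = backend.prepare(model, device="CUDA:0")  # build a TensorRT engine
    return engine.run(input_array)[0]                 # execute and return output
```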
Scores updated daily from GitHub, PyPI, and npm data.