tract and tensorrt-infer
These tools are competitors: both `tract` and `tensorrt-infer` provide Rust interfaces for ONNX inference, but they rely on different underlying engines (`tract` ships its own self-contained inference engine, while `tensorrt-infer` wraps NVIDIA TensorRT).
About tract
sonos/tract
Tiny, no-nonsense, self-contained, TensorFlow and ONNX inference
Implements graph-level optimization passes (constant folding, operator fusion, quantization-aware transformations) and supports symbolic dimensions for dynamic shapes, enabling efficient inference on resource-constrained embedded systems. Written in pure Rust with no non-Rust dependencies, it provides both a standalone CLI and language bindings (Python, C) for framework integration. It handles ONNX (85%+ operator coverage), TensorFlow 1.x, and NNEF formats with a production-focused subset philosophy: rarely used features such as tensor sequences are excluded in favor of maintainability and performance.
About tensorrt-infer
LdDl/tensorrt-infer
Rust wrapper for NVIDIA TensorRT inference.