Vitgracer/PyTorch2Cpp-Inference
Tutorial that shows how to train a PyTorch model in Python and run it in C++
This project walks through training a machine-learning model in Python with PyTorch and then deploying it in C++ for faster execution. You feed it a trained PyTorch model and an image, and it outputs the predicted digit. This is useful for developers who need to integrate PyTorch models into C++ applications for performance or platform requirements.
No commits in the last 6 months.
Use this if you are a developer looking for a straightforward example to understand the workflow of training a PyTorch model in Python and performing inference with it in C++.
Not ideal if you need a production-ready solution or are not comfortable with C++ build tooling such as CMake and Visual Studio.
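The Python-to-C++ workflow this tutorial covers is commonly done via TorchScript: trace the trained model in Python, save it, and load the resulting file from C++ with LibTorch. The sketch below is a minimal, hypothetical version under that assumption; the model architecture and file names are illustrative, not the repo's actual code.

```python
import torch
import torch.nn as nn

class DigitNet(nn.Module):
    """Minimal MNIST-style digit classifier, for illustration only."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.fc(x.flatten(1))

model = DigitNet().eval()

# Trace the model with a dummy input to produce a TorchScript module.
traced = torch.jit.trace(model, torch.randn(1, 1, 28, 28))
traced.save("digit_net.pt")

# On the C++ side the saved file can then be loaded with LibTorch:
#   torch::jit::script::Module m = torch::jit::load("digit_net.pt");
#   auto out = m.forward({input_tensor}).toTensor();
```

Tracing works here because the forward pass has no data-dependent control flow; models with branching would need `torch.jit.script` instead.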
Stars
7
Forks
—
Language
Python
License
—
Category
—
Last pushed
Jul 25, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Vitgracer/PyTorch2Cpp-Inference"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
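The same endpoint can be queried from Python using only the standard library. The URL below is copied from the curl example; the shape of the JSON response is not documented here, so this hypothetical helper simply returns the parsed payload.

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/Vitgracer/PyTorch2Cpp-Inference")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the repo-quality record and return it as a dict."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

With a free key, a higher rate limit presumably applies; how the key is passed (header or query parameter) is not specified on this page.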
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX