apache/tvm
Open Machine Learning Compiler Framework
This framework helps machine learning engineers and researchers take a trained model and optimize it to run efficiently on a wide range of hardware, from powerful GPUs to tiny embedded devices. It takes an existing model definition and produces a highly optimized, ready-to-deploy module that performs inference quickly with minimal resource use. It is aimed at professionals building and deploying machine learning applications who need fine-grained control over performance.
13,183 stars. Actively maintained with 67 commits in the last 30 days.
Use this if you need to deploy machine learning models to a wide range of hardware, from cloud servers to edge devices, and require maximum performance and efficiency from your trained models.
Not ideal if you are a data scientist primarily focused on model training and experimentation and do not need to optimize models for specific hardware deployment.
Stars: 13,183
Forks: 3,812
Language: Python
License: Apache-2.0
Category: ml-frameworks
Last pushed: Mar 13, 2026
Commits (30d): 67
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/apache/tvm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
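For programmatic access, the endpoint from the curl command above can also be called from Python. A minimal sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the shape of the returned JSON is not documented here, so it is handled as an opaque dict:

```python
import json
from urllib.request import Request, urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL shown in the curl example above."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch quality data for a repo.

    Anonymous access is rate-limited to 100 requests/day;
    a free API key raises that to 1,000/day.
    """
    req = Request(
        quality_url(category, owner, repo),
        headers={"Accept": "application/json"},
    )
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("ml-frameworks", "apache", "tvm"))
```

An API key, once obtained, would typically be passed as a request header, but the header name is not specified on this page, so it is omitted here.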
Related frameworks
OpenMined/TenSEAL: A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine: IREE's PyTorch Frontend, based on Torch Dynamo.
uxlfoundation/oneDNN: oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn: ncnn is a high-performance neural network inference framework optimized for the mobile platform
VeriSilicon/TIM-VX: VeriSilicon Tensor Interface Module