apache/tvm

Open Machine Learning Compiler Framework

Score: 76 / 100 · Verified

This framework helps machine learning engineers and researchers take a trained model and optimize it to run efficiently on a wide range of hardware, from powerful GPUs to tiny embedded devices. It takes an existing model definition and outputs a highly optimized, ready-to-deploy module that performs inference quickly with minimal resource use. It is aimed at professionals building and deploying machine learning applications who need fine-grained control over performance.

13,183 stars. Actively maintained with 67 commits in the last 30 days.

Use this if you need to deploy machine learning models to a wide range of hardware, from cloud servers to edge devices, and require maximum performance and efficiency from your trained models.

Not ideal if you are a data scientist focused primarily on model training and experimentation who does not need to optimize models for specific hardware deployment.

machine-learning-deployment model-optimization edge-ai compiler-engineering AI-infrastructure
No Package · No Dependents
Maintenance 25 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 13,183
Forks: 3,812
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 67

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/apache/tvm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
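For programmatic use, the same endpoint can be called from Python. This is a minimal sketch: the URL structure comes from the curl example above, but the JSON field names (`maintenance`, `adoption`, `maturity`, `community`) are assumptions inferred from the sub-scores shown on this page, not a documented response schema.

```python
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository.

    Matches the curl example: /api/v1/quality/<category>/<owner>/<repo>.
    """
    return f"{API_BASE}/{category}/{owner}/{repo}"

def total_score(payload: dict) -> int:
    """Sum the four sub-scores into the overall 0-100 score.

    Field names are hypothetical; adjust to the actual API response.
    """
    return sum(payload[k] for k in ("maintenance", "adoption", "maturity", "community"))

# Hypothetical payload mirroring the sub-scores shown on this page.
# A real call would be: urllib.request.urlopen(quality_url(...)).read()
sample = {"maintenance": 25, "adoption": 10, "maturity": 16, "community": 25}
print(quality_url("ml-frameworks", "apache", "tvm"))
print(total_score(sample))  # 76, matching the overall score above
```

The four sub-scores (each out of 25) sum to the 76/100 overall score, which suggests the total is a simple additive composite.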
