pytorch/executorch
On-device AI across mobile, embedded and edge for PyTorch
Compiles PyTorch models ahead of time (AOT) into a lightweight `.pte` binary format, with a runtime as small as 50 KB that spans microcontrollers to smartphones. Supports 12+ hardware accelerators (Apple Neural Engine, Qualcomm QNN, ARM, Vulkan) with automatic delegation and CPU fallback, so a single export can target multiple devices. Integrates directly with PyTorch's `torch.export()` API and ships specialized runners for LLM, vision, and multimodal models on iOS, Android, and embedded systems, with no format conversion or vendor lock-in.
4,374 stars and 133,251 monthly downloads. Actively maintained with 397 commits in the last 30 days. Available on PyPI.
Stars: 4,374
Forks: 870
Language: Python
License: —
Category: —
Last pushed: Mar 13, 2026
Monthly downloads: 133,251
Commits (30d): 397
Dependencies: 25
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pytorch/executorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
catalyst-team/catalyst
Accelerated deep learning R&D
mit-han-lab/mcunet
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2:...
z-mahmud22/Dlib_Windows_Python3.x
Dlib compiled binaries (.whl) for Python 3.7-3.14 and Windows x64
gigwegbe/tinyml-papers-and-projects
This is a list of interesting papers and projects about TinyML.
ai-techsystems/deepC
Vendor-independent TinyML deep learning library, compiler and inference framework for microcomputers...