SandAI-org/MagiCompiler
A plug-and-play compiler that delivers free-lunch optimizations for both inference and training.
Leverages whole-graph compilation for inference and FSDP-aware layer-wise compilation for distributed training, capturing optimization opportunities beyond traditional operator fusion. Built on `torch.compile`, it orchestrates system-level dataflow management including selective offloading, prefetching, and automatic activation recomputation. Integrates seamlessly with PyTorch 2.9+ and Transformer-like architectures through minimal decorator additions.
Stars: 234
Forks: 17
Language: Python
License: Apache-2.0
Last pushed: Mar 28, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/SandAI-org/MagiCompiler"
Open to everyone: 100 requests/day with no key needed, or 1,000/day with a free key.
Higher-rated alternatives
ggml-org/ggml
Tensor library for machine learning
onnx/ir-py
Efficient in-memory representation for ONNX, in Python
bytedance/lightseq
LightSeq: A High Performance Library for Sequence Processing and Generation
R-D-BioTech-Alaska/Qelm
Qelm - Quantum Enhanced Language Model
kekzl/imp
High-performance LLM inference engine in C++/CUDA for NVIDIA Blackwell GPUs (RTX 5090)