SandAI-org/MagiCompiler
A plug-and-play compiler that delivers free-lunch optimizations for both inference and training.
This tool helps machine learning engineers and researchers accelerate the training and deployment of large AI models, particularly Transformer-based architectures. By optimizing how a model uses computational resources, it takes existing model code and produces a significantly faster version, improving both training throughput and inference performance across a range of applications.
Use this if you are working with large AI models and need to cut training time or reduce inference latency, especially for multi-modality workloads or memory-constrained environments.
Not ideal for small models or non-Transformer architectures, or if you do not control the underlying Python and PyTorch environment.
Stars
234
Forks
17
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 28, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SandAI-org/MagiCompiler"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
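The same endpoint can also be called from Python. Below is a minimal sketch: the helper names are hypothetical, and the assumption that the service returns a JSON object is inferred from the curl example above, not from documented API behavior.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository (hypothetical helper)."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the payload; assumes a JSON response (requires network)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example: construct the URL for this repository.
print(quality_url("SandAI-org", "MagiCompiler"))
```

Unauthenticated calls count against the 100 requests/day limit, so batching lookups or caching responses locally is worthwhile.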
Higher-rated alternatives
ggml-org/ggml
Tensor library for machine learning
quic/efficient-transformers
This library empowers users to seamlessly port pretrained models and checkpoints on the...
ManuelSLemos/RabbitLLM
Run 70B+ LLMs on a single 4GB GPU — no quantization required.
alpa-projects/alpa
Training and serving large-scale neural networks with auto parallelization.
bytedance/lightseq
LightSeq: A High Performance Library for Sequence Processing and Generation