lucasbrianpiveta/Hetu-DiT
🚀 Optimize your Diffusion Transformers with Hetu-DiT, a dynamic parallel serving system that reduces latency and enhances GPU utilization.
Stars: 1
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lucasbrianpiveta/Hetu-DiT"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
deepspeedai/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference...
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
helmholtz-analytics/heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
bsc-wdc/dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.
xorbitsai/xorbits
Scalable Python DS & ML, in an API compatible & lightning fast way.