Youhe-Jiang/IJCAI2023-OptimalShardedDataParallel
[IJCAI2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you are interested, please visit/star/fork https://github.com/Youhe-Jiang/OptimalShardedDataParallel
No commits in the last 6 months.
Stars: 52
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: May 31, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Youhe-Jiang/IJCAI2023-OptimalShardedDataParallel"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
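The same endpoint can be called from Python instead of curl. This is a minimal sketch using only the standard library; the URL comes from the command above, but the shape of the JSON response (its field names and types) is an assumption here, not a documented schema.

```python
# Sketch of calling the catalog API from Python (keyless free tier).
# Only the base URL is taken from the curl example; the JSON response
# structure is NOT documented, so callers should inspect it themselves.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def build_url(owner: str, repo: str) -> str:
    """Construct the per-repository API URL from owner and repo name."""
    return f"{BASE_URL}/{owner}/{repo}"


def fetch_repo_stats(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for the repository on this page.
    print(build_url("Youhe-Jiang", "IJCAI2023-OptimalShardedDataParallel"))
```

With a free API key (1,000 requests/day), the key would presumably be passed as a header or query parameter; the source does not say which, so that part is left out.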
Higher-rated alternatives
deepspeedai/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference...
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
helmholtz-analytics/heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
bsc-wdc/dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.
google/sedpack
Sedpack - Scalable and efficient data packing