alpa-projects/alpa

Training and serving large-scale neural networks with auto parallelization.

Archived. Overall score: 47 / 100 (Emerging).
Combines JAX, XLA, and Ray to automatically apply data, operator, and pipeline parallelism across distributed clusters without manual sharding logic. Compiles single-device training code via a `@parallelize` decorator that generates optimized parallelization strategies, achieving near-linear scaling on billion-parameter models. Note: the core algorithms have been integrated into XLA's auto-sharding; the repository is archived and kept as a research artifact.
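For context, the single-device workflow the description refers to looks roughly like the sketch below. It follows Alpa's published quickstart; `alpa.init(cluster="ray")` and the toy linear model are illustrative assumptions, not code taken from this repository.

```python
# Sketch of Alpa's decorator-based auto-parallelization (based on the public
# quickstart; the model and data here are illustrative placeholders).
import alpa
import jax
import jax.numpy as jnp

alpa.init(cluster="ray")  # attach Alpa to the Ray cluster it will parallelize over

@alpa.parallelize
def train_step(params, batch):
    # Plain single-device JAX code; Alpa compiles it and picks the
    # data/operator/pipeline parallelization strategy automatically.
    def loss_fn(p):
        pred = jnp.dot(batch["x"], p["w"]) + p["b"]
        return jnp.mean((pred - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```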

3,188 stars. No commits in the last 6 months.

Flags: Archived · Stale (6 months) · No package · No dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 3,188
Forks: 361
Language: Python
License: Apache-2.0
Last pushed: Dec 09, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/alpa-projects/alpa"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
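The same data can be fetched from Python. The sketch below assumes the endpoint returns JSON and that the unauthenticated rate limit applies; it is not an official client.

```python
# Minimal sketch of querying the quality endpoint with requests
# (assumes a JSON response body; no API key needed for low-volume use).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/alpa-projects/alpa"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())
```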