pytorch/xla

Enabling PyTorch on XLA Devices (e.g. Google TPU)

Quality score: 57/100 (Established)

Compiles PyTorch models to XLA intermediate representation for execution on TPUs and other accelerators, using lazy tensor tracing to defer computation and optimize across device boundaries. Provides both eager and graph modes with distributed training support via SPMD (single program, multiple data) and DDP, plus integration with performance optimization techniques like automatic mixed precision and FSDP. Supports CPU and custom accelerators through PJRT plugins while maintaining standard PyTorch APIs.
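The lazy tensor tracing mentioned above can be illustrated with a toy sketch: operations record graph nodes instead of computing immediately, and the whole graph is evaluated in one step when a value is requested. This is an illustrative analogy only, not torch_xla's actual implementation; the `LazyTensor` class and `materialize` method are invented for this sketch.

```python
# Toy sketch of lazy tensor tracing: each operation records a graph node
# instead of computing a result; arithmetic is deferred until the value
# is explicitly requested. Illustrative only -- not torch_xla internals.

class LazyTensor:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # name of the recorded operation
        self.inputs = inputs  # upstream LazyTensor nodes
        self.value = value    # concrete data for leaf nodes only

    def __add__(self, other):
        return LazyTensor("add", (self, other))

    def __mul__(self, other):
        return LazyTensor("mul", (self, other))

    def materialize(self):
        # Walk the deferred graph and compute everything at once,
        # loosely analogous to a barrier/step boundary in torch_xla.
        if self.op == "leaf":
            return self.value
        lhs, rhs = (t.materialize() for t in self.inputs)
        return lhs + rhs if self.op == "add" else lhs * rhs

a = LazyTensor("leaf", value=2)
b = LazyTensor("leaf", value=3)
c = (a + b) * a           # no arithmetic happens here, only graph building
print(c.materialize())    # the whole deferred graph is evaluated -> 10
```

Deferring execution this way is what lets a compiler see the full graph and optimize across operation (and device) boundaries before running anything.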


Packages: none. Dependents: none.
Maintenance: 6/25
Adoption: 10/25
Maturity: 16/25
Community: 25/25


Stars: 2,756
Forks: 566
Language: C++
License: (not listed)
Last pushed: Dec 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pytorch/xla"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
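For callers who prefer Python over curl, here is a minimal sketch of hitting the same endpoint. The URL shape is taken from the curl example above; the response's JSON fields are not documented here, so the sketch prints the raw payload rather than assuming a schema, and the `quality_url` helper is invented for illustration.

```python
# Sketch of calling the quality API from Python. The endpoint path is
# copied from the curl example; no response schema is assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "pytorch", "xla")
print(url)

# Uncomment to perform the request (100/day without a key):
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```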