pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)
Compiles PyTorch models to XLA intermediate representation for execution on TPUs and other accelerators, using lazy tensor tracing to defer computation and optimize across device boundaries. Provides both eager and graph modes with distributed training support via SPMD (single program, multiple data) and DDP, plus integration with performance optimization techniques like automatic mixed precision and FSDP. Supports CPU and custom accelerators through PJRT plugins while maintaining standard PyTorch APIs.
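The lazy tensor tracing mentioned above records operations into a graph and defers actual computation until a result is needed, which lets the compiler optimize the whole graph at once. As a conceptual sketch in plain Python (this is an illustration of the idea, not torch_xla's actual API or implementation):

```python
# Conceptual sketch of lazy tracing: operations build a graph instead of
# executing immediately; evaluation happens only on materialize(), the point
# where a real system like torch_xla would hand the graph to the XLA compiler.

class LazyTensor:
    def __init__(self, op, args):
        self.op = op          # operation name: "const", "add", or "mul"
        self.args = args      # operands (LazyTensor nodes, or a raw value for "const")
        self._cached = None   # materialized result, filled in on first evaluation

    @staticmethod
    def const(value):
        return LazyTensor("const", [value])

    def __add__(self, other):
        return LazyTensor("add", [self, other])

    def __mul__(self, other):
        return LazyTensor("mul", [self, other])

    def materialize(self):
        # Walk the recorded graph and evaluate it. A compiler-backed runtime
        # would instead fuse and optimize the graph before executing.
        if self._cached is None:
            if self.op == "const":
                self._cached = self.args[0]
            else:
                vals = [a.materialize() for a in self.args]
                self._cached = vals[0] + vals[1] if self.op == "add" else vals[0] * vals[1]
        return self._cached

# Building the expression records a graph; nothing is computed yet.
x = LazyTensor.const(3)
y = (x + LazyTensor.const(4)) * LazyTensor.const(2)
print(y.materialize())  # → 14
```

The same principle underlies torch_xla's deferred execution: because the graph is visible before anything runs, fusion and cross-operation optimization become possible.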
Stars: 2,756
Forks: 566
Language: C++
License: —
Category: —
Last pushed: Dec 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pytorch/xla"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
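The same endpoint can be queried from Python using only the standard library. A minimal sketch; the response schema is not documented here, so the fetch helper just decodes JSON rather than assuming field names:

```python
# Sketch: build and fetch the quality-API URL shown above with the stdlib only.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category, owner, name):
    """Assemble the per-repository endpoint URL."""
    return f"{BASE}/{category}/{owner}/{name}"

def fetch_repo(category, owner, name, timeout=10):
    """Fetch quality data for one repository; returns the decoded JSON.
    (Not called here to keep the example offline-safe.)"""
    with urllib.request.urlopen(build_url(category, owner, name), timeout=timeout) as resp:
        return json.load(resp)

url = build_url("ml-frameworks", "pytorch", "xla")
print(url)  # → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pytorch/xla
```

Requests beyond the free daily quota would need the API key mentioned above, presumably passed as a header or query parameter; check the service's documentation for the exact mechanism.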
Related frameworks
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
SimplexLab/TorchJD
Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with...
clovaai/AdamP
AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch