ildoonet/pytorch-gradual-warmup-lr
Gradually-Warmup Learning Rate Scheduler for PyTorch
Implements a composable learning rate scheduler that linearly increases the learning rate over a warmup phase and then delegates to a secondary scheduler (e.g., StepLR, ExponentialLR, CosineAnnealingLR) for the main training phase. It chains with PyTorch's native `torch.optim.lr_scheduler` interface through an `after_scheduler` parameter, enabling flexible scheduler composition without modifying optimizer code. Based on the large-batch SGD warmup strategy of Goyal et al., the warmup phase prevents training instability during the initial iterations when aggressive learning rates or large batch sizes are used.
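The warmup-then-delegate idea can be sketched in plain Python, independent of the library's actual classes. This is a minimal sketch under stated assumptions: the class name `GradualWarmupSketch`, the `lr_at` method, and the toy cosine `after_scheduler` callable are illustrative, not the repository's API (which subclasses PyTorch's scheduler interface).

```python
import math

class GradualWarmupSketch:
    """Toy sketch of gradual warmup: linearly ramp the learning rate
    up to base_lr over `warmup_epochs`, then delegate to
    `after_scheduler`, a callable mapping a re-based epoch -> LR.
    Names here are illustrative assumptions, not the library's API."""

    def __init__(self, base_lr, warmup_epochs, after_scheduler):
        self.base_lr = base_lr
        self.warmup_epochs = warmup_epochs
        self.after_scheduler = after_scheduler

    def lr_at(self, epoch):
        if epoch < self.warmup_epochs:
            # Linear warmup: base_lr/warmup_epochs at epoch 0,
            # reaching base_lr at the last warmup epoch.
            return self.base_lr * (epoch + 1) / self.warmup_epochs
        # Hand off to the secondary schedule, re-based so it
        # starts from its own epoch 0 after warmup ends.
        return self.after_scheduler(epoch - self.warmup_epochs)

# A cosine-annealing secondary schedule over 90 epochs (illustrative).
def cosine(epoch, base_lr=0.1, t_max=90):
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / t_max))

sched = GradualWarmupSketch(base_lr=0.1, warmup_epochs=5,
                            after_scheduler=cosine)
```

With this sketch, `sched.lr_at(0)` returns 0.02, the rate climbs to 0.1 by epoch 4, and from epoch 5 onward the cosine schedule takes over, starting at its full base rate.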
991 stars. No commits in the last 6 months.
Stars: 991
Forks: 126
Language: Python
License: MIT
Category:
Last pushed: Oct 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ildoonet/pytorch-gradual-warmup-lr"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
SimplexLab/TorchJD
Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with...
clovaai/AdamP
AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch