ML Frameworks: Gradient Descent Optimizers
Implementations and variants of optimization algorithms (SGD, Adam, RMSprop, etc.) for training neural networks. Does NOT include hyperparameter tuning tools, learning rate schedulers as standalone tools, or general black-box optimization frameworks.
67 gradient descent optimizer frameworks are tracked; 7 score above 50 (the established tier). The highest-rated is metaopt/torchopt at 61/100, with 625 stars and 7,268 monthly downloads.
Get the projects as JSON (the example below requests the first 20; raise `limit` to retrieve all 67):
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=ml-frameworks&subcategory=gradient-descent-optimizers&limit=20"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
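For scripted access, the endpoint above can be consumed and filtered locally. A minimal sketch, assuming the response is a JSON object whose `projects` entries carry `name`, `score`, and `tier` fields (hypothetical field names and sample values; check the actual payload before relying on them):

```python
import json
import urllib.request

API = ("https://pt-edge.onrender.com/api/v1/datasets/quality"
       "?domain=ml-frameworks&subcategory=gradient-descent-optimizers&limit=67")

def fetch(url: str) -> dict:
    """Download and decode the JSON payload (network access required)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def by_tier(payload: dict, tier: str) -> list[str]:
    """Return the names of projects in the given tier."""
    return [p["name"] for p in payload["projects"] if p["tier"] == tier]

# Offline demo with a hypothetical payload shaped like the assumed response;
# swap in `fetch(API)` to run against the live endpoint.
sample = {"projects": [
    {"name": "metaopt/torchopt", "score": 61, "tier": "Established"},
    {"name": "sail-sg/Adan", "score": 40, "tier": "Emerging"},
]}
print(by_tier(sample, "Established"))  # -> ['metaopt/torchopt']
```

The filter runs client-side, so the same helper works whatever subcategory the URL points at.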
| # | Framework | Description | Tier |
|---|---|---|---|
| 1 | metaopt/torchopt | TorchOpt is an efficient library for differentiable optimization built upon PyTorch. | Established |
| 2 | SimplexLab/TorchJD | Library for Jacobian descent with PyTorch. It enables the optimization of... | Established |
| 3 | opthub-org/pytorch-bsf | PyTorch implementation of Bezier simplex fitting | Established |
| 4 | pytorch/xla | Enabling PyTorch on XLA Devices (e.g. Google TPU) | Established |
| 5 | clovaai/AdamP | AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant... | Established |
| 6 | nschaetti/EchoTorch | A Python toolkit for Reservoir Computing and Echo State Network... | Established |
| 7 | gpauloski/kfac-pytorch | Distributed K-FAC preconditioner for PyTorch | Established |
| 8 | stanford-centaur/PyPantograph | A Machine-to-Machine Interaction System for Lean 4. | Emerging |
| 9 | lean-dojo/LeanDojo-v2 | LeanDojo-v2 is an end-to-end framework for training, evaluating, and... | Emerging |
| 10 | xiaoyuxie-vico/PyDimension | Dimensionless learning | Emerging |
| 11 | kach/gradient-descent-the-ultimate-optimizer | Code for our NeurIPS 2022 paper | Emerging |
| 12 | kozistr/pytorch_optimizer | Optimizer, LR scheduler, and loss function collections in PyTorch | Emerging |
| 13 | NoteDance/optimizers | This project implements optimizers for TensorFlow and Keras, which can be... | Emerging |
| 14 | Tony-Y/pytorch_warmup | Learning Rate Warmup in PyTorch | Emerging |
| 15 | nlesc-dirac/pytorch | Improved LBFGS and LBFGS-B optimizers in PyTorch. | Emerging |
| 16 | augustepoiroux/LeanInteract | LeanInteract: A Python Interface for Lean 4 | Emerging |
| 17 | OptimalFoundation/nadir | Nadir: Cutting-edge PyTorch optimizers for simplicity & composability! | Emerging |
| 18 | ildoonet/pytorch-gradual-warmup-lr | Gradually-Warmup Learning Rate Scheduler for PyTorch | Emerging |
| 19 | locuslab/optnet | OptNet: Differentiable Optimization as a Layer in Neural Networks | Emerging |
| 20 | JGalego/torchlib | Deep learning meets Lean4 | Emerging |
| 21 | facebookresearch/theseus | A library for differentiable nonlinear optimization | Emerging |
| 22 | Axect/pytorch-scheduler | A comprehensive, research-driven collection of learning rate schedulers for... | Emerging |
| 23 | evanatyourservice/kron_torch | An implementation of the PSGD Kron second-order optimizer for PyTorch | Emerging |
| 24 | j-w-yun/optimizer-visualization | Visualize TensorFlow's optimizers. | Emerging |
| 25 | sail-sg/Adan | Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models | Emerging |
| 26 | 100/Solid | A comprehensive gradient-free optimization framework written in Python | Emerging |
| 27 | kiligon/spotax | CLI tool for running JAX training on Google Cloud Spot TPUs with automatic... | Emerging |
| 28 | lixilinx/psgd_torch | PyTorch implementation of preconditioned stochastic gradient descent (Kron... | Emerging |
| 29 | muooon/EmoNavi | An emotion-driven optimizer that feels loss and navigates accordingly. | Emerging |
| 30 | warner-benjamin/optimi | Fast, Modern, and Low Precision PyTorch Optimizers | Emerging |
| 31 | ayaka14732/tpu-starter | Everything you want to know about Google Cloud TPU | Emerging |
| 32 | gugarosa/otorchmizer | Otorchmizer is a PyTorch-based library consisting of meta-heuristic... | Emerging |
| 33 | gugugu12138/AdaptoFlux | An algorithm that implements intelligence based on a Method pool (a... | Emerging |
| 34 | team-approx-bayes/ivon | IVON optimizer for neural networks based on variational learning. | Emerging |
| 35 | tianrui-qi/ADMM-for-SVM | Alternating Direction Method of Multipliers for Support Vector Machine | Experimental |
| 36 | thieu1995/GrafoRVFL | GrafoRVFL: A Gradient-Free Optimization Framework for Boosting Random Vector... | Experimental |
| 37 | nanowell/AdEMAMix-Optimizer-Pytorch | The AdEMAMix Optimizer: Better, Faster, Older. | Experimental |
| 38 | ltatzel/PyTorchHessianFree | PyTorch implementation of the Hessian-free optimizer | Experimental |
| 39 | IMvision12/AdEMAMix-Optimizer-Keras | A Keras 3 Implementation of the AdEMAMix Optimizer | Experimental |
| 40 | MoFHeka/xla-launcher | XLA Launcher is a high-performance, lightweight C++ library designed to... | Experimental |
| 41 | instadeepai/sebulba | The Sebulba architecture to scale reinforcement learning on Cloud TPUs in JAX | Experimental |
| 42 | SirRob1997/Crowded-Valley---Results | This repository contains the results for the paper: "Descending through a... | Experimental |
| 43 | wassname/viz_torch_optim | Videos of deep learning optimizers moving on 3D problem landscapes | Experimental |
| 44 | yinleung/FSGDM | [ICLR 2025] On the Performance Analysis of Momentum Method: A Frequency... | Experimental |
| 45 | OpenEnvision-Lab/ScalingOPT | ScalingOPT [LLM] | Experimental |
| 46 | e-sensing/torchopt | R implementation of advanced optimizers for torch | Experimental |
| 47 | bangyen/leansharp | Formal verification of Z-Score filtered Sharpness-Aware Minimization (SAM)... | Experimental |
| 48 | Brokttv/optimizers-from-scratch | Training models with different optimizers using NumPy only. Featuring SGD,... | Experimental |
| 49 | ChrisPinedaSanhueza/nested-learning-optimizer | Optimize TensorFlow models with the Nested Learning Optimizer for improved... | Experimental |
| 50 | thetechdude124/Adam-Optimization-From-Scratch | Implementing the ADAM optimizer from the ground up with PyTorch and... | Experimental |
| 51 | fabian-sp/MoMo | MoMo: Momentum Models for Adaptive Learning Rates | Experimental |
| 52 | Gunale0926/Grams | Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop) | Experimental |
| 53 | AroMorin/DNNOP | Deep Neural Network Optimization Platform with Gradient-based, Gradient-Free... | Experimental |
| 54 | aytugyuruk/optimizer-comparisions-training-with-limited-epochs | Optimizer Comparison Study: empirical analysis of SGD vs Adam performance... | Experimental |
| 55 | nfocardoso/thermopt | Drop-in PyTorch optimizer that beats AdamW with lower variance | Experimental |
| 56 | AhmedMostafa16/EXAdam | Official implementation of the EXAdam optimizer from the paper... | Experimental |
| 57 | adrienkegreisz/ano-optimizer | Lightweight and customizable optimizer compatible with PyTorch and TensorFlow. | Experimental |
| 58 | adrienkegreisz/ano-experiments | The source code of the ANO paper, a robust optimizer for deep learning in... | Experimental |
| 59 | Figirs/Neural-Flow-Optimizer | A Python-based library for optimizing gradient descent in deep neural networks. | Experimental |
| 60 | smithhenryd/Lazy-Training | Yale S&DS 432 final project studying lazy training dynamics for... | Experimental |
| 61 | shreyansh26/ML-Optimizers-JAX | Toy implementations of some popular ML optimizers using Python/JAX | Experimental |
| 62 | nisheethjaiswal/ROLLING-DOWN-A-CROWDED-VALLEY-OF-OPTIMIZERS-DEVELOPMENTS-FROM-SGD | Deep Learning Optimizers | Experimental |
| 63 | wyzjack/AdaM3 | [ICDM 2023] Momentum is All You Need for Data-Driven Adaptive Optimization | Experimental |
| 64 | tony-wade/optimizers | Extension optimizers for PyTorch. | Experimental |
| 65 | motasemwed/optimization-algorithms-comparison | A practical comparison of classical optimization algorithms (GD, SGD,... | Experimental |
| 66 | NekkittAY/MAMGD_Optimizer | Gradient optimization method using exponential damping and second-order... | Experimental |
| 67 | imehranasgari/DL_Optimizer_RMSpropNesterov_Custom | Custom RMSprop optimizer with Nesterov momentum in pure Python/NumPy. Built... | Experimental |