Gradient Descent Optimizers (ML Frameworks)

Implementations and variants of optimization algorithms (SGD, Adam, RMSprop, etc.) for training neural networks. Does NOT include hyperparameter tuning tools, standalone learning rate schedulers, or general black-box optimization frameworks.

There are 67 gradient descent optimizer frameworks tracked in this category. Seven score above 50 (the established tier). The highest-rated is metaopt/torchopt at 61/100, with 625 stars and 7,268 monthly downloads.
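
To make the scope concrete, the sketch below shows the kind of thing these projects implement: a plain SGD update (p = p - lr * grad) written as a custom torch.optim.Optimizer subclass. It is a minimal illustration only, not code from any of the listed frameworks.

```python
# Illustrative sketch: a bare-bones SGD optimizer (p = p - lr * grad).
# Not taken from any project listed below.
import torch
from torch.optim import Optimizer

class PlainSGD(Optimizer):
    def __init__(self, params, lr=1e-2):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            # Re-enable autograd so the closure can recompute the loss/gradients.
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    # Plain gradient descent step: no momentum, no adaptivity.
                    p.add_(p.grad, alpha=-group["lr"])
        return loss
```

It drops into the usual training loop exactly like torch.optim.SGD: construct it with model.parameters(), call loss.backward(), then opt.step() and opt.zero_grad().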

Get all 67 projects as JSON:

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=ml-frameworks&subcategory=gradient-descent-optimizers&limit=20"

The API is open to everyone: 100 requests per day with no key, or 1,000 per day with a free key.
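
For programmatic access, here is a minimal Python sketch of the same request. It assumes only that the endpoint returns a JSON body; the exact response schema and the mechanism for passing an API key are not documented here and are left out.

```python
# Minimal sketch: fetch the gradient-descent-optimizers listing as JSON.
# Assumption: the endpoint returns JSON; the response schema is not documented here.
import requests

resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/datasets/quality",
    params={
        "domain": "ml-frameworks",
        "subcategory": "gradient-descent-optimizers",
        "limit": 20,  # mirrors the curl example above
    },
    timeout=30,
)
resp.raise_for_status()
projects = resp.json()
print(projects)
```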

| # | Framework | Description | Score | Tier |
|---|-----------|-------------|-------|------|
| 1 | metaopt/torchopt | TorchOpt is an efficient library for differentiable optimization built upon PyTorch. | 61 | Established |
| 2 | SimplexLab/TorchJD | Library for Jacobian descent with PyTorch. It enables the optimization of... | 58 | Established |
| 3 | opthub-org/pytorch-bsf | PyTorch implementation of Bezier simplex fitting | 58 | Established |
| 4 | pytorch/xla | Enabling PyTorch on XLA Devices (e.g. Google TPU) | 57 | Established |
| 5 | clovaai/AdamP | AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant... | 55 | Established |
| 6 | nschaetti/EchoTorch | A Python toolkit for Reservoir Computing and Echo State Network... | 53 | Established |
| 7 | gpauloski/kfac-pytorch | Distributed K-FAC preconditioner for PyTorch | 51 | Established |
| 8 | stanford-centaur/PyPantograph | A Machine-to-Machine Interaction System for Lean 4. | 49 | Emerging |
| 9 | lean-dojo/LeanDojo-v2 | LeanDojo-v2 is an end-to-end framework for training, evaluating, and... | 47 | Emerging |
| 10 | xiaoyuxie-vico/PyDimension | Dimensionless learning | 47 | Emerging |
| 11 | kach/gradient-descent-the-ultimate-optimizer | Code for our NeurIPS 2022 paper | 45 | Emerging |
| 12 | kozistr/pytorch_optimizer | optimizer & lr scheduler & loss function collections in PyTorch | 45 | Emerging |
| 13 | NoteDance/optimizers | This project implements optimizers for TensorFlow and Keras, which can be... | 43 | Emerging |
| 14 | Tony-Y/pytorch_warmup | Learning Rate Warmup in PyTorch | 43 | Emerging |
| 15 | nlesc-dirac/pytorch | Improved LBFGS and LBFGS-B optimizers in PyTorch. | 40 | Emerging |
| 16 | augustepoiroux/LeanInteract | LeanInteract: A Python Interface for Lean 4 | 40 | Emerging |
| 17 | OptimalFoundation/nadir | Nadir: Cutting-edge PyTorch optimizers for simplicity & composability! | 40 | Emerging |
| 18 | ildoonet/pytorch-gradual-warmup-lr | Gradually-Warmup Learning Rate Scheduler for PyTorch | 40 | Emerging |
| 19 | locuslab/optnet | OptNet: Differentiable Optimization as a Layer in Neural Networks | 39 | Emerging |
| 20 | JGalego/torchlib | Deep learning meets Lean4 | 39 | Emerging |
| 21 | facebookresearch/theseus | A library for differentiable nonlinear optimization | 38 | Emerging |
| 22 | Axect/pytorch-scheduler | A comprehensive, research-driven collection of learning rate schedulers for... | 38 | Emerging |
| 23 | evanatyourservice/kron_torch | An implementation of PSGD Kron second-order optimizer for PyTorch | 38 | Emerging |
| 24 | j-w-yun/optimizer-visualization | Visualize Tensorflow's optimizers. | 38 | Emerging |
| 25 | sail-sg/Adan | Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models | 38 | Emerging |
| 26 | 100/Solid | A comprehensive gradient-free optimization framework written in Python | 36 | Emerging |
| 27 | kiligon/spotax | CLI tool for running JAX training on Google Cloud Spot TPUs with automatic... | 35 | Emerging |
| 28 | lixilinx/psgd_torch | Pytorch implementation of preconditioned stochastic gradient descent (Kron... | 35 | Emerging |
| 29 | muooon/EmoNavi | An emotion-driven optimizer that feels loss and navigates accordingly. | 34 | Emerging |
| 30 | warner-benjamin/optimi | Fast, Modern, and Low Precision PyTorch Optimizers | 32 | Emerging |
| 31 | ayaka14732/tpu-starter | Everything you want to know about Google Cloud TPU | 32 | Emerging |
| 32 | gugarosa/otorchmizer | Otorchmizer is a PyTorch-based library consisting of meta-heuristic... | 32 | Emerging |
| 33 | gugugu12138/AdaptoFlux | An algorithm that implements intelligence based on a Method pool (a... | 30 | Emerging |
| 34 | team-approx-bayes/ivon | IVON optimizer for neural networks based on variational learning. | 30 | Emerging |
| 35 | tianrui-qi/ADMM-for-SVM | Alternating Direction Method of Multipliers for Support Vector Machine | 29 | Experimental |
| 36 | thieu1995/GrafoRVFL | GrafoRVFL: A Gradient-Free Optimization Framework for Boosting Random Vector... | 29 | Experimental |
| 37 | nanowell/AdEMAMix-Optimizer-Pytorch | The AdEMAMix Optimizer: Better, Faster, Older. | 29 | Experimental |
| 38 | ltatzel/PyTorchHessianFree | PyTorch implementation of the Hessian-free optimizer | 28 | Experimental |
| 39 | IMvision12/AdEMAMix-Optimizer-Keras | A Keras 3 Implementation of AdEMAMix Optimizer | 27 | Experimental |
| 40 | MoFHeka/xla-launcher | XLA Launcher is a high-performance, lightweight C++ library designed to... | 26 | Experimental |
| 41 | instadeepai/sebulba | The Sebulba architecture to scale reinforcement learning on Cloud TPUs in JAX | 26 | Experimental |
| 42 | SirRob1997/Crowded-Valley---Results | This repository contains the results for the paper: "Descending through a... | 25 | Experimental |
| 43 | wassname/viz_torch_optim | Videos of deep learning optimizers moving on 3D problem-landscapes | 24 | Experimental |
| 44 | yinleung/FSGDM | [ICLR 2025] On the Performance Analysis of Momentum Method: A Frequency... | 23 | Experimental |
| 45 | OpenEnvision-Lab/ScalingOPT | ScalingOPT [LLM] | 23 | Experimental |
| 46 | e-sensing/torchopt | R implementation of advanced optimizers for torch | 23 | Experimental |
| 47 | bangyen/leansharp | Formal verification of Z-Score filtered Sharpness-Aware Minimization (SAM)... | 22 | Experimental |
| 48 | Brokttv/optimizers-from-scratch | training models with different optimizers using NumPy only. Featuring SGD,... | 22 | Experimental |
| 49 | ChrisPinedaSanhueza/nested-learning-optimizer | Optimize TensorFlow models with the Nested Learning Optimizer for improved... | 22 | Experimental |
| 50 | thetechdude124/Adam-Optimization-From-Scratch | Implementing the ADAM optimizer from the ground up with PyTorch and... | 21 | Experimental |
| 51 | fabian-sp/MoMo | MoMo: Momentum Models for Adaptive Learning Rates | 20 | Experimental |
| 52 | Gunale0926/Grams | Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop) | 20 | Experimental |
| 53 | AroMorin/DNNOP | Deep Neural Network Optimization Platform with Gradient-based, Gradient-Free... | 20 | Experimental |
| 54 | aytugyuruk/optimizer-comparisions-training-with-limited-epochs | Optimizer Comparison Study - Empirical analysis of SGD vs Adam performance... | 19 | Experimental |
| 55 | nfocardoso/thermopt | Drop-in PyTorch optimizer that beats AdamW with lower variance | 19 | Experimental |
| 56 | AhmedMostafa16/EXAdam | Official implementation of EXAdam optimizer from the paper... | 18 | Experimental |
| 57 | adrienkegreisz/ano-optimizer | Lightweight and customizable optimizer compatible with PyTorch and TensorFlow. | 16 | Experimental |
| 58 | adrienkegreisz/ano-experiments | The source code of the ANO's paper - a robust optimizer for deep learning in... | 15 | Experimental |
| 59 | Figirs/Neural-Flow-Optimizer | A Python-based library for optimizing gradient descent in deep neural networks. | 14 | Experimental |
| 60 | smithhenryd/Lazy-Training | Yale S&DS 432 final project studying lazy training dynamics for... | 14 | Experimental |
| 61 | shreyansh26/ML-Optimizers-JAX | Toy implementations of some popular ML optimizers using Python/JAX | 14 | Experimental |
| 62 | nisheethjaiswal/ROLLING-DOWN-A-CROWDED-VALLEY-OF-OPTIMIZERS-DEVELOPMENTS-FROM-SGD | Deep Learning Optimizers | 14 | Experimental |
| 63 | wyzjack/AdaM3 | [ICDM 2023] Momentum is All You Need for Data-Driven Adaptive Optimization | 12 | Experimental |
| 64 | tony-wade/optimizers | Extension optimizers for the PyTorch. | 12 | Experimental |
| 65 | motasemwed/optimization-algorithms-comparison | A practical comparison of classical optimization algorithms (GD, SGD,... | 11 | Experimental |
| 66 | NekkittAY/MAMGD_Optimizer | Gradient optimization method using exponential damping and second-order... | 11 | Experimental |
| 67 | imehranasgari/DL_Optimizer_RMSpropNesterov_Custom | Custom RMSprop optimizer with Nesterov momentum in pure Python/NumPy. Built... | 11 | Experimental |