Literature survey of convex optimizers and optimisation methods for deep-learning; made especially for optimisation researchers with ❤️

✨ Awesome Optimizers 📉


This repository is conceived to aid optimization researchers in literature reviews by offering an up-to-date list of papers and corresponding summaries.

If this repository has been useful to you in your research, please cite it using the "Cite this repository" option available on GitHub. This repository would not have been possible without its open-source contributors. Thanks! 💖

Legend

| Symbol | Meaning | Count |
| ------ | ------- | ----- |
| 📄     | Paper   | 20    |
| 📤     | Summary | 3     |
| 💻     | Code    | 0     |

Survey Papers

  1. An overview of gradient descent optimization algorithms Sebastian Ruder; 2016

  2. Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers Robin M. Schmidt, Frank Schneider, Philipp Hennig; 2020

First-order Optimizers

  1. Nesterov Accelerated Gradient (momentum) 📤 💻 Yurii Nesterov; 1983

  2. KOALA: A Kalman Optimization Algorithm with Loss Adaptivity 📤 💻 Aram Davtyan, Sepehr Sameni, Llukman Cerkezi, Givi Meishvili, Adam Bielski, Paolo Favaro; 2021
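
As a quick reference for the first entry, here is a minimal NumPy sketch of the Nesterov look-ahead update on a toy quadratic (the hyperparameter values are illustrative, not taken from the paper):

```python
import numpy as np

def nag_step(w, v, grad_fn, lr=0.1, mu=0.9):
    # Evaluate the gradient at the look-ahead point w + mu * v;
    # this anticipatory evaluation is what distinguishes NAG
    # from classical (heavy-ball) momentum.
    g = grad_fn(w + mu * v)
    v = mu * v - lr * g
    return w + v, v

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(100):
    w, v = nag_step(w, v, grad_fn=lambda x: x)
```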

Momentum based Optimizers

  1. On the Momentum Term in Gradient Descent Learning Algorithms 📤 💻 Ning Qian; 1999

  2. Symbolic Discovery of Optimization Algorithms 📤 💻 Xiangning Chen, Chen Liang, Da Huang; 2023

  3. Demon: Improved Neural Network Training with Momentum Decay John Chen, Cameron Wolfe, Zhao Li, Anastasios Kyrillidis; 2021
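
For context, the classical heavy-ball update analyzed by Qian (1999) can be sketched in a few lines of NumPy; the hyperparameters below are illustrative only:

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.1, mu=0.9):
    # Heavy-ball update: the velocity v is a decaying accumulation
    # of past gradients, which damps oscillations along steep
    # directions and accelerates progress along shallow ones.
    v = mu * v - lr * grad
    return w + v, v

# Toy usage: minimize f(w) = 0.5 * ||w||^2 (gradient: w).
w, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(200):
    w, v = momentum_step(w, v, grad=w)
```

Demon (entry 3) decays the momentum coefficient `mu` over the course of training rather than keeping it fixed.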

Adaptive Optimizers

  1. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization 📤 💻 John Duchi, Elad Hazan, Yoram Singer; 2011

  2. ADADELTA: An Adaptive Learning Rate Method 📤 💻 Matthew D. Zeiler; 2012

  3. RMSProp 📤 💻 Geoffrey Hinton; 2013
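
The entries above share a common pattern: scale each coordinate's step by a statistic of its past squared gradients. A minimal RMSProp sketch (illustrative hyperparameters, toy objective):

```python
import numpy as np

def rmsprop_step(w, s, grad, lr=0.1, rho=0.9, eps=1e-8):
    # Exponential moving average of squared gradients; AdaGrad instead
    # keeps an unweighted running sum, and Adadelta adds a second
    # average over the updates to remove the global learning rate.
    s = rho * s + (1 - rho) * grad**2
    w = w - lr * grad / (np.sqrt(s) + eps)
    return w, s

# Toy usage: minimize f(w) = 0.5 * w^2 starting from w = 5.
w, s = 5.0, 0.0
for _ in range(200):
    w, s = rmsprop_step(w, s, grad=w)
```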

Adam Family of Optimizers

  1. Adam: A Method for Stochastic Optimization 📤 💻 Diederik P. Kingma, Jimmy Ba; 2014

  2. AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights 📤 💻 Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han; 2020

  3. On the Variance of the Adaptive Learning Rate and Beyond 📤 💻 Liyuan Liu, Haoming Jiang, Pengcheng He; 2019

  4. AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar Tatikonda, Nicha Dvornek, Xenophon Papademetris, James S. Duncan; 2020

  5. Momentum Centering and Asynchronous Update for Adaptive Gradient Methods Juntang Zhuang, Yifan Ding, Tommy Tang, Nicha Dvornek, Sekhar Tatikonda, James S. Duncan; 2021
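
The variants above all modify pieces of the same base update. A minimal sketch of Adam with bias correction, on a toy quadratic (hyperparameters illustrative):

```python
import numpy as np

def adam_step(w, m, v, grad, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # First- and second-moment EMAs with bias correction (Kingma & Ba).
    # The variants listed above modify exactly these pieces: AdaBelief
    # tracks (grad - m)**2 instead of grad**2, and RAdam rectifies the
    # adaptive term's variance early in training.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(w) = 0.5 * w^2 starting from w = 5.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    w, m, v = adam_step(w, m, v, grad=w, t=t)
```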

Second-order Optimizers

  1. Shampoo: Preconditioned Stochastic Tensor Optimization 📤 💻 Vineet Gupta, Tomer Koren, Yoram Singer; 2018
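
A heavily simplified sketch of Shampoo's update for a single matrix-shaped parameter, assuming full eigendecompositions are affordable (real implementations amortize the inverse-root computation across steps):

```python
import numpy as np

def inv_root(M, p, eps=1e-6):
    # M^{-1/p} for a symmetric PSD matrix M, via eigendecomposition.
    d, U = np.linalg.eigh(M)
    return U @ np.diag(np.maximum(d, eps) ** (-1.0 / p)) @ U.T

def shampoo_step(W, L, R, G, lr=1.0):
    # Accumulate left/right second-moment statistics of the gradient
    # and precondition with their inverse fourth roots on each side.
    L = L + G @ G.T
    R = R + G.T @ G
    W = W - lr * inv_root(L, 4) @ G @ inv_root(R, 4)
    return W, L, R

# Toy usage: minimize f(W) = 0.5 * ||W||_F^2 (gradient: W).
W = np.array([[3.0, 0.0], [0.0, -2.0]])
L = 1e-4 * np.eye(2)
R = 1e-4 * np.eye(2)
for _ in range(100):
    W, L, R = shampoo_step(W, L, R, G=W)
```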

Other Optimisation-Related Research

General Improvements

  1. Gradient Centralization: A New Optimization Technique for Deep Neural Networks 📤 💻 Hongwei Yong, Jianqiang Huang, Xiansheng Hua, Lei Zhang; 2020
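
Gradient Centralization itself is a one-line transform applied to the gradient before the optimizer step; a minimal sketch (the shape below is an illustrative conv-style gradient, not from the paper):

```python
import numpy as np

def centralize_gradient(grad):
    # Subtract the mean over all axes except the first (output-channel)
    # axis, so each filter's gradient has zero mean; the paper applies
    # this only to weight tensors with more than one dimension.
    axes = tuple(range(1, grad.ndim))
    return grad - grad.mean(axis=axes, keepdims=True)

# Toy usage: a conv-style gradient of shape (out_channels, in, k, k).
g = np.random.default_rng(0).normal(size=(8, 4, 3, 3))
gc = centralize_gradient(g)
```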

Optimizer Analysis and Meta-research

  1. On Empirical Comparisons of Optimizers for Deep Learning 📤 Dami Choi, Christopher J. Shallue, Zachary Nado, Jaehoon Lee, Chris J. Maddison, George E. Dahl; 2019

  2. Adam Can Converge Without Any Modification on Update Rules 📤 Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, Zhi-Quan Luo; 2022

Hyperparameter Tuning

  1. Gradient Descent: The Ultimate Optimizer 📤 💻 Kartik Chandra, Audrey Xie, Jonathan Ragan-Kelley, Erik Meijer; 2019
