This repository is conceived to aid optimization researchers in their literature reviews by offering an up-to-date list of papers and corresponding summaries.
If this repository has been useful to you in your research, please cite it using the "Cite this repository" option available on GitHub. This repository would not have been possible without these open-source contributors. Thanks! 💖
- Legend
- Survey Papers
- First-order Optimizers
- Second-order Optimizers
- Other Optimization-related Research
| Symbol | Meaning |
| --- | --- |
| 📤 | Summary |
| 💻 | Code |
- An overview of gradient descent optimization algorithms Sebastian Ruder; 2016
- Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers Robin M. Schmidt, Frank Schneider, Philipp Hennig; 2020
- Nesterov Accelerated Gradient momentum 📤 💻 Yuri Nesterov; Unknown
- KOALA: A Kalman Optimization Algorithm with Loss Adaptivity 📤 💻 Aram Davtyan, Sepehr Sameni, Llukman Cerkezi, Givi Meishvilli, Adam Bielski, Paolo Favaro; 2021
- Adam: A Method for Stochastic Optimization 📤 💻 Diederik P. Kingma, Jimmy Ba; 2014 (see the sketch after this list)
- Shampoo: Preconditioned Stochastic Tensor Optimization 📤 💻 Vineet Gupta, Tomer Koren, Yoram Singer; 2018
- Gradient Centralization: A New Optimization Technique for Deep Neural Networks 📤 💻 Hongwei Yong, Jianqiang Huang, Xiansheng Hua, Lei Zhang; 2020 (see the sketch after this list)
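For orientation, here is a minimal NumPy sketch of the Adam update rule from the Kingma & Ba entry above. It is a from-scratch illustration, not the reference implementation; the hyperparameter values (`lr`, `beta1`, `beta2`, `eps`) are the commonly used defaults and the toy quadratic in the usage lines is invented for demonstration.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: w are the parameters, g the gradient at w,
    m and v the running first/second moment estimates, t the 1-based step."""
    m = beta1 * m + (1 - beta1) * g            # update biased first moment
    v = beta2 * v + (1 - beta2) * g ** 2       # update biased second moment
    m_hat = m / (1 - beta1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)               # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize a quadratic with minimum at [1, -2, 0.5] (illustrative only).
w = np.zeros(3); m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 201):
    g = 2 * (w - np.array([1.0, -2.0, 0.5]))  # gradient of the toy loss
    w, m, v = adam_step(w, g, m, v, t)
```

Gradient Centralization (Yong et al., 2020, listed above) can be sketched just as compactly: before the optimizer step, subtract from each weight gradient its mean over every dimension except the output one. The helper name below is ours, not the authors'.

```python
import numpy as np

def centralize_gradient(g):
    """Subtract the mean over all non-output dimensions of a weight gradient,
    assuming the output dimension comes first, as in the paper's formulation
    for fully connected and convolutional layers."""
    if g.ndim > 1:
        g = g - g.mean(axis=tuple(range(1, g.ndim)), keepdims=True)
    return g

# Example: centralize the gradient of a 4x3 weight matrix before an SGD/Adam step.
g = np.random.randn(4, 3)
g_c = centralize_gradient(g)
assert np.allclose(g_c.mean(axis=1), 0.0)
```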
- On Empirical Comparisons of Optimizers for Deep Learning 📤 Dami Choi, Christopher J. Shallue, Zachary Nado, Jaehoon Lee, Chris J. Maddison, George E. Dahl; 2019
- Adam Can Converge Without Any Modification on Update Rules 📤 Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, Zhi-Quan Luo; 2022
- Gradient Descent: The Ultimate Optimizer 📤 💻 Kartik Chandra, Audrey Xie, Jonathan Ragan-Kelley, Erik Meijer; 2019 (see the sketch below)
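The last entry lends itself to a small worked sketch. "Gradient Descent: The Ultimate Optimizer" differentiates through the optimizer's own update rule so that hyperparameters such as the learning rate are themselves trained by gradient descent. The paper automates this with automatic differentiation and stacks optimizers on top of optimizers; the snippet below only hand-derives the hypergradient for the learning rate of plain SGD (for one step, w_t = w_{t-1} - lr * g_{t-1}, so dL(w_t)/d lr = -g_t · g_{t-1}), and all names and constants in it are illustrative assumptions rather than the paper's API.

```python
import numpy as np

def sgd_with_learned_lr(grad_fn, w, lr=0.01, hyper_lr=1e-4, steps=200):
    """Plain SGD in which the learning rate is itself adapted by descending
    its hypergradient: lr <- lr + hyper_lr * (g_t . g_{t-1})."""
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        lr = lr + hyper_lr * np.dot(g, g_prev)  # hypergradient step on lr
        w = w - lr * g                          # ordinary SGD step with the new lr
        g_prev = g
    return w, lr

# Toy usage on a quadratic with minimum at [1, -2, 0.5] (illustrative only).
target = np.array([1.0, -2.0, 0.5])
w_final, lr_final = sgd_with_learned_lr(lambda w: 2 * (w - target), np.zeros(3))
```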