Keras/TF implementation of AdamW, SGDW, NadamW, Warm Restarts, and Learning Rate multipliers
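A minimal sketch of the same idea using the stock Keras optimizer rather than this repository's own classes: tf.keras.optimizers.AdamW (available in recent TensorFlow releases) applies decoupled weight decay as in the AdamW paper. The model and hyperparameters below are illustrative placeholders.

```python
import tensorflow as tf

# Sketch: AdamW (decoupled weight decay) with stock Keras.
# Assumes TensorFlow >= 2.11, where tf.keras.optimizers.AdamW is available;
# the model and hyperparameters are placeholders, not this repository's API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

optimizer = tf.keras.optimizers.AdamW(
    learning_rate=1e-3,   # base step size
    weight_decay=1e-4,    # decoupled weight decay, applied directly to the weights
)

model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```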
Implements the AdamW optimizer (https://arxiv.org/abs/1711.05101), a cosine learning rate scheduler, and "Cyclical Learning Rates for Training Neural Networks" (https://arxiv.org/abs/1506.01186) for the PyTorch framework
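For comparison, a hedged sketch of the same ingredients using only built-in PyTorch classes (not this repository's code): torch.optim.AdamW for decoupled weight decay and CosineAnnealingWarmRestarts for the SGDR-style cosine schedule. The model, dummy batch, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# Sketch: AdamW + cosine annealing with warm restarts via stock PyTorch.
# Model, data, and hyperparameters are illustrative placeholders.
model = nn.Linear(32, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

# SGDR-style schedule: restart every T_0 epochs, doubling the period each cycle.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=10, T_mult=2
)

for epoch in range(30):
    for x, y in [(torch.randn(8, 32), torch.randint(0, 10, (8,)))]:  # dummy batch
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # advance the cosine/warm-restart schedule once per epoch
```

For the cyclical learning rate policy from the second paper, torch.optim.lr_scheduler.CyclicLR provides a built-in equivalent.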
Quasi-Hyperbolic Rectified DEMON Adam/AMSGrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging, and decoupled weight decay
PyTorch implementation of the Lookahead optimizer (https://arxiv.org/pdf/1907.08610.pdf) and RAdam (https://arxiv.org/pdf/1908.03265.pdf)
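A minimal sketch of the Lookahead idea, not this repository's implementation: keep a copy of "slow" weights and, every k fast-optimizer steps, pull them toward the current "fast" weights and sync back. The wrapper class, hyperparameters, and usage below are illustrative assumptions; RAdam itself is taken from torch.optim, where it has been built in since PyTorch 1.10.

```python
import torch

class Lookahead:
    """Sketch of the Lookahead wrapper (https://arxiv.org/abs/1907.08610).
    Illustrative only; hyperparameters and class shape are assumptions."""

    def __init__(self, base_optimizer, k=5, alpha=0.5):
        self.base = base_optimizer          # any torch.optim optimizer ("fast" weights)
        self.k = k                          # synchronize every k fast steps
        self.alpha = alpha                  # slow-weight interpolation factor
        self.step_count = 0
        # Keep a detached copy of the parameters as the "slow" weights.
        self.slow_weights = [
            [p.detach().clone() for p in group["params"]]
            for group in self.base.param_groups
        ]

    def zero_grad(self, set_to_none=True):
        self.base.zero_grad(set_to_none=set_to_none)

    @torch.no_grad()
    def step(self):
        self.base.step()                    # one fast-optimizer update
        self.step_count += 1
        if self.step_count % self.k == 0:   # time to synchronize
            for group, slow in zip(self.base.param_groups, self.slow_weights):
                for p, q in zip(group["params"], slow):
                    q.add_(p.detach() - q, alpha=self.alpha)  # slow += alpha * (fast - slow)
                    p.copy_(q)                                # fast <- slow

# Usage sketch: wrap RAdam with Lookahead.
model = torch.nn.Linear(32, 10)
optimizer = Lookahead(torch.optim.RAdam(model.parameters(), lr=1e-3), k=5, alpha=0.5)
```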
Nadir: Cutting-edge PyTorch optimizers for simplicity & composability! 🔥🚀💻
Literature survey of convex optimizers and optimization methods for deep learning; made especially for optimization researchers with ❤️
Kaggle's plant disease image classification competition. Fine-tuning pre-trained CNN models and experimenting with loss functions and optimizers to achieve better results.
A repo containing the source code for my blog post "Deep Learning Optimizers: A Comprehensive Guide for Beginners (2024)": https://medium.com/@shrirangmahajan123/optimizers-a-simple-beginners-guide-8ab6942880dd
Super-Convergence on CIFAR10
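Super-convergence is usually demonstrated with the 1cycle learning rate policy. As a hedged sketch (not this repository's code), PyTorch's built-in OneCycleLR can reproduce that schedule; the model, dummy data, and values below are placeholders.

```python
import torch
import torch.nn as nn

# Sketch of the 1cycle policy behind super-convergence using PyTorch's
# built-in OneCycleLR; model, data, and hyperparameters are placeholders.
model = nn.Linear(32, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

steps_per_epoch, epochs = 100, 10
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1.0, steps_per_epoch=steps_per_epoch, epochs=epochs
)

for _ in range(epochs * steps_per_epoch):
    optimizer.zero_grad()
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))   # dummy batch
    nn.functional.cross_entropy(model(x), y).backward()
    optimizer.step()
    scheduler.step()   # OneCycleLR is stepped after every batch
```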