OptiML is a sklearn-compatible implementation of Support Vector Machines and Deep Neural Networks, both equipped with some of the most successful state-of-the-art features.
This work was motivated by the possibility of solving the optimization problems arising from the mathematical formulation of these models with a wide range of optimization algorithms, studied and developed for the Numerical Methods and Optimization course at the Department of Computer Science of the University of Pisa, under the supervision of prof. Antonio Frangioni.
- Numerical Optimization
  - Unconstrained Optimization
    - Line Search Methods
      - 1st Order Methods
        - Steepest Gradient Descent
        - Conjugate Gradient
          - Fletcher–Reeves formula
          - Polak–Ribière formula
          - Hestenes–Stiefel formula
          - Dai–Yuan formula
      - 2nd Order Methods
        - Newton
        - Quasi-Newton
          - BFGS
          - L-BFGS
    - Stochastic Methods
      - Stochastic Gradient Descent
        - Momentum
          - Polyak
          - Nesterov
      - Adam
        - Momentum
          - Polyak
          - Nesterov
      - AMSGrad
        - Momentum
          - Polyak
          - Nesterov
      - AdaMax
        - Momentum
          - Polyak
          - Nesterov
      - AdaGrad
      - AdaDelta
      - RMSProp
        - Momentum
          - Polyak
          - Nesterov
      - Schedules
        - Step size
          - Decaying
          - Linear Annealing
          - Repeater
        - Momentum
          - Sutskever Blend
    - Proximal Bundle with cvxpy interface to ecos, osqp, scs, etc.
  - Constrained Quadratic Optimization
    - Box-Constrained Quadratic Methods
      - Projected Gradient
      - Frank–Wolfe or Conditional Gradient
      - Active Set
      - Interior Point
    - Lagrangian Dual
    - Augmented Lagrangian Dual
- Machine Learning
  - Support Vector Machines
    - Formulations
      - Primal
      - Wolfe Dual
      - Lagrangian Dual
    - Support Vector Classifier
    - Support Vector Regression
    - Kernels
    - Optimizers (ad hoc)
  - Neural Networks
    - Neural Network Classifier
    - Neural Network Regressor
    - Losses
      - Mean Absolute Error (L1 Loss)
      - Mean Squared Error (L2 Loss)
      - Binary Cross Entropy
      - Categorical Cross Entropy
      - Sparse Categorical Cross Entropy
    - Regularizers
      - L1 or Lasso
      - L2 or Ridge or Tikhonov
    - Activations
      - Linear
      - Sigmoid
      - Tanh
      - ReLU
      - SoftMax
    - Layers
      - Fully Connected
    - Initializers
      - Xavier or Glorot (normal and uniform)
      - He (normal and uniform)
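As a reference for the Support Vector Machines formulations listed above, the standard soft-margin classifier primal and its Wolfe dual read:

```math
\begin{aligned}
&\text{Primal:} && \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^n \xi_i
\quad \text{s.t.}\quad y_i(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0 \\
&\text{Wolfe dual:} && \max_{\alpha}\ \sum_{i=1}^n \alpha_i - \tfrac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_i \alpha_j y_i y_j\, K(x_i, x_j)
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\ \ \sum_{i=1}^n \alpha_i y_i = 0
\end{aligned}
```

The Wolfe dual is a quadratic program over the box $[0, C]^n$ with a single linear equality constraint, which is exactly the class of problems the constrained quadratic optimizers listed above are designed to solve.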
Install with pip:

```
pip install optiml
```
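To give a feel for the sklearn-compatible interface, here is a minimal usage sketch; the import path and constructor defaults below are assumptions for illustration, not a verified API reference, and only the fit/predict/score estimator interface follows from the sklearn compatibility claimed above.

```python
# Minimal usage sketch, not a verified API reference: the module path
# `optiml.ml.svm` and the SVC defaults are assumptions; only the
# sklearn-style fit/score interface follows from sklearn compatibility.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from optiml.ml.svm import SVC  # assumed import path

# Small synthetic binary classification task.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC()                       # defaults left to the library
clf.fit(X_train, y_train)         # sklearn-compatible training
print(clf.score(X_test, y_test))  # mean accuracy on held-out data
```

Being sklearn-compatible, the estimators should also compose with the usual sklearn tooling, e.g. Pipeline and GridSearchCV.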
This software is released under the MIT License. See the LICENSE file for details.