PSGD (Preconditioned SGD) is a general-purpose (mathematical and stochastic optimization, convex and nonconvex) 2nd order optimizer. It reformulates a wide range of preconditioner estimation and Hessian fitting problems as a family of strongly convex optimization problems on Lie groups.
Notations: $H$ is the Hessian, $v$ is a random probing vector, $h = Hv$ is the Hessian-vector product, $g$ is the (stochastic) gradient, and $P = Q^TQ$ is the preconditioner.

Table I: Preconditioner fitting criteria

Criterion | Solution | Notes |
---|---|---|
$h^TPh + v^TP^{-1}v$ | $Phh^TP = vv^T$ | Reduces to the secant equation $Ph = v$ used by quasi-Newton methods, e.g., BFGS. |
$E_{v\sim\mathcal{N}(0,I)}\left[h^TPh + v^TP^{-1}v\right]$ | $P = \left(E[hh^T]\right)^{-0.5}$ | Reduces to Newton's method when $H \succ 0$, since then $E[hh^T] = H^2$ and $P = H^{-1}$. |
$E_{v\sim\mathcal{N}(0,I)}\left[g^TPg + v^TP^{-1}v\right]$ | $P = \left(E[gg^T]\right)^{-0.5}$ | Reduces to the AdaGrad family, e.g., Adam(W), RMSProp, Shampoo, with different structural constraints on $P$. |

Note that all these preconditioner fitting problems become strongly convex optimization problems on proper Lie groups.
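The closed-form solution $P=\left(E[hh^T]\right)^{-0.5}$ reduces to Newton's method for SPD Hessians, and this is easy to check numerically. Below is a minimal sketch (plain Python, standard library only; the diagonal Hessian is a made-up toy example): with $H = {\rm diag}(4, 9)$ and probes $v\sim\mathcal{N}(0, I)$, the whitening preconditioner recovers $H^{-1}$ elementwise.

```python
import random

# Numeric check of the Newton reduction: for a diagonal SPD Hessian
# H = diag(4, 9) and probes v ~ N(0, I), we have E[hh^T] = H^2 with
# h = Hv, so P = (E[hh^T])^-0.5 equals H^-1 elementwise.
random.seed(0)
H = [4.0, 9.0]                 # diagonal entries of the toy Hessian
n = 100_000
E_hh = [0.0, 0.0]              # running estimates of E[h_i^2]
for _ in range(n):
    v = [random.gauss(0.0, 1.0) for _ in H]
    h = [Hi * vi for Hi, vi in zip(H, v)]   # Hessian-vector product
    E_hh = [e + hi * hi / n for e, hi in zip(E_hh, h)]
P = [e ** -0.5 for e in E_hh]  # closed-form whitening preconditioner
print(P)                       # ≈ [1/4, 1/9], i.e., diag(H)^-1
```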
Table II: Implemented Lie group preconditioners with storage and computation numbers for $\theta={\rm vec}(\Theta)$ with $\Theta\in\mathbb{R}^{m\times m}$

Lie Group | Update of $Q$ | Storages | Computations | Notes |
---|---|---|---|---|
${\rm GL}(n, \mathbb{R})$ | | | | See class Newton; set keep_invQ=True to calculate and keep $Q^{-1}$. |
Triangular matrices | | | | See class Newton; set keep_invQ=False to keep $Q$ triangular. |
Diagonal matrices | | | | See either class LRA with rank_of_approximation=0 or class XMat for implementations. |
Kronecker product, $Q = Q_2 \otimes Q_1$ | | | | See class Affine for implementations; deprecated and upgraded to class Kron. |
Kronecker products (tensors of any shape) | | | | See class Kron for implementations (a superset of Affine). |
Low-rank approximation | | | | See class LRA for implementations; typically the rank of approximation is much smaller than $n$. |
Scaling or normalization | Similar to Kron, but certain factors are reduced to scalars | | | With class Kron, a small preconditioner_max_size or preconditioner_max_skew triggers these reduced forms for large dimensions. |
For the AdaGrad-like gradient whitening preconditioner, we simply replace pair $(v, h)$ with $(v, g)$, where $g$ is the gradient. Fitting this gradient whitening preconditioner is the default behavior.
This script generates the following plot showing the typical behaviors of different preconditioner fitting methods.

- With a static and noise-free Hessian-vector product model, both BFGS and PSGD converge linearly to the optimal preconditioner, while the closed-form solution $P=\left(E[hh^T]\right)^{-0.5}$ converges only sublinearly with rate $\mathcal{O}(1/t)$.
- With a static but additively noisy Hessian-vector model $h=Hv+\epsilon$, BFGS diverges easily. With a constant step size $\mu$, the steady-state fitting errors of PSGD are proportional to $\mu$.
- With a time-varying Hessian $H_{t+1}=H_t + uu^T$, $u\sim\mathcal{U}(0,1)$, PSGD locks onto good preconditioner estimates more quickly than BFGS and without a divergence stage. The closed-form solution $P=\left(E[hh^T]\right)^{-0.5}$ is poor at tracking due to its sublinear rate of convergence.
Optimizers with the criteria in Table I and preconditioner forms in Table II are wrapped into classes XMat, LRA (or UVd), Newton, Affine and Kron for easy use. The Affine family is obsolete now as Kron is a superset of it. Among them, LRA and Kron are the two most useful ones.
Three main differences from torch.optim.SGD:
- The loss to be minimized is passed as a closure to the optimizer to support more dynamic behaviors, notably Hessian-vector product approximation with the finite-difference method when 2nd-order derivatives are unavailable. The closure should return a loss or an iterator whose first element is the loss.
- Momentum here is the moving average of the gradient, so its setting is decoupled from the learning rate, which is always normalized in PSGD.
- Like any other regularization, (coupled) weight decay should be realized explicitly by adding $L2$ regularization to the loss. Similarly, decoupled weight decay is not included in the PSGD implementations. We recommend randomizing the regularization term, e.g., replacing the $L2$ term for a parameter $p$, say $0.5 \lambda \cdot {\rm sum}(p^2)$, with ${\rm rand}() \cdot \lambda \cdot {\rm sum}(p^2)$.
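As a concrete illustration of the closure convention and the randomized regularization trick, here is a minimal sketch (plain Python; `lam`, `params`, and the data-fitting term are hypothetical placeholders). Scaling the $L2$ penalty by a fresh ${\rm rand}()$ on every call still gives $0.5\lambda\cdot{\rm sum}(p^2)$ in expectation, since $E[{\rm rand}()] = 0.5$.

```python
import random

lam = 0.01            # hypothetical regularization strength
params = [1.5, -2.0]  # hypothetical model parameters

def closure():
    # placeholder data-fitting loss standing in for the real model loss
    data_loss = sum((p - 1.0) ** 2 for p in params)
    # randomized (coupled) L2 weight decay, realized explicitly in the loss;
    # equals 0.5*lam*sum(p^2) in expectation since E[rand()] = 0.5
    reg = random.random() * lam * sum(p * p for p in params)
    return data_loss + reg
```

Such a closure is then handed to the optimizer's step, matching the convention above that it returns the loss (or an iterator whose first element is the loss).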
A few more details: when 2nd-order derivatives are available, the Hessian-vector products are calculated exactly as a vector-Jacobian product (vjp), i.e., $Hv = \partial\left(v^T\,\partial f/\partial\theta\right)/\partial\theta$.
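When 2nd-order derivatives are unavailable, the Hessian-vector product is approximated by a finite difference of gradients, $Hv \approx \left(\nabla f(\theta + \varepsilon v) - \nabla f(\theta)\right)/\varepsilon$. A minimal sketch of this approximation (plain Python on a made-up toy quadratic; not the repo's code):

```python
# f(theta) = 0.5 * theta^T H theta with toy diagonal H = diag(4, 9),
# so grad(theta) = H theta and the exact Hvp is H v.
H = [4.0, 9.0]

def grad(theta):
    return [Hi * ti for Hi, ti in zip(H, theta)]

def hvp_fd(theta, v, eps=1e-4):
    # finite-difference Hvp: (grad(theta + eps*v) - grad(theta)) / eps
    g0 = grad(theta)
    g1 = grad([ti + eps * vi for ti, vi in zip(theta, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

print(hvp_fd([0.3, -0.7], [1.0, 2.0]))   # ≈ [4.0, 18.0] = H v
```

For a quadratic the finite difference is exact up to floating-point rounding; in general its accuracy depends on the choice of $\varepsilon$.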
Rosenbrock function: see how simple it is to apply PSGD to convex and stochastic optimization problems. The three most important settings are preconditioner_init_scale (unnormalized), lr_params (normalized) and lr_preconditioner (normalized).
LeNet5 CNN: PSGD on convolutional neural network training with the classic LeNet5 for MNIST digit recognition. Also see this for another implementation and a comparison with Shampoo (PSGD generalizes significantly better).
Vision transformer: CIFAR image recognition with a tiny transformer. PSGD converges significantly faster and generalizes better than Adam(W). Check here for sample results.
Generative pre-trained transformer: A tiny GPT model for the WikiText-103 Dataset. PSGD also converges faster and generalizes better than Adam(W). Check here for sample results.
Delayed XOR with RNN: demonstration of PSGD on gated recurrent neural network (RNN) learning with the delayed XOR problem proposed in the LSTM paper. Most optimizers can't crack this fundamental problem with either LSTM or the vanilla RNN, while PSGD can with either one (also see this and this with simple RNNs).
Logistic regression: a large-scale logistic regression problem. PSGD outperforms L-BFGS, "the algorithm of choice" for this type of problem.
Tensor rank decomposition: demonstrates the usage of all preconditioners on the tensor rank decomposition problem, a classic mathematical optimization problem on which PSGD again outperforms BFGS.
PSGD vs approximated closed-form solutions: this example shows that most closed-form solutions, e.g., KFAC, Shampoo, CASPR, are only approximate.
Preconditioner fitting in Lie groups: see how multiplicative updates work in Lie groups for different types of preconditioners.
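To make the idea of multiplicative Lie group updates concrete, here is an illustrative sketch (plain Python, not the library's actual implementation) on the simplest group, positive diagonal matrices $Q = {\rm diag}(q)$ with $P = Q^2$. It descends the criterion $\sum_i (q_i^2 h_i^2 + v_i^2/q_i^2)$, whose minimizer satisfies $Ph = v$ elementwise, and the multiplicative form keeps every $q_i$ positive, i.e., the iterate never leaves the group.

```python
def update_q(q, v, h, mu=0.5):
    # one multiplicative (Lie group) descent step on the criterion
    #   c(q) = sum_i (q_i^2 * h_i^2 + v_i^2 / q_i^2)
    new_q = []
    for qi, vi, hi in zip(q, v, h):
        a, b = (qi * hi) ** 2, (vi / qi) ** 2
        # normalized group gradient; |step| < mu keeps q_i > 0 for mu < 1
        new_q.append(qi * (1.0 - mu * (a - b) / (a + b)))
    return new_q

q, v, h = [1.0, 1.0], [1.0, 1.0], [4.0, 0.5]
for _ in range(100):
    q = update_q(q, v, h)
print([qi * qi for qi in q])   # ≈ [0.25, 2.0], i.e., P = Q^2 solves Ph = v
```

The fixed point gives $p_i = q_i^2 = |v_i/h_i|$, the diagonal solution of $Phh^TP = vv^T$ from Table I.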
Preconditioner estimation efficiency and numerical stability: a playground for comparing PSGD with BFGS and the closed-form solution $P=\left(E[hh^T]\right)^{-0.5}$.
How PSGD generalizes so well: we know SGD generalizes! This toy example illustrates it from an information-theoretic view. Starting from the same initial guesses, PSGD tends to find minima with smaller train cross entropy and flatter Hessians than Adam, and thus shorter total description lengths for the train data and model parameters. See sample results. Similarly, this example shows that PSGD also generalizes better than Shampoo.
Wrapping as affine models: this demo shows how to wrap torch.nn.functional.conv2d as an affine Conv2d class by putting the weights and bias together; another one wraps torch._VF.rnn_tanh as an affine RNN class. This is tedious and possibly unnecessary, as the Kron preconditioners natively support tensors of any shape. Still, reformulating a model as a list of affine transforms makes the best use of the Kron preconditioners and typically improves performance.
- Preconditioned stochastic gradient descent, arXiv:1512.04202, 2015. (General ideas of PSGD, preconditioner fitting criteria and Kronecker product preconditioners.)
- Preconditioner on matrix Lie group for SGD, arXiv:1809.10232, 2018. (Focus on affine Lie group preconditioners. Note that feature normalization or whitening (per batch or layer) is an affine transform.)
- Black box Lie group preconditioners for SGD, arXiv:2211.04422, 2022. (Mainly about the LRA preconditioner. I have also prepared these supplementary materials for detailed math derivations.)
- Stochastic Hessian fittings with Lie groups, arXiv:2402.11858, 2024. (Convergence properties of PSGD; also a good summary of PSGD. The Hessian fitting problem is convex in the Euclidean space and on the manifold of SPD matrices, but strongly convex only in the quotient set ${\rm GL}(n, \mathbb{R})/R_{\rm polar}$, or on the group ${\rm GL}(n, \mathbb{R})$ if we do not care about $Q$'s rotation ambiguity.)
- Curvature-informed SGD via general purpose Lie-group preconditioners, arXiv:2402.04553, 2024. (Plenty of benchmark results and analyses for PSGD vs. other optimizers.)
- There are a few more efficient and specialized PSGD implementations: Evan's JAX and Torch versions, and Lucas' Heavyball. Also my outdated and unmaintained TensorFlow code: TF 1.x and TF 2.x. I don't optimize my own implementations; they are kept in a plain style aligned with the math equations.