Linear Solvers and Eigenvalues
The function inv(A) returns the inverse of the square matrix A. Matrix inversion is an O(n^3) operation. If given a second argument, inv places the result in that second matrix.
> val a = rand(4,4)
   0.75591   0.51697   0.33410   0.25645
   0.91592   0.19109   0.64584   0.77816
   0.82078   0.93623   0.25824   0.88392
   0.58401  0.066242   0.58547   0.12139
> val b = inv(a)
    4.5832    2.1498   -2.7010   -3.7955
   -1.2931   -2.3139    2.0868    2.3700
   -4.0738   -2.1043    2.3012    5.3390
   -1.6960    1.0694   0.75683  -0.54570
> a * b
    1.0000  2.3842e-07  -3.1292e-07  3.1292e-07
2.3842e-07      1.0000  -4.7684e-07  4.1723e-07
3.5763e-07  4.7684e-07       1.0000  2.0862e-07
1.7881e-07  2.8312e-07  -3.0547e-07      1.0000
> val c = zeros(4,4)
> inv(a, c)
> c
    4.5832    2.1498   -2.7010   -3.7955
   -1.2931   -2.3139    2.0868    2.3700
   -4.0738   -2.1043    2.3012    5.3390
   -1.6960    1.0694   0.75683  -0.54570
There are two eigenvalue functions for real symmetric matrices: seig and feig. Both can return two values: a column vector of eigenvalues, and a square matrix whose columns are the corresponding eigenvectors. The seig function takes two arguments: the matrix A, and a boolean flag which determines whether the eigenvectors are returned. seig uses reduction to tridiagonal form followed by calculation of the eigenvalues. The second function, feig, uses a faster, divide-and-conquer algorithm for eigenvalues and eigenvectors. feig always returns the eigenvectors and does not have a flag to turn this off. Both functions work on both single precision (FMat) and double precision (DMat) arguments.
scala> aa
aa: BIDMat.DMat =
   2.9260   1.6648   2.5727   2.2300
   1.6648   1.3799   1.6853   1.4514
   2.5727   1.6853   3.1019   2.3740
   2.2300   1.4514   2.3740   2.2825

scala> val (v, m) = seig(aa, true)
v: BIDMat.DMat =
   0.27301
   0.31132
   0.45670
   8.6492

m: BIDMat.DMat =
  -0.13731   0.40833  -0.71388   0.55207
  -0.12716  -0.90024  -0.21440   0.35697
  -0.50598   0.14970   0.62654   0.57360
   0.84200  0.020595   0.22771   0.48863

scala> aa * m /@ m
res31: BIDMat.DMat =
   0.27301   0.31132   0.45670    8.6492
   0.27301   0.31132   0.45670    8.6492
   0.27301   0.31132   0.45670    8.6492
   0.27301   0.31132   0.45670    8.6492
Both algorithms take O(n^3) steps, and feig is about twice as fast as seig when retrieving the eigenvectors. feig should be used only with positive definite matrices. seig requires about twice as many flops (3 n^3 vs 4/3 n^3) as matrix inversion, but the calculation is less uniform and the MFlop/s achieved is lower.
Note that these functions always return two matrices. If the second argument to seig is false (don't return eigenvectors), the second returned result should be discarded.
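As a sketch, feig can be applied to the same matrix aa used above (which is positive definite), assuming it takes just the matrix argument since it has no eigenvector flag; the results should match those of seig, with the eigenvectors determined up to sign:

> val (v2, m2) = feig(aa)   // v2: eigenvalues, m2: eigenvectors
> aa * m2 /@ m2             // each column should repeat its eigenvalue, as in the seig example above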
There is a third eigenvalue routine, geig, which can deal with non-symmetric matrices and returns complex results.
> val a = rand(3,3)
   0.12405   0.44249   0.56792
   0.48652   0.16163   0.97430
   0.87441   0.56278   0.50188
> val (x,y) = geig(a)
> x
   1.5904-3.7253e-08i
  -0.40140+0.13727i
  -0.40140-0.13727i
> y
  -0.43595-0.057657i    -0.35285+0.19000i    -0.40351-0.28035i
  -0.59819-0.079114i   -0.040044-0.79776i    -0.15932+0.96626i
  -0.65949-0.087222i     0.31174+0.36049i     0.42970-0.39594i
The function chol computes a Cholesky factorization A = L*L^T of a symmetric positive definite matrix. The factor L is returned as a lower-triangular matrix.
scala> aa
res33: BIDMat.DMat =
   2.9260   1.6648   2.5727   2.2300
   1.6648   1.3799   1.6853   1.4514
   2.5727   1.6853   3.1019   2.3740
   2.2300   1.4514   2.3740   2.2825

scala> val b = chol(aa)
b: BIDMat.DMat =
   1.7105         0         0         0
  0.97325   0.65778         0         0
   1.5040   0.33675   0.85233         0
   1.3037   0.27752   0.37518   0.60420

scala> b * b.t
res34: BIDMat.DMat =
   2.9260   1.6648   2.5727   2.2300
   1.6648   1.3799   1.6853   1.4514
   2.5727   1.6853   3.1019   2.3740
   2.2300   1.4514   2.3740   2.2825
Cholesky factorization is an O(n^3) calculation, and is about twice as fast as matrix inversion.
The trisolve function solves triangular systems of the form T x = y for x. trisolve accepts a string argument that specifies the type of calculation. The mode string has three letters:
- Char1 is "U" or "L" for upper or lower triangular matrices.
- Char2 is "N", "T" or "C" for T not-transposed, transposed, or conjugate-transposed respectively.
- Char3 is "N" or "U" according to whether the leading diagonal is non-unit ("N") or unit ("U").
scala> b
res35: BIDMat.DMat =
   1.7105         0         0         0
  0.97325   0.65778         0         0
   1.5040   0.33675   0.85233         0
   1.3037   0.27752   0.37518   0.60420

scala> val y = rand(4,1)
y: BIDMat.DMat =
   0.047913
   0.65338
   0.58416
   0.26086

scala> val x = trisolve(b, y, "LNN")
x: BIDMat.DMat =
   0.028011
   0.95187
   0.25987
  -0.22728

scala> b*x
res36: BIDMat.DMat =
   0.047913
   0.65338
   0.58416
   0.26086
Triangular matrix solving is an O(n^2) operation for vectors x and y, or O(n^2 k) for n x k matrices x and y.
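As a sketch of how trisolve combines with the Cholesky factor b from above, a symmetric positive definite system aa * x2 = y can be solved with a forward and a backward triangular solve; the "LTN" mode here relies on the transposed interpretation of Char2 described in the list above:

> val z = trisolve(b, y, "LNN")    // forward solve:   b * z = y
> val x2 = trisolve(b, z, "LTN")   // backward solve:  b.t * x2 = z
> aa * x2                          // should reproduce y, since aa = b * b.t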
There are thick and thin QR decomposition routines. QRdecomp operates on FMat, DMat or CMat arguments and returns a standard ("thick") QR decomposition of matching type. That is, given an m x n input a, it returns an m x m orthonormal Q and an m x n upper-triangular R such that a = Q*R:
> val (q, r) = QRdecomp(a)
The QRdecomp routine will also store results into the second and third arguments if given:
> QRdecomp(a, q, r)
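A minimal sketch of checking the thick decomposition on a random input (the values here are illustrative, not from a recorded session):

> val a = rand(4,3)
> val (q, r) = QRdecomp(a)   // q is 4x4 orthonormal, r is 4x3 upper triangular
> q * r                      // should reproduce a
> q.t * q                    // should be close to the 4x4 identity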
Three related routines produce a "thin" QR-decomposition. QRdecompt also operates on FMat, DMat, or CMat inputs and returns a thin QR-decomposition of matching type. That is, given an m x n input A with m >= n, it returns an m x n orthonormal Q and an n x n upper-triangular R such that A = Q*R. These routines can also be called with second and third arguments and will store the factorization into those arguments.
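A similar hedged sketch for the thin decomposition, where q has the same shape as the input and only r is square:

> val a = rand(6,3)
> val (q, r) = QRdecompt(a)   // q is 6x3 with orthonormal columns, r is 3x3 upper triangular
> q * r                       // should reproduce a
> q.t * q                     // should be close to the 3x3 identity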
There is an experimental block-GMRES implementation. GMRES is an iterative solver for non-symmetric matrices. Block GMRES is a communication-minimizing algorithm that computes powers A^p*V with n x s matrices V rather than n x 1 vectors. The arguments are:
def blgmres(A:FMat, b:FMat, nrst:Int, m:Int, s:Int, tol:Float)

where A is the input matrix, b is the RHS of the equation A X = b, nrst is the number of restarts, m is the number of powers to keep, s is the dimension of the blocks A^p*V, and tol is the tolerance (stopping criterion).
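A hedged usage sketch follows; the return value of blgmres is not documented above, so reading it as the approximate solution X is an assumption, and the diagonal shift via mkdiag is only there to make the random system well conditioned:

> val a = rand(100,100) + mkdiag(ones(100,1)) * 10f   // diagonally dominant, hence well conditioned
> val b = rand(100,1)
> val x = blgmres(a, b, 10, 20, 4, 1e-6f)             // assumed to return the approximate solution of a * x = b
> a * x - b                                           // residual should be small if that assumption holds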