2 Matrix Computation

Matrices are ubiquitous in statistics. But most statistical models and methods are only useful once turned into computer code, and this often means computing with matrices. It is easy to ruin a beautiful piece of theory by poor use of numerical linear algebra. This section covers some basic numerical linear algebra, including the important topics of efficiency and stability.

2.1 Efficiency in matrix computation

Recall the simple example

n <- 3000
A <- matrix(runif(n*n),n,n)
B <- matrix(runif(n*n),n,n)
y <- runif(n)
system.time(f0 <- A%*%B%*%y)    # case 1
##    user  system elapsed 
##   2.778   0.112   0.745
system.time(f1 <- A%*%(B%*%y))  # case 2
##    user  system elapsed 
##   0.067   0.051   0.029
What happened here? The answer is all to do with how the multiplications were ordered in the two cases, and the number of floating point operations (flop) required by the two alternative orderings.

  1. In the first case the $n \times n$ matrix $AB$ was formed first, and the resulting matrix was then used to pre-multiply the vector $y$.

  2. In the second case, the $n$-vector $By$ was formed first, and was then pre-multiplied by $A$.

Now we need to count the floating point operations. Try a couple of simple examples first:

  1. How many flop (+, -, *, /) are involved in the multiplication
     $$ \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}? $$

  2. How many operations are involved in the multiplication
     $$ \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}? $$

Generalising,

  1. In terms of $n$, what is the flop cost of forming $AB$, where $A$ and $B$ are both $n \times n$ matrices?

  2. What is the cost of forming $By$, where $y$ is an $n$-vector?

Now write down the flop cost of the two ways of forming $ABy$ (the leading order counts are sketched below):

  1. $(AB)y$

  2. $A(By)$
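For reference, here is one way the leading order counts work out (a standard count, not reproduced from the original worked answers): each element of a matrix product needs one multiplication per pair of entries and one fewer addition, so
$$
(AB)y:\; \underbrace{n^2(2n-1)}_{AB} + \underbrace{n(2n-1)}_{(AB)y} \approx 2n^3 \text{ flops}, \qquad
A(By):\; \underbrace{n(2n-1)}_{By} + \underbrace{n(2n-1)}_{A(By)} \approx 4n^2 \text{ flops},
$$
which explains the large timing difference seen above.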

Another simple matrix operation is the evaluation of the trace of a matrix product. Again, different ways of computing the same quantity lead to radically different computation times. Consider the computation of $\mathrm{tr}(AB)$ where $A$ is $n \times m$ and $B$ is $m \times n$.

library(rbenchmark)  ## provides the benchmark() function used below
n <- 5000; m <- 1000
A <- matrix(runif(n*m), n, m)
B <- matrix(runif(n*m), m, n)
benchmark(
    "sdAB" = { sum(diag(A %*% B)) },
    "sdBA" = { sum(diag(B %*% A)) },
    "sABt" = { sum(A*t(B)) },
    replications = 10, order=NULL,
    columns = c("test", "replications", "elapsed",
                "relative", "user.self", "sys.self"))
##   test replications elapsed relative user.self sys.self
## 1 sdAB           10   7.361    8.403    26.684    2.113
## 2 sdBA           10   2.031    2.318     7.159    0.584
## 3 sABt           10   0.876    1.000     0.709    0.359

  1. The first method forms $AB$, at a flop cost of $n^2(2m-1)$, and then extracts the leading diagonal and sums it.

  2. The second method uses the fact that $\mathrm{tr}(AB) = \mathrm{tr}(BA)$. It forms $BA$, at a flop cost of $m^2(2n-1)$, and then extracts the leading diagonal of the result and sums it.

  3. The third method makes direct use of the fact that $\mathrm{tr}(AB) = \sum_i \sum_j A_{ij}B_{ji}$, at a cost of $2nm$. It is fastest (in principle) as no effort is wasted in calculating off-diagonal elements of the target matrix (but here this does come at the cost of computing a transpose).

Notice that method 1 is not just wasteful of flops, it also requires storage of an $n \times n$ matrix, which is much larger than either $A$ or $B$.

2.1.1 Flop counting in general

Clearly it is important to count flops, but it is also tedious to do a complete count for complex algorithms. In many cases an exact count is not necessary. Instead we simply need to know how the computation scales with problem size, as it becomes large. For this reason the leading order operation count is all that is usually reported. For example the naive computation of $\mathrm{tr}(AB)$, above, required at least $n^2(2m-1)$ operations. As $n$ and $m$ tend to infinity this is dominated by a term proportional to $n^2m$, so we write that the computational cost is $O(n^2m)$ (‘order $n^2m$’).

Note also that flops are not always defined in the same way. For some authors one flop is one floating point addition, subtraction, division or multiplication, but for others one flop is one multiplication/division with one addition/subtraction.

2.1.2 There is more to efficiency than flop counting

Flop count is clearly important, but it is not the only aspect of efficiency. With current computing technology (driven by the computer games market) a floating point operation often takes the same amount of time as an operation on an integer or a memory address. Hence how matrices are stored and addressed in memory is often as important as the flop count in determining efficiency.

As an example consider the multiplication of the elements of two matrices as part of a matrix multiplication performed in C. The code

C[i][j] += A[i][k]*B[k][j]

requires at least 6 address/integer calculations in order to figure out where the relevant elements of A, B and C are located, before the 2 floating point operations can be performed. To avoid this, efficient code stores pointers which keep a record of the memory locations of the ‘current’ relevant elements of A, B and C. The multiplication loops are then structured so as to minimise the work needed to update such pointers, for each new element multiplication.

To further complicate matters, computer processors often contain features that can be exploited to increase speed. For example, fetching data from the processor memory cache is much quicker than fetching data from general memory (not to be confused with the yet slower disk). An algorithm structured to break a problem into chunks that fit entirely into cache can therefore be very quick. Similarly, some processors can execute multiple floating point operations simultaneously, but careful coding is required to use such a feature.
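Even from R, memory layout matters: R stores matrices column-major, so traversing a column touches contiguous memory while traversing a row does not. The following rough sketch (timings will vary with machine, cache size and BLAS) illustrates the effect.

M <- matrix(runif(25e6), 5000, 5000)
system.time(for (j in 1:5000) sum(M[, j]))  ## column access: contiguous memory
system.time(for (i in 1:5000) sum(M[i, ]))  ## row access: strided, cache-unfriendly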

To avoid having to re-write whole matrix algebra libraries to maximise speed on every new processor that comes along, standard numerical linear algebra libraries, such as LAPACK (used by R), adopt a highly structured design. Wherever possible, the high level routines in the library (e.g. for finding eigenvalues), are written so that most of the flop intensive work is performed by calling a relatively small set of routines for comparatively simple basic matrix operations (e.g. multiplying a matrix by a vector), known as the Basic Linear Algebra Subprograms (BLAS). The BLAS can then be tuned to make use of particular features of particular computers, without the library that depends on it needing to change. This provides one way of exploiting multi-core computers. Multi-threaded BLAS can gain efficiency by using multiple CPU cores, without any of the higher level code having to change.

Whatever language or library you are using for statistical computing, you should be using native BLAS and LAPACK libraries “under–the–hood” for your matrix computations. But it is important to be aware that not all BLAS and LAPACK libraries are the same, and “default” versions shipping with Linux distributions and similar operating systems are not always the best. You can very often swap out a default BLAS library and replace it with an optimised BLAS for your system without changing or recompiling any code. This can easily make a factor of 2 or more difference to the running time of codes for which matrix computation is a bottleneck, so it is well worth investigating how to do this for an easy way of speeding up your code.

On Linux, you can check which LAPACK and BLAS are being used by R using

La_library()
## [1] "/usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so"
extSoftVersion()["BLAS"]
##                                                      BLAS 
## "/usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3"
but these commands don’t work on Windows.

2.1.3 Efficiency conclusions

In conclusion:

  • Think about the flop count when computing with matrices (or computing in general).

  • If you are writing compiled code for manipulating matrices (or other arrays), then be aware of memory management efficiency as well as flops.

  • For optimal performance, try to use libraries such as LAPACK, with a fast BLAS, wherever possible.

2.2 Some terminology

Now consider some more interesting things to do with matrices. First recall the following basic terminology.

A square matrix $Q$ is an orthogonal matrix iff $Q^TQ = QQ^T = I$, i.e. iff its columns (and rows) are orthonormal.

2.3 Cholesky decomposition: a matrix square root

Positive definite matrices are the ‘positive real numbers’ of matrix algebra. They have particular computational advantages and occur frequently in statistics, since covariance matrices are usually positive definite (and always positive semi-definite). So let’s start with positive definite matrices, and their matrix square roots. To see why matrix square roots might be useful, consider the following.

Example: Generating multivariate normal random variables. There exist very quick and reliable methods for simulating i.i.d. $N(0,1)$ random deviates, but suppose that $N(\mu, V)$ random vectors are required? Clearly we can generate vectors $z$ from $N(0, I)$. If we could find a square root matrix $R$ such that $R^TR = V$, then
$$ y = R^Tz + \mu \;\sim\; N(\mu, V), $$
since the covariance matrix of $y$ is $R^TIR = R^TR = V$ and $E(y) = E(R^Tz) + \mu = \mu$.

In general the square root of a positive definite matrix is not uniquely defined, but there is a unique upper triangular square root of any positive definite matrix: its Cholesky factor. The algorithm for finding the Cholesky factor is easily derived. Consider a $4 \times 4$ example first. The defining matrix equation is $R^TR = A$, where $A$ is the positive definite matrix and $R$ is its upper triangular Cholesky factor.

If the component equations of this expression are written out and solved in the right order (the first row of $A$ from left to right, then the second row, and so on), then each contains only one unknown element of $R$.

Generalising to the $n \times n$ case, and using the convention that $\sum_{k=1}^{0} x_k \equiv 0$, we have
$$
R_{ii} = \sqrt{A_{ii} - \sum_{k=1}^{i-1} R_{ki}^2}, \qquad
R_{ij} = \frac{A_{ij} - \sum_{k=1}^{i-1} R_{ki}R_{kj}}{R_{ii}}, \quad j > i.
$$
Working through these equations in row order, from row one, and starting each row from its leading diagonal component, ensures that all r.h.s. quantities are known at each step. Cholesky decomposition requires $n^3/3$ flops and $n$ square roots. In R it is performed by the function chol, which calls routines in LAPACK or LINPACK.
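To make the recursion concrete, here is a minimal, unoptimised R sketch of the row-by-row algorithm (in practice you would simply call chol; the function name my_chol is just for this illustration).

my_chol <- function(A) {
  n <- nrow(A)
  R <- matrix(0, n, n)
  for (i in 1:n) {
    R[i, i] <- sqrt(A[i, i] - sum(R[seq_len(i - 1), i]^2))
    for (j in seq_len(n - i) + i) {  ## columns j > i of row i
      R[i, j] <- (A[i, j] - sum(R[seq_len(i - 1), i] * R[seq_len(i - 1), j])) / R[i, i]
    }
  }
  R
}
A <- matrix(c(2, -1, 1, -1, 2, -1, 1, -1, 2), 3, 3)  ## the V used in the example below
range(my_chol(A) - chol(A))   ## should be numerically zero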

Example (continued): The following simulates a random draw from $N(\mu, V)$, for the $\mu$ and $V$ given in the code.

  V <- matrix(c(2,-1,1,-1,2,-1,1,-1,2),3,3)
  mu <- c(1,-1,3)

  R <- chol(V)             ## Cholesky factor of V
  z <- rnorm(3)            ## iid N(0,1) deviates
  y <- t(R)%*%z + mu       ## N(mu,V) deviates
  y
##           [,1]
## [1,]  1.621485
## [2,] -2.069023
## [3,]  3.939876
The following simulates 1000 such vectors, and checks their observed mean and covariance.
  Z <- matrix(rnorm(3000),3,1000) ## 1000 N(0,I) 3-vectors
  Y <- t(R)%*%Z + mu              ## 1000 N(mu,V) vectors
  ## and check that they behave as expected...
  rowMeans(Y)                     ## observed mu
## [1]  1.033903 -1.017923  2.970056
  (Y-mu)%*%t(Y-mu)/1000           ## observed V
##            [,1]      [,2]       [,3]
## [1,]  2.0160784 -1.008641  0.9942745
## [2,] -1.0086408  1.970159 -1.0416559
## [3,]  0.9942745 -1.041656  1.9563587
As a second application of the Cholesky decomposition, consider evaluating the log likelihood of $\mu$ and $V$,
$$ l = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\log|V| - \frac{1}{2}(y - \mu)^TV^{-1}(y - \mu). $$

If we were simply to invert $V$ to evaluate the final term, it would cost $O(n^3)$ flops, and we would still need to evaluate the determinant. A Cholesky based approach is much better. It is easy to see that $V^{-1} = R^{-1}R^{-T}$, where $R$ is the Cholesky factor of $V$. So the final term in the log likelihood can be written as $-z^Tz/2$ where $z = R^{-T}(y - \mu)$. Notice that we don't actually need to evaluate $R^{-T}$, but simply to solve $R^Tz = y - \mu$ for $z$. To see how this is done, consider a $4 \times 4$ case again:
$$
\begin{pmatrix}
R_{11} & 0 & 0 & 0 \\
R_{12} & R_{22} & 0 & 0 \\
R_{13} & R_{23} & R_{33} & 0 \\
R_{14} & R_{24} & R_{34} & R_{44}
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \\ z_4 \end{pmatrix}
=
\begin{pmatrix} y_1 - \mu_1 \\ y_2 - \mu_2 \\ y_3 - \mu_3 \\ y_4 - \mu_4 \end{pmatrix}.
$$

If this system of equations is solved from the top down, then there is only one unknown at each stage: the first equation involves only $z_1$, the second only $z_1$ and $z_2$ (with $z_1$ already known), and so on.

The generalisation of this forward substitution process to $n$ dimensions is obvious, as is the fact that it costs $O(n^2)$ flops — much cheaper than explicit formation of $R^{-T}$, which would involve applying forward substitution to find each column of the unknown $n \times n$ matrix in the equation $R^TR^{-T} = I$, at $O(n^3)$ cost. A minimal sketch of forward substitution is given below.
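Here is a naive R sketch of the forward substitution recursion, checked against the built-in routine mentioned next (the function name fwd_sub is illustrative only).

fwd_sub <- function(L, b) {    ## solve L z = b for lower triangular L
  n <- length(b)
  z <- numeric(n)
  for (i in 1:n) {
    z[i] <- (b[i] - sum(L[i, seq_len(i - 1)] * z[seq_len(i - 1)])) / L[i, i]
  }
  z
}
L <- t(R)   ## lower triangular factor from the example above (R <- chol(V))
all.equal(fwd_sub(L, c(1, 2, 3)), forwardsolve(L, c(1, 2, 3)))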

In R there is a routine forwardsolve for doing forward substitution with a lower triangular matrix (and a routine backsolve for performing the equivalent back substitution with upper triangular matrices). Before using it, we still need to consider $|V|$. Again the Cholesky factor helps. From general properties of determinants we know that $|R^T||R| = |V|$, but because $R$ is triangular $|R| = \prod_{i=1}^n R_{ii}$. So given the Cholesky factor, calculation of the determinant is $O(n)$.

Example: The following evaluates the log likelihood of the covariance matrix V and mean vector mu, from the previous example, given an observed $y^T = (1, 2, 3)$.

  y <- 1:3
  z <- forwardsolve(t(R),y-mu)
  logLik <- -length(y)*log(2*pi)/2 -
      sum(log(diag(R))) - sum(z*z)/2
  logLik
## [1] -6.824963
Note that Cholesky decomposition of a matrix that is not positive definite will fail. Even a positive semi-definite matrix will not work, since in that case a leading diagonal element of the Cholesky factor will become zero, so that computation of the off-diagonal elements on the same row is impossible. Positive semi-definite matrices are reasonably common, so this is a practical problem. For the positive semi-definite case, it is possible to modify the Cholesky decomposition by pivoting: that is, by re-ordering the rows/columns of the original matrix so that the zeroes end up at the end of the leading diagonal of the Cholesky factor, in rows that are all zero. This will not be pursued further here. Rather we will consider a more general matrix decomposition, which provides matrix square roots along with much else.
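As a brief aside (not pursued in the text), R's chol does offer pivoting via its pivot argument; the toy rank-1 matrix here is made up purely for illustration.

B <- matrix(1, 2, 2)                           ## rank 1, positive semi-definite
try(chol(B))                                   ## fails: not positive definite
Rp <- suppressWarnings(chol(B, pivot = TRUE))  ## pivoted Cholesky succeeds
piv <- attr(Rp, "pivot")                       ## the re-ordering used
crossprod(Rp)[order(piv), order(piv)]          ## reconstructs B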

2.4 Eigen-decomposition (spectral-decomposition)

Any symmetric matrix, $A$, can be written as
$$ A = U\Lambda U^T, \qquad (2.1) $$
where the matrix $U$ is orthogonal and $\Lambda$ is a diagonal matrix, with $i$th leading diagonal element $\lambda_i$ (conventionally $\lambda_i \ge \lambda_{i+1}$). Post-multiplying both sides of the decomposition by $U$ we have
$$ AU = U\Lambda. $$

Considering this system one column at a time and writing $u_i$ for the $i$th column of $U$ we have:
$$ Au_i = \lambda_i u_i, $$
i.e. the $\lambda_i$ are the eigenvalues of $A$, and the columns of $U$ are the corresponding eigenvectors. (2.1) is the eigen-decomposition or spectral decomposition of $A$.
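A quick numerical check of these identities, using eigen and the matrix V from the Cholesky example (purely illustrative):

ev <- eigen(V, symmetric = TRUE)
U <- ev$vectors; lambda <- ev$values
range(U %*% diag(lambda) %*% t(U) - V)      ## V = U Lambda U'
range(V %*% U[, 1] - lambda[1] * U[, 1])    ## V u_1 = lambda_1 u_1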

We will not go over the computation of the eigen-decomposition in detail, but it is not done using the determinant and characteristic equation of $A$. One practical scheme is as follows: (i) the matrix $A$ is first reduced to tri-diagonal form using repeated pre- and post-multiplication by simple orthogonal matrices called Householder rotations (each a rank-one modification of the identity), (ii) an iterative scheme called QR-iteration then pre- and post-multiplies the tri-diagonal matrix by even simpler orthogonal matrices, in order to reduce it to diagonal form. At this point the diagonal matrix contains the eigenvalues, and the product of all the orthogonal matrices gives $U$. Eigen-decomposition is $O(n^3)$, but a good symmetric eigen routine is around 10 times as computationally costly as a Cholesky routine.

An immediate use of the eigen-decomposition is to provide an alternative characterisation of positive (semi-)definite matrices. All the eigenvalues of a positive (semi-)definite matrix must be positive (non-negative) and real. This is easy to see. Were some eigenvalue, $\lambda_i$, to be negative (zero), then the corresponding eigenvector $u_i$ would result in $u_i^TAu_i$ being negative (zero). At the same time the existence of an $x$ such that $x^TAx$ is negative (zero) leads to a contradiction unless at least one eigenvalue is negative (zero).¹

2.4.1 Powers of matrices

Consider raising $A$ to the power $m$:
$$ A^m = AA \cdots A = U\Lambda U^TU\Lambda U^T \cdots U\Lambda U^T = U\Lambda^m U^T, $$
where $\Lambda^m$ is just the diagonal matrix with $\lambda_i^m$ as the $i$th leading diagonal element. This suggests that any real valued function, $f$, of a real valued argument, which has a power series representation, has a natural generalisation to a symmetric matrix valued function of a symmetric matrix argument, i.e.
$$ f(A) \equiv Uf(\Lambda)U^T, $$
where $f(\Lambda)$ denotes the diagonal matrix with $i$th leading diagonal element $f(\lambda_i)$. E.g. $\exp(A) = U\exp(\Lambda)U^T$.

2.4.2 Another matrix square root

For matrices with non-negative eigenvalues we can generalise to non-integer powers. For example, it is readily verified that $A^{1/2} = U\Lambda^{1/2}U^T$ has the property that $A^{1/2}A^{1/2} = A$. Notice (i) that $A^{1/2}$ is not the same as the Cholesky factor, emphasising the non-uniqueness of matrix square roots, and (ii) that, unlike the Cholesky factor, $A^{1/2}$ is well defined for positive semi-definite matrices (and can therefore be applied to any covariance matrix). Also, the symmetry of this square root can sometimes be convenient.
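A short sketch of this eigen-based square root for the earlier V, which also exercises the general $f(A) = Uf(\Lambda)U^T$ recipe above; the comparison with chol emphasises the non-uniqueness.

ev <- eigen(V, symmetric = TRUE)
Vhalf <- ev$vectors %*% diag(sqrt(ev$values)) %*% t(ev$vectors)
range(Vhalf %*% Vhalf - V)   ## Vhalf is a square root of V...
Vhalf - chol(V)              ## ...but it is not the Cholesky factor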

2.4.3 Matrix inversion, rank and condition

Continuing in the same vein we can write
$$ A^{-1} = U\Lambda^{-1}U^T, $$
where the diagonal matrix $\Lambda^{-1}$ has $i$th leading diagonal element $1/\lambda_i$. Clearly we have a problem if any of the $\lambda_i$ are zero, for the matrix inverse will be undefined. A matrix with no zero eigenvalues is termed full rank. A matrix with any zero eigenvalues is rank deficient and does not have an inverse. The number of non-zero eigenvalues is the rank of a matrix.

For some purposes it is sufficient to define a generalised inverse or pseudo-inverse when faced with rank deficiency, by finding the reciprocal of the non-zero eigenvalues, but setting the reciprocal of the zero eigenvalues to zero. We will not pursue this here.

It is important to understand the consequences of rank deficiency quite well when performing matrix operations involving matrix inversion/matrix equation solving. This is because near rank deficiency is rather easy to achieve by accident, and in finite precision arithmetic it is as bad as rank deficiency. First consider trying to solve
$$ Ax = y \qquad (2.2) $$
for $x$ when $A$ is rank deficient. In terms of the eigen-decomposition the solution is
$$ x = U\Lambda^{-1}U^Ty, $$
i.e. $y$ is rotated to become $y' = U^Ty$, the elements of $y'$ are then divided by the eigenvalues, $x_i' = y_i'/\lambda_i$, and the reverse rotation is applied to the result, $x = Ux'$. The problem is that $x_i'$ is not defined if $\lambda_i = 0$. This is just a different way of showing something that you already know: rank deficient matrices can not be inverted. But the approach also helps in the understanding of near rank deficiency and ill conditioning.

An illustrative example highlights the problem. Suppose that the $n \times n$ symmetric matrix $A$ has $n-1$ distinct eigenvalues ranging from 0.5 to 1, and one much smaller magnitude eigenvalue $\epsilon$. Further suppose that we wish to compute with $A$, on a machine that can represent real numbers to an accuracy of 1 part in $\epsilon^{-1}$. Now consider solving the system
$$ Ax = u_1 $$
for $x$, where $u_1$ is the dominant eigenvector of $A$. Clearly the correct solution is $x = u_1$, but now consider computing the answer. As before we have a formal solution
$$ x = U\Lambda^{-1}U^Tu_1, $$
but although analytically $u_1' = U^Tu_1 = (1, 0, 0, \ldots, 0)^T$, the best we can hope for computationally is to get $u_1' = (1 + e_1, e_2, e_3, \ldots, e_n)^T$, where the numbers $e_i$ are of the order of $\pm\epsilon$. For convenience, suppose that $e_n = \epsilon$. Then, approximately, $\Lambda^{-1}u_1' = (1, 0, 0, \ldots, 1)^T$, and $x = U\Lambda^{-1}u_1' = u_1 + u_n$, which is not correct. Similar distortions would occur if we used any of the other first $n-1$ eigenvectors in place of $u_1$: they all become distorted by a spurious $u_n$ component, with only $u_n$ itself escaping.

Now consider an arbitrary vector $y$ on the r.h.s. of (2.2). We can always write it as some weighted sum of the eigenvectors, $y = \sum_i w_iu_i$. This emphasises how bad the ill-conditioning problem is: all but one of $y$'s components are seriously distorted when multiplied by $A^{-1}$. By contrast, multiplication by $A$ itself would lead only to distortion of the $u_n$ component of $y$, and not the other eigenvectors, but the $u_n$ component is the component that is so heavily shrunk by multiplication by $A$ that it makes almost no contribution to the result, unless we have the misfortune to choose a $y$ that is proportional to $u_n$ and nothing else.

A careful examination of the preceding argument reveals that what really matters in determining the seriousness of the consequences of near rank deficiency is the ratio of the largest magnitude to the smallest magnitude eigenvalues, i.e.
$$ \kappa = \max|\lambda_i| \big/ \min|\lambda_i|. $$

This quantity is a condition number for $A$. Roughly speaking it is the factor by which errors in $y$ will be multiplied when solving $Ax = y$ for $x$. Once $\kappa$ begins to approach the reciprocal of the machine precision we’re in trouble.² A system with a large condition number is referred to as ill-conditioned. Orthogonal matrices have $\kappa = 1$, which is why numerical analysts like them so much.

Example. We are now in a position to understand what happened when we tried to use the ‘normal equations’ to fit the simple quadratic model in the introduction. Let’s explicitly calculate the condition number of $X^TX$.

  set.seed(1)
  xx <- sort(runif(100))
  x <- xx + 100  # regenerated same x data
  X <- model.matrix(~x + I(x^2)) # and the model matrix
  XtX <- crossprod(X)          # form t(X)%*%X (with half the flop count)
  lambda <- eigen(XtX)$values
  lambda[1]/lambda[3]  # the condition number of X’X
## [1] 2.355538e+18
Of course this raises a couple of obvious questions. Could we have diagnosed the problem directly from $X^TX$? And how does the lm function avoid this problem? Answers soon, but first consider a trick for reducing $\kappa$.

2.4.4 Preconditioning

The discussion of condition numbers given above was for systems involving unstructured matrices (albeit presented only in the context of symmetric matrices). Systems involving matrices with special structure are sometimes less susceptible to ill-conditioning than naive computation of the condition number would suggest. For example, if $D$ is a diagonal matrix, then we can accurately solve $Dy = x$ for $y$, however large $\kappa(D)$ is: overflow or underflow are the only limits.

This basic fact can sometimes be exploited to re-scale a problem to improve computational stability. As an example consider diagonal preconditioning of the computation of $(X^TX)^{-1}$, above. For the original $X^TX$ we have

solve(XtX)

## Error in solve.default(XtX): system is computationally singular: reciprocal condition number = 3.98647e-19

But now suppose that we create a diagonal matrix $D$, with elements $D_{ii} = 1/\sqrt{(X^TX)_{ii}}$. Clearly
$$ (X^TX)^{-1} = D(DX^TXD)^{-1}D, $$
but $DX^TXD$ turns out to have a much lower condition number than $X^TX$…

  D <- diag(1/diag(XtX)^.5)
  DXXD <- D%*%XtX%*%D
  lambda <- eigen(DXXD)$values
  lambda[1]/lambda[3]
## [1] 429397494367
As a result we can now compute the inverse of $X^TX$.
  XtXi <- D%*%solve(DXXD,D) ## computable inverse of X’X
  XtXi %*% XtX              ## how accurate?
##        (Intercept)             x       I(x^2)
## [1,]  9.999640e-01 -3.262288e-03 -0.352586841
## [2,]  2.920875e-07  1.000032e+00  0.002340679
## [3,] -1.151890e-09 -1.143870e-07  0.999992137
…not perfect, but better than no answer at all.

2.4.5 Asymmetric eigen-decomposition

If positive definite matrices are the positive reals of the square matrix system, and symmetric matrices are the reals, then asymmetric matrices are the complex numbers. As such they generally have complex eigenvectors and eigenvalues. It becomes necessary to distinguish right and left eigenvectors (one is no longer the transpose of the other), and the right and left eigenvector matrices are no longer orthogonal matrices (although they are still inverses of each other). Eigen-decomposition of asymmetric matrices is still $O(n^3)$, but is substantially more expensive than the symmetric case. Using R on the laptop used to prepare these notes, eigen-decomposition of a $1000 \times 1000$ symmetric matrix took 0.5 seconds. The asymmetric equivalent took 3 seconds.

The need to compute with complex numbers somewhat reduces the practical utility of the eigen-decomposition in numerical methods for statistics. It would be better to have a decomposition that provides some of the useful properties of the eigen-decomposition without the inconvenience of complex numbers. The singular value decomposition meets this need.

2.5 Singular value decomposition

The singular values of an $n \times m$ matrix, $X$ ($n \ge m$), are the positive square roots of the eigenvalues of $X^TX$. If $X$ is positive semi-definite then its singular values are just its eigenvalues, of course. For symmetric matrices, eigenvalues and singular values differ only in sign, if at all. However the singular values are also well defined, and real, for matrices that are not even square, let alone symmetric.

Related to the singular values is the singular value decomposition
$$ X = UDV^T, $$
where $U$ has orthogonal columns and is the same dimension as $X$, while the $m \times m$ diagonal matrix $D$ contains the singular values (usually arranged in descending order), and $V$ is an $m \times m$ orthogonal matrix.

The singular value decomposition is computed using a similar approach to that used for the symmetric eigen problem: orthogonal bi-diagonalization, followed by QR iteration, at $O(nm^2)$ cost (it does not involve forming $X^TX$). It is more costly than symmetric eigen-decomposition, but cheaper than the asymmetric equivalent. Example R timings for a $1000 \times 1000$ matrix are 0.5 seconds for symmetric eigen, 1 second for SVD and 3 seconds for asymmetric eigen.

n <- 1000
XA <- matrix(rnorm(n*n), ncol=n)
XXt <- XA %*% t(XA)
system.time(chol(XXt))
##    user  system elapsed 
##   0.045   0.001   0.012
system.time(eigen(XXt, symmetric=TRUE))
##    user  system elapsed 
##   0.904   0.276   0.316
system.time(eigen(XA))
##    user  system elapsed 
##   4.818   2.073   1.736
system.time(svd(XA))
##    user  system elapsed 
##   1.794   0.437   0.582
system.time(qr(XA))
##    user  system elapsed 
##   0.309   0.103   0.223
The number of its non-zero singular values gives the rank of a matrix, and the SVD is the most reliable method for numerical rank determination (by examination of the size of the singular values relative to the largest singular value). In a similar vein, a general definition of condition number is the ratio of largest and smallest singular values: $\kappa = d_1/d_m$.

Example Continuing the simple regression disaster from the introduction, consider the singular values of X

  d <- svd(X)$d # get the singular values of X
  d
## [1] 1.010455e+05 2.662169e+00 6.474081e-05
Clearly, numerically X is close to being rank 2, rather than rank 3. Turning to the condition number…
  d[1]/d[3]
## [1] 1560769713
is rather large, especially since it is easy to show that the condition number of $X^TX$ must be the square of this. So we now have a pretty clear diagnosis of the cause of the original problem.

In fact the SVD not only provides a diagnosis of the problem, but also one possible solution. We can re-write the solution to the normal equations in terms of the SVD of $X$:
$$ \hat\beta = (X^TX)^{-1}X^Ty = (VDU^TUDV^T)^{-1}VDU^Ty = VD^{-2}V^TVDU^Ty = VD^{-1}U^Ty. $$

Notice two things…

  1. The condition number of the system that we have ended up with is exactly the condition number of $D$, i.e. the square root of the condition number involved in the direct solution of the normal equations.

  2. Comparing the final r.h.s. expression to the representation of an inverse in terms of its eigen-decomposition, it is clear that $VD^{-1}U^T$ is a sort of pseudo-inverse of $X$ (a short R sketch of this approach is given below).
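Here is a hedged sketch of the SVD solution for the ill-conditioned $X$ above; the response y_sim is simulated purely for illustration, since the data from the introduction are not reproduced here.

set.seed(2)
y_sim <- drop(X %*% c(1, 1, 1) + rnorm(100))       ## made-up response
sv <- svd(X)
beta_svd <- sv$v %*% ((t(sv$u) %*% y_sim) / sv$d)  ## beta = V D^{-1} U'y
cbind(beta_svd, coef(lm(y_sim ~ x + I(x^2))))      ## compare with lm's QR-based fit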

The SVD has many uses. One interesting one is low rank approximation of matrices. In a well defined sense, the best rank $k \le m$ approximation to a matrix $X$ can be expressed in terms of the SVD of $X$ as
$$ \tilde X = U\tilde DV^T, $$
where $\tilde D$ is $D$ with all but the $k$ largest singular values set to $0$. Using this result to find low rank approximations to observed covariance matrices is the basis for several dimension reduction techniques in multivariate statistics (although, of course, a symmetric eigen decomposition is then equivalent to SVD). One issue with this sort of approximation is that the full SVD is computed, despite the fact that part of it is then to be discarded (don’t be fooled by routines that ask you how many eigen or singular vectors to return — I saved .1 of a second, out of 13 by getting R routine svd to only return the first $k$ columns of $U$ and $V$). Look up Lanczos methods and Krylov subspaces for approaches that avoid this sort of waste.
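A minimal sketch of a rank-$k$ truncation via svd (the matrix M and the choice of $k$ are illustrative; note that svd still computes the full decomposition internally — the nu and nv arguments only restrict what is returned):

set.seed(3)
M <- matrix(rnorm(100 * 100), 100, 100)
k <- 5
sv <- svd(M, nu = k, nv = k)             ## keep only the first k singular vectors
Mk <- sv$u %*% (sv$d[1:k] * t(sv$v))     ## U_k D_k V_k', the best rank-k approximation
norm(M - Mk, "F")                        ## Frobenius-norm error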

2.6 The QR decomposition

The SVD provided a stable solution to the linear model fitting example, but at a rather high computational cost, prompting the question of whether similar stability could be obtained without the full cost of SVD? The QR decomposition provides a positive answer. We can write any $n \times m$ rectangular matrix $X$ ($n \ge m$) as the product of a matrix with orthogonal columns and an upper triangular matrix,
$$ X = QR, $$
where $R$ is an $m \times m$ upper triangular matrix and $Q$ is of the same dimension as $X$, with orthogonal columns (so $Q^TQ = I_m$, but $QQ^T \ne I_n$). The QR decomposition has a cost of $O(nm^2)$, but it is about 1/3 of the cost of SVD.

Consider the linear modelling example again:
$$ \hat\beta = (X^TX)^{-1}X^Ty = (R^TQ^TQR)^{-1}R^TQ^Ty = (R^TR)^{-1}R^TQ^Ty = R^{-1}Q^Ty. $$

Now the singular values of $R$ are the same as the singular values of $X$, so this system has the same condition number as $X$. That is, the QR method provides the stability of SVD without the computational cost (although it still has about twice the cost of solving the normal equations via Cholesky decomposition). In fact the QR method for solving least squares estimates can be derived from first principles, without using the normal equations, in which case it also provides a very compact way of deriving much of the distributional theory required for linear modelling. The R routine lm uses the QR approach, which is why, in the Introduction, it was able to deal with the case for which the normal equations failed. Of course it is not magic: the subsequent failure even of lm arose because I had caused the model matrix condition number to become so bad that even the QR approach fails (I should have centred $x$, by subtracting its mean, or used orthogonal polynomials, via poly).
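As a sketch (reusing the simulated y_sim from the SVD example above), the same fit via the QR decomposition:

qrx <- qr(X)                           ## QR decomposition of the model matrix
f <- qr.qty(qrx, y_sim)[1:ncol(X)]     ## f = Q'y (first m elements)
beta_qr <- backsolve(qr.R(qrx), f)     ## solve R beta = f by back substitution
cbind(beta_qr, qr.coef(qrx, y_sim))    ## agrees with R's packaged QR solver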

It is also worth noting the connection between the QR and Cholesky decompositions. If $X = QR$, then $X^TX = R^TQ^TQR = R^TR$, where $R$ is upper triangular, so the QR decomposition of $X$ gives the Cholesky factor of $X^TX$ “for free”. Note that most QR routines don’t bother ensuring that the diagonal elements of $R$ are positive. If the positivity of the diagonal elements is an issue (it rarely matters), it can easily be fixed by flipping the signs of the relevant rows of $R$. This relationship between the QR and Cholesky decompositions can be used to construct the Cholesky factor of a covariance matrix directly, in a stable way, without ever explicitly forming the covariance matrix.
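A quick sketch of this relationship on a well-conditioned random matrix (W is made up for the illustration):

set.seed(4)
W <- matrix(rnorm(500 * 10), 500, 10)
Rq <- qr.R(qr(W))
Rq <- Rq * sign(diag(Rq))            ## flip row signs so the diagonal is positive
range(Rq - chol(crossprod(W)))       ## matches the Cholesky factor of W'W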

Another application of the QR decomposition is determinant calculation. If $X$ is square and $X = QR$ then
$$ |X| = |Q||R| = \pm|R| = \pm\prod_i R_{ii}, $$
since $R$ is triangular, while $Q$ is orthogonal with determinant $\pm 1$. Usually we need the log of the absolute determinant,
$$ \log|\det X| = \sum_i \log|R_{ii}|, $$
which underflows to zero much less easily than the determinant itself.
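A small check on a made-up square matrix, comparing with R's determinant(), which also works on the log scale:

set.seed(5)
Xs <- matrix(rnorm(9), 3, 3)
sum(log(abs(diag(qr.R(qr(Xs))))))   ## log|det Xs| via QR
determinant(Xs)$modulus             ## base R's log-determinant, for comparison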

2.7 Woodbury’s formula

There are a large number of matrix identities that are useful for simplifying computations, and the “matrix cookbook” is an excellent source of these. However, there is a set of identities relating to the updating of the inverse of a large matrix that is worthy of particular mention. There are several variants, going by several names: the Woodbury identity, the Sherman-Morrison-Woodbury formula, the Sherman-Morrison formula, and so on, all refer to versions or special cases of essentially the same result. Here I’ll just give the most commonly encountered variants.

The context is a (large) $n \times n$ matrix, $A$, and its inverse, $A^{-1}$. We suppose that $A^{-1}$ is already known, but that now $A$ is subject to a low-rank update of the form $A + UV^T$, for $U$ and $V$ both $n \times k$ matrices with $k \ll n$. The problem is how to compute the updated inverse $(A + UV^T)^{-1}$ in an efficient way. Woodbury’s formula gives the answer:
$$ (A + UV^T)^{-1} = A^{-1} - A^{-1}U(I + V^TA^{-1}U)^{-1}V^TA^{-1}. $$

Note that crucially, the inverse on the LHS is $n \times n$, whereas the new inverse required on the RHS, $(I + V^TA^{-1}U)^{-1}$, is only $k \times k$. This is what makes the result so useful.

The result is easy to prove by direct multiplication. First, pre-multiply the RHS by $A + UV^T$, multiply out, and simplify, to get $I$. This tells us that the RHS is a right-inverse. Next post-multiply the RHS by $A + UV^T$, multiply out, and simplify to get $I$, thus confirming that the RHS is both a left- and right-inverse, and hence the inverse.

The form given above is sufficiently general for most applications in statistical computing, but it is worth noting a modest generalisation, often referred to as the Sherman-Morrison-Woodbury formula, which introduces an additional $k \times k$ matrix, $C$:
$$ (A + UCV^T)^{-1} = A^{-1} - A^{-1}U(C^{-1} + V^TA^{-1}U)^{-1}V^TA^{-1}. $$

This can be proved by direct multiplication, as before. There is also a very commonly encountered special case, where $U$ and $V$ are just vectors, $u$ and $v$ say ($k = 1$). In that case the identity is often referred to as the Sherman-Morrison formula, and reduces to
$$ (A + uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1 + v^TA^{-1}u}. $$

Writing it this way makes it clear that no new matrix inversion is required to perform the update.

There are numerous applications of these identities in statistical computing, but updating a precision matrix based on a low-rank update of a covariance matrix is a typical example.
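A quick numerical check of the Sherman-Morrison version (all names and the test matrix are made up for the sketch):

set.seed(6)
k <- 10
Aw <- crossprod(matrix(rnorm(k * k), k, k)) + diag(k)  ## a well-conditioned matrix
Awi <- solve(Aw)                                       ## its (known) inverse
u <- rnorm(k); v <- rnorm(k)
upd <- Awi - (Awi %*% u %*% t(v) %*% Awi) / drop(1 + t(v) %*% Awi %*% u)
range(upd - solve(Aw + u %*% t(v)))                    ## should be numerically zero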

2.8 Further reading on matrices

These notes can only scratch the surface of numerical linear algebra. Hopefully you now have a clearer idea of the importance of stability and efficiency in matrix computation, and some understanding of a few key decompositions and some of their applications. My hope is that when writing any code involving matrices, you’ll at least think about flops and condition numbers. If you want to find out more, here are some starting points. See section 6.3 as well.

  • Golub, GH and CF Van Loan (1996) Matrix Computations 3rd ed. Johns Hopkins. This is the reference for numerical linear algebra, but not the easiest introduction. It is invaluable if you are writing carefully optimised matrix computations, directly calling BLAS and LAPACK functions.

  • Watkins, DS (2002) Fundamentals of Matrix Computations 2nd ed. Wiley. This is a more accessible book, but still aimed at those who really want to understand the material (the author encourages you to write your own SVD routine, and with this book you can).

  • Press WH, SA Teukolsky, WT Vetterling and BP Flannery (2007) Numerical Recipes (3rd ed), Cambridge. Very easily digested explanation of how various linear algebra routines work, with code. But there are better versions of most of the algorithms in this book.

  • The GNU scientific library is at http://www.gnu.org/software/gsl/, and is a better bet than Numerical Recipes for actual code.

  • For state of the art numerical linear algebra use LAPACK: http://www.netlib.org/lapack/.

  • Matlab offers a convenient, but expensive, platform for matrix computation. For an introduction see e.g. Pratap, R (2006) Getting Started with Matlab, Oxford.

  • Octave is Matlab-like free software: http://www.gnu.org/software/octave/.

  • Julia: http://julialang.org/ is another dynamically typed scripting language aimed at scientific computing.

  • Monahan, JF (2001) Numerical Methods of Statistics provides a useful treatment of numerical linear algebra for statistics.

  • Harville, DA (1997) Matrix Algebra From a Statistician’s Perspective, Springer, provides a wealth of helpful theoretical material.



  1. We can write $x = Uz$ for some vector $z$. So $x^TAx = z^T\Lambda z = \sum_i \lambda_i z_i^2$, which can only be negative (zero) if some $\lambda_i$ is negative (zero).
  2. Because the condition number is so important in numerical computation, there are several methods for getting an approximate condition number more cheaply than via eigen decomposition — e.g. see ?kappa in R.