# Singular value decompositions

[Image: The canoe in question.]

One way to view linear algebra is as an attempt to generalize some of the results about integers to results about matrices.

[Image: Using one singular value, so each column of the image is a multiple of a single vector.]

For example, we like to think of the identity matrix as the number 1.  We would also like to think of division as multiplying by an inverse, supposing the inverse exists.  If it does not, then this “matrix division” will not be defined, and we might think of this singular matrix as being like the number 0.  This may be why it is satisfying that the determinant of such a matrix is zero (and one might reflect on matrices other than the identity with determinant 1, and what role they play).

Another goal-by-analogy in linear algebra is to provide a canonical factorization of matrices, a la prime number factorization in the integers.  Some attempts to do this include

• A = LDU, the factorization of a matrix A into the product of a lower triangular (L), diagonal (D), and upper triangular (U) matrix.  This is pleasant in that (allowing row permutations) it works for any matrix, square or rectangular.

[Image: The same canoe using two singular values.]

• A = QΛQ⁻¹, where the columns of Q are the eigenvectors of A and Λ is the diagonal matrix of the associated eigenvalues.  This decomposition requires that A is square, and that it has a full set of linearly independent eigenvectors.
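To make the eigendecomposition concrete, here is a small sketch in Python with NumPy (rather than the MATLAB used later in this post); the example matrix is my own choice, picked symmetric so a full set of independent eigenvectors is guaranteed:

```python
import numpy as np

# A small symmetric matrix, so a full set of linearly
# independent eigenvectors is guaranteed to exist.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eig(A)   # columns of Q are eigenvectors of A
Lam = np.diag(eigvals)          # diagonal matrix of eigenvalues

# Reconstruct A = Q Lam Q^{-1}
A_rebuilt = Q @ Lam @ np.linalg.inv(Q)
print(np.allclose(A, A_rebuilt))  # True
```

For a matrix without n independent eigenvectors (e.g. `[[1, 1], [0, 1]]`), `Q` is singular (or nearly so) and this reconstruction breaks down, which is exactly the restriction noted above.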

## The singular value decomposition

As a nontechnical intro to the singular value decomposition, let me write it down: A = USVᵀ, where A is a (possibly rectangular) matrix, U is a matrix whose columns are the eigenvectors of AAᵀ, V is a matrix whose columns are the eigenvectors of AᵀA, and S is a diagonal matrix whose entries are the square roots of the eigenvalues of AAᵀ (equivalently, of AᵀA).
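These relationships are easy to check numerically; here is a sketch in Python with NumPy (the random rectangular matrix is just an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # an arbitrary rectangular matrix

U, s, Vt = np.linalg.svd(A)       # A = U @ S @ Vt; s holds the singular values

# The squared singular values are the nonzero eigenvalues
# of both A A^T and A^T A.
eig_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]
print(np.allclose(eig_AAt[:3], s**2))   # True

# Rebuild the diagonal S and check the factorization.
S = np.zeros_like(A)
np.fill_diagonal(S, s)
print(np.allclose(A, U @ S @ Vt))       # True
```

Note that NumPy returns Vᵀ directly (as `Vt`), and that S has the same rectangular shape as A, with the singular values on its diagonal.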

[Image: With five singular values, you might be able to guess there is a canoe there.]

All this is well and good, but the really compelling reason that I care about the singular value decomposition of a matrix is that it provides a fast, easy, and intuitive way to compress images.

First, remember that a greyscale image, I, is just a two-dimensional array of numbers between 0 and 1; we associate each pixel with the intensity at that point.  Now, the way to think about the singular values (i.e., the diagonal of S) is that they rank the directions that matter most.  That is to say: if you want to recreate an image using various multiples of a single column vector, the best choice is the column of U corresponding to the largest singular value of the image.  More generally, if you want to recreate an image using a linear combination of n column vectors, the best choices are the columns of U corresponding to the n largest singular values.

[Image: Letting the number of singular values grow to 267 (the total number of singular values) for the canoe.]

As a quick calculation, if your image is 1000 x 2000 and you compress it by keeping 100 (out of 1000) singular values, then you need to store only the first 100 columns of U (which is 1000 x 1000 in full), the first 100 columns of V (which is 2000 x 2000 in full), and the 100 singular values.

Total numbers stored = (1000 x 100) + (2000 x 100) + 100 = 100,000 + 200,000 + 100 = 300,100.
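The same bookkeeping, as a two-line check in Python:

```python
# Storage for a rank-100 truncated SVD of a 1000 x 2000 image.
m, n, k = 1000, 2000, 100
compressed = m * k + n * k + k   # columns of U, columns of V, singular values
original = m * n
print(compressed, original)      # 300100 2000000
print(original / compressed)     # roughly 6.7x fewer numbers to store
```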

Compare this to the original image, which had 1000 x 2000 = 2,000,000 numbers, and you are doing pretty well.  I’ll note that JPEG makes a much better image (it shouldn’t be surprising that this isn’t the best, since it is not really natural to break an image up by columns of pixels), and that all these images were produced with a few lines of MATLAB:

I took an image, say “flowers.jpg”, and entered the code

```matlab
I = imread('flowers.jpg');
I = im2double(rgb2gray(I));              % svd expects a floating-point matrix
[U, S, V] = svd(I);
n = 100;                                 % choose how many singular values to keep
C = U(:,1:n) * S(1:n,1:n) * V(:,1:n)';
```

Then the matrix C is the rank-n approximation of the image, built from the n largest singular values.

[Image: A second image, with successively more singular values.]