Presentation transcript: "ECE 455 – Lecture 12: Orthogonal Matrices; Singular Value Decomposition (SVD); SVD for Image Compression" (D. van Alphen)

1 ECE 455 – Lecture 12: Orthogonal Matrices; Singular Value Decomposition (SVD); SVD for Image Compression

Recall: If vectors a and b have the same orientation (both row vectors or both column vectors), then a'b = b'a = dot(a, b), and dot(a, a) = length²(a).
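A quick numerical check of these identities (a sketch in NumPy; the vectors here are made-up illustrative values, not from the lecture):

```python
import numpy as np

# Two vectors with the same orientation (illustrative values)
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

# a'b = b'a = dot(a, b)
assert np.dot(a, b) == np.dot(b, a)

# dot(a, a) equals the squared length of a
assert np.isclose(np.dot(a, a), np.linalg.norm(a) ** 2)

print(np.dot(a, b))  # 1*4 + 2*(-1) + 3*2 = 8.0
```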

2 Orthogonal Matrices

Definition: A square matrix Q is orthogonal if its columns are orthonormal (i.e., mutually orthogonal and of unit length).

Let Q be an n-by-n matrix with columns q1, q2, …, qn. Then Q'Q = I_n, because:
(1) qi'qi = |qi|² = 1, since the columns are unit-length, so the diagonal elements of Q'Q are 1; and
(2) qi'qj = 0 for i ≠ j, since the columns are orthogonal, so the off-diagonal elements of Q'Q are 0.
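The claim Q'Q = I can be verified numerically. This sketch uses NumPy and an illustrative 2-by-2 matrix with orthonormal columns (the slide's own example matrix was not reproduced in the transcript):

```python
import numpy as np

# A 2-by-2 matrix with orthonormal columns (illustrative choice)
Q = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# Columns are unit-length (diagonal of Q'Q is 1) and mutually
# orthogonal (off-diagonal of Q'Q is 0), so Q'Q = I
assert np.allclose(Q.T @ Q, np.eye(2))
```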

3 Orthogonal Matrices, continued

So far: Q'Q = I_n. Similarly, Q Q' = I_n. Thus Q⁻¹ = Q'.

More properties of orthogonal matrices:
– If Q is orthogonal, so is Q'.
– The rows of any orthogonal matrix are also orthonormal.

Example 1: Q = [cos θ  -sin θ; sin θ  cos θ], so Q⁻¹ = Q' = [cos θ  sin θ; -sin θ  cos θ].

Note: This particular Q is called a rotation matrix. Multiplying a (2, 1) vector x by Q rotates x by the angle θ.
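A NumPy sketch of these properties for the rotation matrix, using an arbitrary illustrative angle:

```python
import numpy as np

theta = np.deg2rad(30)  # illustrative angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# For an orthogonal matrix, the transpose is the inverse
assert np.allclose(np.linalg.inv(Q), Q.T)

# Q' is also orthogonal, so the rows of Q are orthonormal too
assert np.allclose(Q @ Q.T, np.eye(2))
```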

4 Orthogonal Matrices, continued: Rotation Matrix Example

Let x be a (2, 1) vector and let θ = 90° in the rotation matrix. Then Qx is x rotated by 90°. Note: the length of the vector is not changed.
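A numerical version of this example (the specific vector x was not reproduced in the transcript, so an illustrative x = [3 4]' is assumed):

```python
import numpy as np

theta = np.deg2rad(90)
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([3.0, 4.0])   # illustrative vector, |x| = 5
z = Q @ x                  # rotate x by 90 degrees

# The length of the vector is not changed by the rotation
assert np.isclose(np.linalg.norm(z), np.linalg.norm(x))

print(z)  # approximately [-4, 3]
```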

5 Claim: Orthogonal Matrices Preserve Length

Multiplying x by an orthogonal matrix Q does not change the length: |Qx| = |x|.

Proof: |Qx|² = (Qx)'(Qx) = x' Q'Q x = x' x = |x|², since Q'Q = I (because Q⁻¹ = Q').

Example: Multiply [x y]' by the "plane rotation" matrix Q, giving z = Q [x y]':

|z|² = (x cos θ - y sin θ)² + (x sin θ + y cos θ)²
     = (x² cos²θ + y² sin²θ - 2xy sin θ cos θ) + (x² sin²θ + y² cos²θ + 2xy sin θ cos θ)
     = (x² + y²)(sin²θ + cos²θ) = x² + y² = |[x y]'|²
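The claim holds for any orthogonal matrix, not just rotations. A sketch checking it for an orthogonal matrix built from a QR factorization of a made-up random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# The Q factor of a QR factorization has orthonormal columns
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
x = rng.standard_normal(4)

# |Qx|^2 = x' Q'Q x = x'x = |x|^2, so the lengths agree
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```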

6 Orthogonal Matrices, Continued: Permutation Matrix Example

Recall that the permutation matrix P_ij was used in Gaussian elimination to exchange (or swap) 2 rows. Example: to find the 3-by-3 permutation matrix that swaps rows 1 and 2 by pre-multiplying a matrix, we start with the identity matrix I_3 and swap rows 1 and 2 (R1 ↔ R2):

P_12 = [0 1 0; 1 0 0; 0 0 1]

Note that the columns of P_12 are unit-length and orthogonal to each other; hence P_12 is an orthogonal matrix.

Claim: All permutation matrices are orthogonal.
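A NumPy sketch confirming both the row swap and the orthogonality of P_12 (the matrix being permuted is a made-up example):

```python
import numpy as np

# P12: identity with rows 1 and 2 swapped
P12 = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 1]], dtype=float)

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Pre-multiplying by P12 swaps rows 1 and 2 of A
assert np.array_equal(P12 @ A, A[[1, 0, 2], :])

# Permutation matrices are orthogonal: P'P = I
assert np.array_equal(P12.T @ P12, np.eye(3))
```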

7 Singular Value Decomposition (SVD)

A way to factor matrices: A = U Σ V', where
– A is any matrix of size (m, n);
– U and V are orthogonal matrices: U contains the "left singular vectors" and V contains the "right singular vectors"; and
– Σ is a diagonal matrix, containing the "singular values" on the diagonal.

Closely related to Principal Component Analysis. Mostly used for data compression – particularly for image compression.

8 SVD Format

Suppose that A is a matrix of size (m, n) and rank r. Then the SVD is

A = U Σ V'

where U = [u1 … ur … um] is (m, m), Σ is (m, n) with the singular values σ1 ≥ … ≥ σr > 0 on its diagonal, and V = [v1 … vr … vn] is (n, n).

9 SVD Factorization: Finding Σ and V

Start with: A = U Σ V'   (1)

Note that U'U = I, since U is orthogonal. Hence, left-multiply both sides of equation (1) by A':

A'A = (U Σ V')' (U Σ V') = V Σ' U'U Σ V' = V (Σ'Σ) V'

Note that Σ'Σ is a diagonal matrix, with the elements σi² on the diagonal. The eigenvalues of A'A are the σi²'s, and the eigenvectors are the columns of matrix V.
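This relationship can be checked numerically. The transcript omits the example matrix A itself, so the sketch below assumes A = [2 -2; 1 1], a reconstruction consistent with the products A'A = [5 -3; -3 5] and AA' = [8 0; 0 2] shown on the following slides:

```python
import numpy as np

# Assumed example matrix (reconstructed from A'A and AA' in the
# transcript; the original slide's A was not reproduced)
A = np.array([[2.0, -2.0],
              [1.0,  1.0]])

# Eigen-decomposition of the symmetric matrix A'A
evals, V = np.linalg.eigh(A.T @ A)

# The eigenvalues of A'A are the squared singular values of A
sigma = np.linalg.svd(A, compute_uv=False)   # descending order
assert np.allclose(np.sort(evals)[::-1], sigma ** 2)

print(np.sort(evals)[::-1])  # approximately [8, 2]
```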

10 SVD Example

Let A be the 2-by-2 matrix with A'A = [5 -3; -3 5]. MATLAB code to find the eigenvalues and eigenvectors of A'A:

>> [v d] = eig([5 -3; -3 5])
v =
   -0.7071   -0.7071
   -0.7071    0.7071
d =
    2     0
    0     8

Hence σ1² = 8 and σ2² = 2, so σ1 = sqrt(8) and σ2 = sqrt(2). (Note that MATLAB's eig returns the eigenvalues in ascending order, while the SVD orders the singular values from largest to smallest.)

11 SVD Factorization: Finding U

Start with: A = U Σ V'   (1)

Right-multiply both sides of equation (1) by A':

A A' = (U Σ V')(U Σ V')' = U Σ V'V Σ' U' = U (Σ Σ') U'

As before, Σ Σ' is a diagonal matrix, with the elements σi² on the diagonal. The eigenvalues of A A' are the σi²'s, and the eigenvectors of A A' are the columns of matrix U.

12 SVD Example, continued

For the same A, A A' = [8 0; 0 2]. MATLAB code to find the eigenvalues and eigenvectors of AA':

>> [U d] = eig([8 0; 0 2])
U =
    0   1
    1   0
d =
    2   0
    0   8

13 SVD Example, continued

Collecting the results, and reordering so the singular values are descending (σ1 = sqrt(8), σ2 = sqrt(2)): U comes from the eigenvectors of AA', giving U = [1 0; 0 1]; Σ = [sqrt(8) 0; 0 sqrt(2)]; and V comes from the eigenvectors of A'A, giving V = [-0.7071 -0.7071; 0.7071 -0.7071] (each singular-vector pair is determined only up to sign). Hence A = U Σ V'.
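A NumPy check of the complete factorization, again assuming the reconstructed example matrix A = [2 -2; 1 1] (the transcript omits A itself):

```python
import numpy as np

# Assumed example matrix, consistent with A'A = [5 -3; -3 5]
# and AA' = [8 0; 0 2] from the preceding slides
A = np.array([[2.0, -2.0],
              [1.0,  1.0]])

U, s, Vt = np.linalg.svd(A)

# Singular values are sqrt(8) and sqrt(2), matching the eigenvalues found
assert np.allclose(s, [np.sqrt(8), np.sqrt(2)])

# The factorization reassembles A exactly: A = U diag(s) V'
assert np.allclose(U @ np.diag(s) @ Vt, A)

# U and V are orthogonal
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))
```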

14 SVD for Data Compression

Recall A = U Σ V', with U = [u1 … ur … um]. Expanding the product gives

A = σ1 u1 v1' + σ2 u2 v2' + … + σr ur vr'   (2)

If A has rank r, then only r terms are required in the above formula for an exact representation of A. To approximate A, we use fewer than r terms in equation (2). Since the σi's are sorted from largest to smallest value in Σ, eliminating the last "few" terms in (2) has a small effect on the image.
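A NumPy sketch of equation (2) and its truncation, using a made-up random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A)

# Equation (2): A as a sum of rank-1 terms sigma_i * u_i * v_i'
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(A_sum, A)   # all r terms reproduce A exactly

# Keeping only the k largest terms gives a rank-k approximation
k = 2
A_k = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
assert np.linalg.matrix_rank(A_k) == k
```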

15 Example: Clown Image (200 by 320)

[Figures: the original uncompressed image (full rank: r = 200) and the compressed image using only the first 20 singular values.]

16 Clown Image Compression: MATLAB Code

% program clown_svd
load clown                  % uncompressed image, stored in X
figure(1); colormap(map);
size(X), r = rank(X), image(X)

[U D V] = svd(X);           % svd of the clown image
d = diag(D);                % creates vector with singular values
figure(2); semilogy(d);     % plot singular values

k = 20;                     % # of terms to be used in approx.
dd = d(1:k);                % pick off 20 largest singular values
UU = U(:,1:k); VV = V(:,1:k);
XX = UU*diag(dd)*VV';
figure(3); colormap(map)
size(XX), image(XX)         % compressed image

17 Singular Values for Clown Image

[Figure: semilog plot of the 200 singular values versus their index.]

For this image, the 20th singular value is 4% of the 1st singular value:

A = σ1 u1 v1' + … + σ20 u20 v20' + σ21 u21 v21' + … + σ200 u200 v200'

The trailing terms are negligible due to their small singular values σi.

18 Clown Image: How Much Compression?

Original image: 200 x 320 = 64,000 pixel values to store.

Compressed image: A ≈ σ1 u1 v1' + σ2 u2 v2' + … + σ20 u20 v20'
– Each ui has m = 200 components;
– Each vi has n = 320 components;
– Each σi is a scalar (1 component).

Total = 20 * (200 + 320 + 1) = 10,420 values to store.

Fraction of storage required for the compressed image: 10,420/64,000 = 16.3%.

General formula for the fraction of storage required: k * (m + n + 1)/(m*n), where k = # of terms included in the approximation.
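The arithmetic can be reproduced directly (a plain Python sketch of the storage counts):

```python
m, n, k = 200, 320, 20

original = m * n               # 64,000 pixel values
compressed = k * (m + n + 1)   # 20 * 521 = 10,420 values
fraction = compressed / original

print(original, compressed, round(100 * fraction, 1))  # 64000 10420 16.3
```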

19 MATLAB Commands for SVD

>> [U S V] = svd(A)
– Returns all the singular values in S, and the corresponding vectors in U and V.

>> [U S V] = svds(A, k)
– Returns the k largest singular values in S, and the corresponding vectors in U and V.

>> [U S V] = svds(A)
– Returns the 6 largest singular values in S, and the corresponding vectors in U and V.

>> Z = U*S*V'
– Regenerates A, or an approximation to A, depending upon whether or not you used all of the singular values.
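For comparison, NumPy's counterpart of svd is np.linalg.svd; it has no built-in svds, but since the singular values come back in descending order, keeping the k largest is just slicing (an illustrative sketch on a made-up matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 6))

# Full SVD (counterpart of MATLAB's [U S V] = svd(A))
U, s, Vt = np.linalg.svd(A, full_matrices=True)
assert np.allclose(U[:, :len(s)] @ np.diag(s) @ Vt, A)

# Truncated SVD (counterpart of svds(A, k)): keep the k largest
# singular values by slicing, since s is in descending order
k = 3
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert A_k.shape == A.shape
```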

