D. van Alphen – 1
ECE 455 – Lecture 12
Orthogonal Matrices
Singular Value Decomposition (SVD)
SVD for Image Compression

Recall: If vectors a and b have the same orientation (both row vectors or both column vectors), then a'b = b'a = dot(a, b), and dot(a, a) = |a|^2, the squared length of a.
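
A few lines of MATLAB serve as a reminder of that identity (an illustrative check; the vectors a and b are arbitrary, not from the slides):

    a = [1; 2; 3];  b = [4; 5; 6];    % arbitrary column vectors
    [a'*b  dot(a, b)]                 % both equal 32
    [dot(a, a)  norm(a)^2]            % both equal 14: dot(a,a) is the squared length of a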

D. van Alphen – 2
Orthogonal Matrices

Definition: A square matrix Q is orthogonal if its columns are orthonormal (i.e., mutually orthogonal and of unit length).

Let Q be an (n, n) matrix with columns q1, q2, …, qn. Then Q'Q = I_n:
(1) qi'qi = |qi|^2 = 1 (since the columns are unit-length); and
(2) qi'qj = 0 for i ≠ j (since the columns are orthogonal).
The diagonal elements of Q'Q are 1 by note (1); the off-diagonal elements are 0 by note (2).
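
As a quick numerical sanity check (an illustrative sketch, not from the slides; orth() returns a matrix whose columns form an orthonormal basis):

    Q = orth(rand(3));     % a 3-by-3 matrix with orthonormal columns
    disp(Q'*Q)             % prints the 3-by-3 identity, up to round-off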

D. van Alphen – 3
Orthogonal Matrices, continued

So far: Q'Q = I_n. Similarly, QQ' = I_n. Thus, Q^-1 = Q'.

More properties of orthogonal matrices:
– If Q is orthogonal, so is Q'.
– The rows of any orthogonal matrix are also orthonormal.

Example 1: Q = [cos θ  -sin θ;  sin θ  cos θ],  so  Q^-1 = Q' = [cos θ  sin θ;  -sin θ  cos θ].
Note: This particular Q is called a rotation matrix. Multiplying a (2, 1) vector x by Q rotates x by the angle θ.
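
A minimal MATLAB sketch (with an arbitrarily chosen angle) confirming that the inverse of the rotation matrix is its transpose:

    theta = pi/4;                                         % arbitrary angle
    Q = [cos(theta) -sin(theta); sin(theta) cos(theta)];
    max(max(abs(inv(Q) - Q')))                            % ~ 0, so Q^-1 = Q'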

D. van Alphen – 4
Orthogonal Matrices, continued
Rotation Matrix Example

Let θ = 90° in the rotation matrix, so Q = [0 -1; 1 0]; then Qx is the vector x rotated counterclockwise by 90°.
[Figure: the vector x and the rotated vector Qx, 90° apart.]
Note: the length of the vector was not changed.

D. van Alphen – 5
Claim: Orthogonal Matrices Preserve Length

Multiplying x by an orthogonal matrix Q does not change the length:  |Qx| = |x|

Proof:  |Qx|^2 = (Qx)'(Qx) = x'Q'Qx = x'x = |x|^2,  since Q'Q = I (because Q^-1 = Q').

Example: Multiply [x y]' by the "plane rotation" matrix Q:
    z = Q [x y]' = [x cos θ - y sin θ;  x sin θ + y cos θ]
    |z|^2 = (x cos θ - y sin θ)^2 + (x sin θ + y cos θ)^2
          = (x^2 cos^2 θ + y^2 sin^2 θ - 2xy sin θ cos θ) + (x^2 sin^2 θ + y^2 cos^2 θ + 2xy sin θ cos θ)
          = (x^2 + y^2)(sin^2 θ + cos^2 θ) = x^2 + y^2 = |[x y]'|^2
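
A minimal numeric check of the claim (illustrative angle and vector, not from the slides):

    theta = 0.3;  x = [3; -4];                            % arbitrary angle and vector
    Q = [cos(theta) -sin(theta); sin(theta) cos(theta)];
    [norm(Q*x)  norm(x)]                                  % both equal 5, up to round-off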

D. van Alphen – 6
Orthogonal Matrices, Continued
Permutation Matrix Example

Recall that the permutation matrix P_ij was used in Gaussian elimination to exchange (or swap) 2 rows. Example: to find the 3-by-3 permutation matrix that swaps rows 1 and 2 by pre-multiplying a matrix, we started with the identity matrix I3 and swapped rows 1 and 2 (R1 ↔ R2):

    I3 = [1 0 0; 0 1 0; 0 0 1]   →   P12 = [0 1 0; 1 0 0; 0 0 1]

Note that the columns of P12 are unit-length, and orthogonal to each other; hence P12 is an orthogonal matrix.
Claim: All permutation matrices are orthogonal.
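
A short MATLAB sketch (illustrative) that builds P12 and confirms both the row swap and the orthogonality:

    P12 = eye(3);  P12([1 2], :) = P12([2 1], :);   % swap rows 1 and 2 of I3
    B = [1 2; 3 4; 5 6];                            % arbitrary test matrix
    P12*B                                           % rows 1 and 2 of B are exchanged
    P12'*P12                                        % the identity, so P12 is orthogonal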

D. van Alphen – 7
Singular Value Decomposition (SVD)

A way to factor matrices:  A = U Σ V'
– where A is any matrix of size (m, n);
– U and V are orthogonal matrices;
  U contains the "left singular vectors",
  V contains the "right singular vectors"; and
– Σ is a diagonal matrix, containing the "singular values" on its diagonal.

Closely related to Principal Component Analysis.
Mostly used for data compression – particularly for image compression.

D. van Alphen – 8
SVD Format

Suppose that A is a matrix of size (m, n) and rank r. Then the SVD is

    A = U Σ V',   with  U = [u1 … ur … um],

where U is (m, m), Σ is (m, n), V' is (n, n), and Σ carries the r non-zero singular values σ1 ≥ σ2 ≥ … ≥ σr > 0 on its diagonal.
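
A quick size check in MATLAB (an illustrative sketch with an arbitrary matrix):

    A = rand(4, 3);                 % an arbitrary (m, n) = (4, 3) matrix
    [U, S, V] = svd(A);
    size(U), size(S), size(V)       % (4,4), (4,3), (3,3): i.e., (m,m), (m,n), (n,n)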

D. van Alphen – 9
SVD Factorization: Finding Σ and V

Start with:  A = U Σ V'     (1)

Note that U'U = I, since U is orthogonal. Hence, left-multiply both sides of equation (1) by A':

    A'A = (U Σ V')' (U Σ V') = V Σ' U'U Σ V' = V (Σ'Σ) V'

Note that Σ'Σ is a diagonal matrix, with the elements σi^2 on the diagonal.
So the eigenvalues of (A'A) are the σi^2's, and the eigenvectors are the columns of matrix V.
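
This relationship is easy to check numerically (an illustrative sketch; any matrix will do):

    A = [3 1; 1 3; 0 2];                     % arbitrary example matrix
    sqrt(sort(eig(A'*A), 'descend'))         % square roots of the eigenvalues of A'A ...
    svd(A)                                   % ... equal the singular values of A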

D. van Alphen – 10
SVD Example

Let A be the (2, 2) example matrix, for which

    A'A = [5 -3; -3 5]

MATLAB code to find the eigenvalues and eigenvectors of A'A:

    >> [v d] = eig([5 -3; -3 5])

The eigenvalues (on the diagonal of d) are 2 and 8; the corresponding eigenvectors (the columns of v, up to sign) are [1 1]'/sqrt(2) and [1 -1]'/sqrt(2), which become the columns of V. The singular values are the square roots of the eigenvalues: σ1 = sqrt(8) ≈ 2.83 and σ2 = sqrt(2) ≈ 1.41.

D. van Alphen – 11
SVD Factorization: Finding U

Start with:  A = U Σ V'     (1)

Right-multiply both sides of equation (1) by A':

    A A' = (U Σ V')(U Σ V')' = U Σ V'V Σ' U' = U (Σ Σ') U'     (using V'V = I)

As before, Σ Σ' is a diagonal matrix, with the elements σi^2 on the diagonal.
So the eigenvectors of (A A') are the columns of matrix U.
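
And the companion numerical check for U (same illustrative matrix as before; eig and svd may differ in the signs of the eigenvectors):

    A = [3 1; 1 3; 0 2];
    [U, S, V] = svd(A);
    A*A'*U(:,1) - S(1,1)^2*U(:,1)    % ~ 0: u1 is an eigenvector of AA' with eigenvalue σ1^2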

D. van Alphen – 12
SVD Example, continued

Recall the (2, 2) example matrix A, for which

    A A' = [8 0; 0 2]

MATLAB code to find the eigenvalues and eigenvectors of AA':

    >> [U d] = eig([8 0; 0 2])

The eigenvalues are again the σi^2's (8 and 2), and the eigenvectors are the standard basis vectors [1 0]' and [0 1]', which become the columns of U.

D. van Alphen – 13
SVD Example, continued

Recall A = U Σ V', where
– U = I2 (its columns are the eigenvectors of AA' found above);
– Σ = [sqrt(8) 0; 0 sqrt(2)];
– V has columns [1 -1]'/sqrt(2) and [1 1]'/sqrt(2) (the eigenvectors of A'A, up to sign, ordered by decreasing singular value).

Hence, A = U Σ V'.

D. van Alphen – 14
SVD for Data Compression

Recall A = U Σ V', with U = [u1 … ur … um]. Multiplying out the factors term by term gives

    A = σ1 u1 v1' + σ2 u2 v2' + … + σr ur vr'     (2)

If A has rank r, then only r terms are required in the above formula for an exact representation of A.
To approximate A, we use fewer than r terms in equation (2).
– Since the σi's are sorted from largest to smallest in Σ, eliminating the last "few" terms in (2) has only a small effect on the image.
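
A sketch of equation (2) in MATLAB (illustrative matrix; the loop builds the rank-k approximation term by term and compares it with the equivalent truncated-SVD product):

    A = magic(6);  k = 3;                      % arbitrary matrix and number of terms
    [U, S, V] = svd(A);
    Ak = zeros(size(A));
    for i = 1:k
        Ak = Ak + S(i,i) * U(:,i) * V(:,i)';   % add the term sigma_i * u_i * v_i'
    end
    norm(Ak - U(:,1:k)*S(1:k,1:k)*V(:,1:k)')   % ~ 0: the same rank-k approximation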

D. van Alphen – 15
Example: Clown Image (200, 320)

[Figures: the original, uncompressed image (full rank, r = 200), and the compressed image using only the first 20 singular values.]

D. van Alphen – 16
Clown Image Compression: MATLAB Code

    % program clown_svd
    load clown                      % uncompressed image, stored in X
    figure(1); colormap(map);
    size(X), r = rank(X), image(X)
    [U, D, V] = svd(X);             % svd of the clown image
    d = diag(D);                    % creates a vector with the singular values
    figure(2); semilogy(d);         % plot the singular values
    k = 20;                         % # of terms to be used in the approximation
    dd = d(1:k);                    % pick off the 20 largest singular values
    UU = U(:, 1:k); VV = V(:, 1:k);
    XX = UU*diag(dd)*VV';
    figure(3); colormap(map)
    size(XX), image(XX)             % compressed image

D. van Alphen – 17
Singular Values for Clown Image

[Figure: semilog plot of the singular values versus index, for all 200 singular values.]

For this image, the 20th singular value is 4% of the 1st singular value, so in

    A = σ1 u1 v1' + … + σ20 u20 v20' + σ21 u21 v21' + … + σ200 u200 v200'

the terms beyond the 20th are negligible, due to the small singular values σi.

D. van Alphen – 18
Clown Image: How Much Compression?

Original image: 200 x 320 = 64,000 pixel values to store.

Compressed image:  A ≈ σ1 u1 v1' + σ2 u2 v2' + … + σ20 u20 v20'
– Each ui has m = 200 components;
– Each vi has n = 320 components;
– Each σi is a scalar: 1 component.
Total = 20 * (200 + 320 + 1) = 10,420 values to store.

Fraction of storage required for the compressed image: 10,420/64,000 = 16.3%.

General formula for the fraction of storage required: k * (m + n + 1)/(m*n), where k = # of terms included in the approximation.
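
A two-line numeric check of the figures above (a sketch; the values follow directly from the general formula):

    m = 200;  n = 320;  k = 20;
    storage  = k*(m + n + 1)        % 10420 values for the rank-20 approximation
    fraction = storage/(m*n)        % 0.1628, i.e., about 16.3%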

D. van Alphen – 19
MATLAB Commands for SVD

>> [U, S, V] = svd(A)
– Returns all of the singular values in S, and the corresponding singular vectors in U and V.

>> [U, S, V] = svds(A, k)
– Returns the k largest singular values in S, and the corresponding singular vectors in U and V.

>> [U, S, V] = svds(A)
– Returns the 6 largest singular values in S, and the corresponding singular vectors in U and V.

>> Z = U*S*V'
– Regenerates A, or an approximation to A, depending upon whether or not you used all of the singular values.
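
A short usage sketch (illustrative matrix) contrasting the full and truncated reconstructions:

    A = rand(50, 30);
    [U, S, V] = svd(A);         Z  = U*S*V';     % full SVD: Z reproduces A
    [Uk, Sk, Vk] = svds(A, 5);  Zk = Uk*Sk*Vk';  % rank-5 approximation to A
    [norm(A - Z)  norm(A - Zk)]                  % ~ 0 versus a larger approximation error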