SVD (Singular Value Decomposition) and Its Applications

SVD (Singular Value Decomposition) and Its Applications. Joon Jae Lee, 2006.01.10

The plan today: Singular Value Decomposition — basic intuition, formal definition, applications.

LCD Defect Detection. Defect types of TFT-LCD: point, line, scratch, region.

Shading. Scanners give us raw point cloud data. How do we compute normals to shade the surface?

Hole filling

Eigenfaces. The same principal component analysis can be applied to images.

Video Copy Detection. Two cases: the same image content from different sources (AVI and MPEG), and the same video source with different image content; face images are compared using hue histograms.

3D animations. Connectivity is usually constant (at least over large segments of the animation), but the geometry changes in each frame → a vast amount of data, a huge file size! Example: 13 seconds, 3000 vertices/frame, 26 MB.

Geometric analysis of linear transformations. We want to know what a linear transformation A does. We need some simple and “comprehensible” representation of the matrix of A. Let’s look at what A does to some vectors. Since A(αv) = αA(v), it’s enough to look at vectors v of unit length.

The geometry of linear transformations. A linear (non-singular) transform A always takes hyper-spheres to hyper-ellipses.

The geometry of linear transformations. Thus, one good way to understand what A does is to find which vectors are mapped to the “main axes” of the ellipsoid.

Geometric analysis of linear transformations. If we are lucky: A = VΛV^T with V orthogonal (true if A is symmetric). The eigenvectors of A are the axes of the ellipse.

Symmetric matrix: eigendecomposition. In this case A is just a scaling matrix. The eigendecomposition of A tells us which orthogonal axes it scales, and by how much (the eigenvalues λ1, λ2).

General linear transformations: SVD. In general, A will also contain rotations, not just scales.

General linear transformations: SVD. The unit circle is mapped to an ellipse: a rotation (orthonormal V), a scaling by σ1, σ2, then another rotation (orthonormal U).

SVD more formally. SVD exists for any matrix. Formal definition: for square matrices A ∈ R^{n×n}, there exist orthogonal matrices U, V ∈ R^{n×n} and a diagonal matrix Σ, such that all the diagonal values σi of Σ are non-negative and A = UΣV^T.

SVD more formally. The diagonal values of Σ (σ1, …, σn) are called the singular values; by convention they are sorted: σ1 ≥ σ2 ≥ … ≥ σn. The columns of U (u1, …, un) are called the left singular vectors; they are the axes of the ellipsoid. The columns of V (v1, …, vn) are called the right singular vectors; they are the preimages of the axes of the ellipsoid.
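
A quick sketch of this definition in NumPy, using an arbitrary 2×2 example matrix of my own; np.linalg.svd returns the singular values already sorted in descending order.

    import numpy as np

    # Arbitrary example matrix; verify that A = U diag(sigma) V^T.
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    U, sigma, Vt = np.linalg.svd(A)

    print("singular values:", sigma)                # sorted: sigma[0] >= sigma[1] >= 0
    print(np.allclose(A, U @ np.diag(sigma) @ Vt))  # True: the decomposition reconstructs A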

SVD is the “workhorse” of linear algebra. There are numerical algorithms to compute the SVD. Once you have it, you have many things: the matrix inverse (and hence solutions of square linear systems), the numerical rank of a matrix, least-squares solutions, PCA, and many more…

Matrix inverse and solving linear systems. Since A = UΣV^T with U and V orthogonal, the inverse is A^{-1} = VΣ^{-1}U^T, where Σ^{-1} simply holds 1/σi on the diagonal. So, to solve Ax = b, compute x = VΣ^{-1}U^T b.
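
A minimal sketch of this solve on a made-up 2×2 system (not from the slides):

    import numpy as np

    # Solve A x = b through the SVD: x = V Sigma^{-1} U^T b.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    b = np.array([1.0, 2.0])

    U, sigma, Vt = np.linalg.svd(A)
    x = Vt.T @ ((U.T @ b) / sigma)   # rotate by U^T, divide by sigma_i, rotate back with V

    print(x)
    print(np.allclose(A @ x, b))     # True: the SVD-based solve reproduces b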

Matrix rank. The rank of A is the number of non-zero singular values.

Numerical rank. If there are very small singular values, then A is close to being singular. We can set a threshold t, so that numeric_rank(A) = #{ i | σi > t }. If rank(A) < n then A is singular: it maps the entire space R^n onto some subspace, like a plane (so A is some sort of projection).
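
A small sketch of the numerical-rank test; the threshold value is an arbitrary choice for illustration.

    import numpy as np

    def numeric_rank(A, t=1e-10):
        """Count singular values above the threshold t."""
        sigma = np.linalg.svd(A, compute_uv=False)
        return int(np.sum(sigma > t))

    # Nearly rank-deficient example: the second row is almost twice the first.
    A = np.array([[1.0, 2.0],
                  [2.0, 4.0 + 1e-13]])
    print(numeric_rank(A))   # 1: the tiny second singular value falls below t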

Solving least-squares systems. We tried to solve Ax = b when A was rectangular (more equations than unknowns), seeking a solution in the least-squares sense: minimize ‖Ax − b‖².

Solving least-squares systems. We proved that when A is full-rank, x = (A^T A)^{-1} A^T b (the normal equations). Note that A^T A = V Σ^T Σ V^T, where Σ^T Σ is diagonal with entries σ1², …, σn².

Solving least-squares systems. Substituting A = UΣV^T into the normal equations gives x = V (Σ^T Σ)^{-1} Σ^T U^T b; the diagonal factors combine as (1/σi²)·σi = 1/σi, so x = V Σ^+ U^T b, where Σ^+ holds 1/σ1, …, 1/σn on its diagonal.

Pseudoinverse. The matrix we found, A^+ = V Σ^+ U^T, is called the pseudoinverse of A (defined using the reduced SVD). If some of the σi are zero, put zero instead of 1/σi.

Pseudoinverse. The pseudoinverse A^+ exists for any matrix A. Its properties: if A is m×n then A^+ is n×m, and it acts a little like a real inverse: A A^+ A = A and A^+ A A^+ = A^+.

Solving least-squares systems. When A is not full-rank, A^T A is singular and there are multiple solutions to the normal equations, and thus multiple least-squares solutions. The SVD approach still works: in this case it finds the minimal-norm solution x = A^+ b.
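
A sketch of the SVD/pseudoinverse route for a small over-determined system (the data is made up); it matches NumPy's built-in pinv.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0],
                  [1.0, 4.0]])
    b = np.array([1.1, 1.9, 3.2, 3.9])

    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)    # reduced SVD
    sigma_plus = np.where(sigma > 1e-12, 1.0 / sigma, 0.0)  # 1/sigma_i, or 0 if sigma_i ~ 0
    x = Vt.T @ (sigma_plus * (U.T @ b))                     # x = V Sigma^+ U^T b

    print(x)
    print(np.allclose(x, np.linalg.pinv(A) @ b))            # True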

PCA – the general idea. PCA finds an orthogonal basis that best represents a given data set: the sum of squared distances from the x′ axis is minimized.

PCA – the general idea. Projecting onto the first principal direction v1 gives a line segment that approximates the original data set: the projected data set approximates the original data set.

PCA – the general idea. PCA finds an orthogonal basis that best represents a given data set. For a 3D point set (given in the standard basis), PCA finds the best approximating plane (again, in terms of squared distances).

For approximation. In general dimension d, the eigenvalues are sorted in descending order: λ1 ≥ λ2 ≥ … ≥ λd, and the eigenvectors are sorted accordingly. To get an approximation of dimension d′ < d, we take the first d′ eigenvectors and look at the subspace they span (d′ = 1 is a line, d′ = 2 is a plane, …).

For approximation. To get an approximating set, we project the original data points onto the chosen subspace. Original: x_i = m + α1 v1 + α2 v2 + … + αd′ vd′ + … + αd vd. Projection: x_i′ = m + α1 v1 + α2 v2 + … + αd′ vd′ + 0·vd′+1 + … + 0·vd.
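
A sketch of this projection for a synthetic 3D point cloud (PCA computed via the SVD of the centered data; all numbers are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3)) @ np.diag([5.0, 1.0, 0.2])   # anisotropic point set

    m = X.mean(axis=0)                       # centroid
    Xc = X - m                               # centered data
    _, sigma, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt.T                                 # columns = principal directions v1, v2, v3

    d_prime = 2                              # keep the best-fitting plane
    alphas = Xc @ V[:, :d_prime]             # coefficients alpha_i in the PCA basis
    X_proj = m + alphas @ V[:, :d_prime].T   # projected (approximating) point set

    print("spread per direction:", sigma)    # much smaller along the third direction
    print("max projection error:", np.abs(X - X_proj).max())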

Principal components. Eigenvectors that correspond to large eigenvalues are the directions in which the data has strong components (= large variance). If the eigenvalues are all more or less the same, there is no preferred direction. Note: the eigenvalues are always non-negative. Think why…

Principal components. If the scatter matrix S is (nearly) a multiple of the identity, there is no preferred direction and any vector is an eigenvector. If S is strongly anisotropic, there is a clear preferred direction, and λ2 is close to zero, much smaller than λ1.

Application: finding a tight bounding box. An axis-aligned bounding box agrees with the coordinate axes (minX, maxX, minY, maxY).

Application: finding a tight bounding box. Oriented bounding box: we find better axes x′, y′!

Application: finding a tight bounding box. Oriented bounding box: we find better axes!
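
A sketch of an oriented bounding box for a synthetic 2D point set: the PCA axes serve as the box axes, and the extents are measured in that frame.

    import numpy as np

    rng = np.random.default_rng(1)
    theta = np.deg2rad(30.0)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    points = rng.normal(size=(500, 2)) @ np.diag([4.0, 0.5]) @ R.T   # tilted elongated cloud

    m = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - m, full_matrices=False)
    local = (points - m) @ Vt.T              # coordinates in the PCA (x', y') frame
    lo, hi = local.min(axis=0), local.max(axis=0)

    # The OBB is centered at m, spanned by the rows of Vt, with extents hi - lo.
    print("OBB axes (rows):\n", Vt)
    print("extent along each axis:", hi - lo)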

Scanned meshes

Point clouds. Scanners give us raw point cloud data. How do we compute normals to shade the surface?

Point clouds. Local PCA: take the third eigenvector (the direction of least variance) as the normal.
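
A sketch of this normal estimation for one point of a synthetic, nearly planar cloud (brute-force nearest neighbors; k is an arbitrary choice):

    import numpy as np

    def estimate_normal(points, query_index, k=10):
        """Local PCA over the k nearest neighbors; return the least-variance direction."""
        p = points[query_index]
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        centered = nbrs - nbrs.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        return Vt[-1]                          # third right singular vector = normal estimate

    rng = np.random.default_rng(2)
    cloud = np.column_stack([rng.uniform(-1, 1, 200),
                             rng.uniform(-1, 1, 200),
                             0.01 * rng.normal(size=200)])   # points near the z = 0 plane
    print(estimate_normal(cloud, 0))           # should be close to (0, 0, ±1)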

Eigenfaces. The same principal component analysis can be applied to images.

Eigenfaces. Each image is a vector in R^{250·300}. We want to find the principal axes – the vectors that best represent the input database of images.

Reconstruction with a few vectors Represent each image by the first few (n) principal components

Face recognition. Given a new image of a face, w ∈ R^{250·300}, represent w using the first n PCA vectors. Now find the image in the database whose representation in the PCA basis is closest: the angle between w and w′ is the smallest.
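
A compact sketch of the whole eigenfaces pipeline on stand-in data (random vectors in place of a real face database; the image size mirrors the slides):

    import numpy as np

    rng = np.random.default_rng(3)
    n_images, n_pixels, n_components = 50, 250 * 300, 20
    faces = rng.normal(size=(n_images, n_pixels))          # fake face database

    mean_face = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
    eigenfaces = Vt[:n_components]                         # first n principal axes

    def encode(image):
        return eigenfaces @ (image - mean_face)            # coefficients in the PCA basis

    db_codes = np.array([encode(f) for f in faces])
    query = faces[7] + 0.1 * rng.normal(size=n_pixels)     # noisy copy of face #7
    best = np.argmin(np.linalg.norm(db_codes - encode(query), axis=1))
    print("closest database face:", best)                  # expected: 7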

SVD for animation compression Chicken animation

3D animations Each frame is a 3D model (mesh) Connectivity – mesh faces

3D animations. Each frame is a 3D model (mesh). Connectivity: the mesh faces.

3D animations. Connectivity is usually constant (at least over large segments of the animation), but the geometry changes in each frame → a vast amount of data, a huge file size! Example: 13 seconds, 3000 vertices/frame, 26 MB.

Animation compression by dimensionality reduction. The geometry of each frame is a vector in R^{3N} (N = #vertices); stacking the frames gives a 3N × f matrix (f = #frames).

Animation compression by dimensionality reduction. Find a few vectors of R^{3N} that will best represent our frame vectors! Reduced SVD of the animation matrix: A = UΣV^T, with U of size 3N × f and Σ, V^T of size f × f.

Animation compression by dimensionality reduction. The first principal components u1, u2, u3, … are the important ones.

Animation compression by dimensionality reduction. Approximate each frame by a linear combination of the first principal components: frame ≈ α1 u1 + α2 u2 + α3 u3. The more components we use, the better the approximation; usually, the number of components needed is much smaller than f.

Animation compression by dimensionality reduction. Compressed representation: the chosen principal component vectors ui plus the coefficients αi for each frame. (Demo: the animation with only 2 principal components versus with 20 out of 400 principal components.)
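
A sketch of the compression mechanics on a synthetic low-rank animation matrix (the sizes and the number of kept components are illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    N, f, k = 1000, 400, 20                        # vertices, frames, kept components
    A = rng.normal(size=(3 * N, 5)) @ rng.normal(size=(5, f))   # smooth, low-rank motion
    A += 0.01 * rng.normal(size=A.shape)           # plus a little noise

    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    basis = U[:, :k]                               # chosen principal component vectors u_i
    coeffs = np.diag(sigma[:k]) @ Vt[:k]           # per-frame coefficients alpha_i
    A_approx = basis @ coeffs                      # reconstructed animation

    print("storage ratio:", (basis.size + coeffs.size) / A.size)
    print("relative error:", np.linalg.norm(A - A_approx) / np.linalg.norm(A))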

LCD Defect Detection. Defect types of TFT-LCD: point, line, scratch, region.

Pattern Elimination Using SVD

Pattern Elimination Using SVD Cutting the test images

Pattern Elimination Using SVD. Determining which singular values to use for the image reconstruction.

Pattern Elimination Using SVD Defect detection using SVD
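
A toy sketch of the pattern-elimination idea (a synthetic periodic image, not real TFT-LCD data): reconstruct the regular background from the largest singular value, subtract it, and the defect dominates the residual.

    import numpy as np

    x = np.linspace(0, 8 * np.pi, 256)
    background = np.outer(np.sin(x), np.cos(x))    # periodic, rank-1 background pattern
    image = background.copy()
    image[100:104, 120:160] += 1.0                 # synthetic line-like defect

    U, sigma, Vt = np.linalg.svd(image, full_matrices=False)
    pattern = sigma[0] * np.outer(U[:, 0], Vt[0])  # rank-1 reconstruction ~ background
    residual = image - pattern                     # pattern eliminated; defect remains

    i, j = np.unravel_index(np.abs(residual).argmax(), residual.shape)
    print("strongest residual at row:", i)         # should land on a defect row (100-103)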