PCA + SVD.

Motivation – Shape Matching. There are tagged feature points in both sets that are matched by the user. What is the best transformation that aligns the unicorn with the lion?

Motivation – Shape Matching. Regard the shapes as sets of points and try to “match” these sets using a linear transformation. The result above is not a good alignment.

Motivation – Shape Matching. Alignment by translation or rotation: the structure stays “rigid” under these two transformations, which are called rigid-body or isometric (distance-preserving) transformations. Mathematically, they are represented as matrix/vector operations. (Figure: before and after alignment.)

Transformation Math. Translation is a vector addition: p′ = p + t. Rotation is a matrix product: p′ = R p.
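
To make the two operations concrete, here is a minimal NumPy sketch (the point, translation vector, and angle below are made-up values for illustration): translation adds a vector to a point, and rotation multiplies the point by a rotation matrix.

import numpy as np

def rotation_2d(theta):
    """2D rotation matrix for an angle theta (in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

p = np.array([1.0, 0.0])      # a 2D point
t = np.array([2.0, 3.0])      # translation vector
R = rotation_2d(np.pi / 2)    # rotation by 90 degrees

p_translated = p + t          # translation: vector addition
p_rotated = R @ p             # rotation: matrix product
print(p_translated)           # [3. 3.]
print(p_rotated)              # approximately [0. 1.]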

Transformation Math. Eigenvectors and eigenvalues: let M be a square matrix; v is an eigenvector and λ is an eigenvalue if M v = λ v. If M represents a rotation (i.e., is orthonormal), the rotation axis is an eigenvector whose eigenvalue is 1. There are at most m distinct eigenvalues for an m-by-m matrix. Any scalar multiple of an eigenvector is also an eigenvector (with the same eigenvalue). (Figure: eigenvector of the rotation transformation.)
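
As a quick numerical check of the claim about rotations (a sketch with an assumed example matrix, not part of the slides), a rotation about the z-axis has the z-axis as an eigenvector with eigenvalue 1:

import numpy as np

theta = np.radians(30)
c, s = np.cos(theta), np.sin(theta)
M = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])          # rotation about the z-axis

vals, vecs = np.linalg.eig(M)
axis = vecs[:, np.isclose(vals, 1)].real.ravel()
print(vals)    # one eigenvalue equals 1; the other two are complex (e^{+-i*theta})
print(axis)    # proportional to [0, 0, 1], i.e., the rotation axis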

Alignment. Input: two models represented as point sets, the source and the target. Output: locations of the translated and rotated source points. (Figure: source and target point sets.)

Alignment. Method 1: principal component analysis (PCA) – aligning principal directions. Method 2: singular value decomposition (SVD) – optimal alignment given prior knowledge of correspondences. Method 3: iterative closest point (ICP) – an iterative SVD algorithm that computes correspondences as it goes.

Method 1: PCA. Compute a shape-aware coordinate system for each model. Origin: the centroid of all points. Axes: the directions in which the model varies most or least. Transform the source to align its origin/axes with those of the target.

Method 1: PCA. Computing axes with principal component analysis (PCA): consider a set of points p1, …, pn with centroid location c. Construct the matrix P whose i-th column is the vector pi − c (2 by n in 2D, 3 by n in 3D). Build the covariance matrix M = P Pᵀ (a 2-by-2 matrix in 2D, a 3-by-3 matrix in 3D).
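
A minimal sketch of this construction in NumPy (variable names are my own; the random points stand in for a model):

import numpy as np

points = np.random.rand(100, 2)          # n points as rows (2D case)
c = points.mean(axis=0)                  # centroid
P = (points - c).T                       # 2-by-n matrix whose i-th column is p_i - c

M = P @ P.T                              # covariance matrix (2-by-2 here, 3-by-3 in 3D)
eigvals, eigvecs = np.linalg.eigh(M)     # symmetric matrix, so eigh is appropriate
major_axis = eigvecs[:, -1]              # direction of greatest variation
minor_axis = eigvecs[:, 0]               # direction of least variation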

Method 1: PCA. Computing axes (continued): the eigenvectors of the covariance matrix represent the principal directions of shape variation (2 in 2D; 3 in 3D). The eigenvectors are orthogonal and carry only directions, not magnitudes. The eigenvalues indicate the amount of variation along each eigenvector: the eigenvector with the largest (smallest) eigenvalue is the direction in which the model shape varies the most (least). (Figure: eigenvectors with the smallest and largest eigenvalues.)

Method 1: PCA. Why do we look at the singular vectors? (Derivation on the board.) Input: 2-dimensional points. 1st (right) singular vector: the direction of maximal variance. 2nd (right) singular vector: the direction of maximal variance after removing the projection of the data along the first singular vector.

Method 1: PCA. Covariance matrix. Goal: reduce the dimensionality while preserving the “information in the data”, i.e., the variability in the data. We measure variability using the covariance matrix. Sample covariance of variables X and Y: Σᵢ (xᵢ − μ_X)ᵀ(yᵢ − μ_Y). Given a matrix A, remove the mean of each column from the column vectors to get the centered matrix C. The matrix V = CᵀC is the covariance matrix of the row vectors of A.

Method 1: PCA. We will project the rows of matrix A onto a new set of attributes (dimensions) such that: the attributes have zero covariance with each other (they are orthogonal); the first attribute captures the most variance in the data; and each subsequent attribute captures the most remaining variance while being orthogonal to the existing attributes. For the centered matrix C, the variance of the rows of C when projected onto a unit vector x is given by σ² = ‖Cx‖². The first right singular vector of C maximizes σ²!
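
The last claim can be checked numerically; below is a small sketch (with made-up anisotropic data) comparing the variance captured by the first right singular vector against many random unit directions:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])  # anisotropic data
C = A - A.mean(axis=0)                   # centered data matrix (rows are points)

_, _, Vt = np.linalg.svd(C, full_matrices=False)
v1 = Vt[0]                               # first right singular vector

best_random = 0.0
for _ in range(1000):                    # variance along random unit directions
    x = rng.normal(size=2)
    x /= np.linalg.norm(x)
    best_random = max(best_random, np.sum((C @ x) ** 2))

print(np.sum((C @ v1) ** 2) >= best_random)   # True: v1 maximizes ||Cx||^2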

Method 1: PCA. Singular values: σ1 measures how much of the data variance is explained by the first singular vector; σ2 measures how much of the data variance is explained by the second singular vector.

Method 1: PCA. Singular values tell us something about the variance. The variance in the direction of the k-th principal component is given by the corresponding squared singular value σk². Singular values can be used to estimate how many components to keep. Rule of thumb: keep enough components to explain at least 85% of the variation, i.e., choose the smallest k with (σ1² + … + σk²) / (σ1² + … + σn²) ≥ 0.85.
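
A sketch of this rule of thumb (the helper name and the synthetic data are my own): compute the cumulative fraction of the squared singular values and keep the smallest k that reaches the threshold.

import numpy as np

def components_to_keep(C, threshold=0.85):
    """Smallest k whose first k squared singular values explain >= threshold."""
    s = np.linalg.svd(C, compute_uv=False)        # singular values, descending
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(explained, threshold) + 1)

rng = np.random.default_rng(1)
C = rng.normal(size=(100, 5)) * np.array([5.0, 2.0, 1.0, 0.5, 0.1])
C = C - C.mean(axis=0)                            # center the columns
print(components_to_keep(C))                      # typically 2 for this data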

Method 1: PCA. Another property of PCA: the chosen vectors minimize the sum of squared differences between the data vectors and their low-dimensional projections.

Method 1: PCA. Limitations: the centroid and axes are affected by noise. (Figure: PCA result on noisy data.)

Method 1: PCA. Limitations: axes can be unreliable for circular objects, since the eigenvalues become similar and the eigenvectors become unstable. (Figure: PCA result before and after rotation by a small angle.)

Method 2: SVD. Optimal alignment between corresponding points, assuming that for each source point we know where the corresponding target point is.

Method 2: SVD. Singular value decomposition: A = U Σ Vᵀ, with shapes [n×m] = [n×r][r×r][r×m], where r is the rank of matrix A, Σ is a diagonal matrix of singular values, and U, V are orthogonal (orthonormal-column) matrices.
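
A small NumPy sketch of the factorization and its shapes (the matrix is random; with full_matrices=False, r here is min(n, m), and trailing singular values are simply zero when the true rank is lower):

import numpy as np

A = np.random.rand(6, 4)                          # an n-by-m matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)                                # r-by-r diagonal matrix

print(U.shape, Sigma.shape, Vt.shape)             # (6, 4) (4, 4) (4, 4)
print(np.allclose(A, U @ Sigma @ Vt))             # True: A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(4)))            # columns of U are orthonormal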

Method 2: SVD. Formulating the problem: source points p1, …, pn with centroid location cS; target points q1, …, qn with centroid location cT; qi is the corresponding point of pi. After centroid alignment and rotation by some R, a transformed source point is located at pi′ = R(pi − cS) + cT. We wish to find the R that minimizes the sum of squared pairwise distances E = Σᵢ ‖R(pi − cS) + cT − qi‖². (Solving the problem – on the board.)

Method 2: SVD. SVD-based alignment, summary: form the cross-covariance matrix H = Σᵢ (pi − cS)(qi − cT)ᵀ; compute its SVD, H = U Σ Vᵀ; the optimal rotation matrix is R = V Uᵀ; then translate and rotate the source, pi′ = R(pi − cS) + cT.
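
A sketch of this summary as code (the function name and the test data are my own; the determinant check, which guards against an accidental reflection, is a common extra safeguard that the slide does not spell out):

import numpy as np

def svd_align(source, target):
    """Rigidly align source onto target given 1-to-1 correspondences.

    source, target: (n, d) arrays where row i of source corresponds to
    row i of target. Returns the transformed source points.
    """
    c_s = source.mean(axis=0)
    c_t = target.mean(axis=0)
    H = (source - c_s).T @ (target - c_t)         # d-by-d cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                # optimal rotation R = V U^T
    if np.linalg.det(R) < 0:                      # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return (source - c_s) @ R.T + c_t             # translate, rotate, translate

# Usage: recover a known rotation and translation exactly.
rng = np.random.default_rng(2)
src = rng.normal(size=(50, 2))
ang = np.radians(40)
R_true = np.array([[np.cos(ang), -np.sin(ang)],
                   [np.sin(ang),  np.cos(ang)]])
tgt = src @ R_true.T + np.array([1.0, -2.0])
print(np.allclose(svd_align(src, tgt), tgt))      # True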

Method 2: SVD. Advantage over PCA: more stable, as long as the correspondences are correct.

Method 2: SVD. Limitation: requires accurate correspondences, which are usually not available.

Method 3: ICP. The idea of iterative closest point (ICP): use PCA alignment to obtain an initial guess of correspondences, then iteratively improve the correspondences by repeated SVD alignment. The algorithm (a sketch in code follows below): 1. Transform the source by PCA-based alignment. 2. For each transformed source point, assign the closest target point as its corresponding point, then align source and target by SVD (not all target points need to be used). 3. Repeat step (2) until a termination criterion is met.
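
A compact sketch of this loop (my own function names; it assumes the svd_align routine from the earlier sketch is in scope, uses a brute-force nearest-neighbor search, and omits the PCA-based initialization for brevity):

import numpy as np

def icp(source, target, max_iters=50, tol=1e-6):
    """Iteratively align source to target without known correspondences."""
    current = source.copy()
    prev_rmsd = np.inf
    for _ in range(max_iters):
        # Step 2: assign each source point the closest target point.
        dists = np.linalg.norm(current[:, None, :] - target[None, :, :], axis=2)
        closest = target[np.argmin(dists, axis=1)]
        current = svd_align(current, closest)     # align by the SVD method
        rmsd = np.sqrt(np.mean(np.sum((current - closest) ** 2, axis=1)))
        if prev_rmsd - rmsd < tol:                # step 3: small improvement -> stop
            break
        prev_rmsd = rmsd
    return current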

ICP Algorithm. (Figure: alignment after PCA, after 1 iteration, and after 10 iterations.)

ICP Algorithm. Termination criteria: a user-given maximum number of iterations is reached, or the improvement of the fit is small. The root mean squared distance (RMSD), defined as RMSD = sqrt((1/n) Σᵢ ‖pi′ − qi‖²), captures the average deviation over all corresponding pairs; the iteration stops if the difference in RMSD before and after an iteration falls below a user-given threshold.
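
For reference, a one-function sketch of the RMSD measure (my own names; P_aligned and Q are the transformed source points and their corresponding target points as (n, d) arrays):

import numpy as np

def rmsd(P_aligned, Q):
    """Root mean squared distance over all corresponding point pairs."""
    return np.sqrt(np.mean(np.sum((P_aligned - Q) ** 2, axis=1)))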

More Examples. (Figure: alignment results after PCA and after ICP.)