Procrustes analysis: Purpose of Procrustes analysis. Algorithm. Various modifications.

Purpose of Procrustes analysis

There are many situations in which different techniques produce different configurations for the same data. For example, metric and non-metric scaling may generate different configurations, and even with metric scaling, different proximity (dissimilarity) matrices can produce different configurations. Since these techniques are applied to the same multivariate observations, each observation in one configuration corresponds to exactly one observation in the other. Moreover, most of these techniques produce configurations that are defined only up to rotation. Scores from factor analysis can also be considered as one possible configuration.

There are other situations where a comparison of configurations is needed. For example, in macromolecular biology the three-dimensional structures of different proteins are derived using experimental techniques. An interesting question is whether two proteins are similar and, if they are, how similar. To answer it, the configurations of the two protein structures must be matched. All these questions can be addressed with Procrustes analysis.

Suppose we have two configurations X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n), where each x_i and y_i is a vector in p-dimensional space. We want to find an orthogonal matrix A and a vector b that minimise

M = Σ_i ||x_i − A y_i − b||².
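As a rough illustration, here is a minimal R sketch of this objective function; the toy configurations, dimensions and variable names are invented for the example:

# Two example configurations: n matched points in p dimensions,
# row i of X and row i of Y are the paired observations x_i and y_i.
set.seed(1)
n <- 10; p <- 3
X <- matrix(rnorm(n * p), n, p)
Y <- matrix(rnorm(n * p), n, p)

# Procrustes objective: sum of squared distances between x_i and A y_i + b
procrustes_M <- function(A, b, X, Y) {
  Z <- Y %*% t(A) + matrix(b, nrow(Y), length(b), byrow = TRUE)
  sum((X - Z)^2)
}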

Procrustes analysis: translation vector and rotation matrix

It can be shown that finding the translation (b) and the rotation matrix (A) can be treated separately. The translation is found easily once each configuration has been centred. Suppose the rotation is known and let z_i = A y_i + b. Then we can write

M = Σ_i ||x_i − z_i||² = Σ_i ||(x_i − x̄) − (z_i − z̄)||² + n ||x̄ − z̄||²,

which is minimised when the centroids of the x and z configurations coincide. This is achieved by subtracting from the x_i and y_i their respective centroids.

The remaining problem is finding the orthogonal matrix (the rotation, possibly combined with a reflection). Writing the centred configurations as n×p matrices X and Y, we can write

M = Σ_i ||x_i − A y_i||² = tr(XᵀX) + tr(YᵀY) − 2 tr(A YᵀX),

where we have used the fact that A is orthogonal: AᵀA = A Aᵀ = I. So we need the constrained maximisation of tr(A YᵀX) subject to AᵀA = I, which can be done with the technique of Lagrange multipliers.
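In R the centring step and the matrix YᵀX needed for the maximisation can be computed as follows (a sketch continuing the toy example above):

# Centre both configurations so their centroids sit at the origin;
# only the orthogonal matrix A then remains to be found.
Xc <- scale(X, center = TRUE, scale = FALSE)
Yc <- scale(Y, center = TRUE, scale = FALSE)

# The p x p matrix Y^T X whose SVD is taken on the next slide
C <- t(Yc) %*% Xc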

Rotation matrix using SVD

Let us write the symmetric matrix of Lagrange multipliers for the constraint AᵀA = I as ½Λ. Then we want to maximise

F = tr(A YᵀX) + ½ tr(Λ(I − A Aᵀ)).

Taking derivatives of this expression with respect to the matrix A and equating them to 0 gives

XᵀY = Λ A.

Here we used the facts that ∂ tr(A B)/∂A = Bᵀ and ∂ tr(Λ A Aᵀ)/∂A = 2 Λ A, remembering that the matrix of constraints Λ is symmetric. These are the linear equations needed to find the required orthogonal matrix. Let us use the SVD of YᵀX:

YᵀX = U D Vᵀ,

where V and U are p×p orthogonal matrices and D is the diagonal matrix of singular values.
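In R the SVD in question is a single call to svd(); continuing the sketch:

# SVD of Y^T X: C = U D V^T
s <- svd(C)
U <- s$u   # p x p orthogonal matrix
V <- s$v   # p x p orthogonal matrix
d <- s$d   # singular values, the diagonal of D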

Rotation matrix and SVD

If we use the fact that A is orthogonal, then from XᵀY = Λ A we get

Λ = (XᵀY YᵀX)^{1/2} = V D Vᵀ and A = Λ⁻¹ XᵀY = V Uᵀ.

This gives the solution for the rotation (orthogonal) matrix. Now we can calculate the least-squares difference between the configurations:

M = tr(XᵀX) + tr(YᵀY) − 2 tr(D).

Thus we have expressions for both the rotation matrix and the difference between the configurations after matching. It is interesting to note that to find the difference between the configurations it is not necessary to rotate them. The expression can also be written as

M = tr(XᵀX) + tr(YᵀY) − 2 tr((XᵀY YᵀX)^{1/2}),

since tr(D) equals the trace of the symmetric square root of XᵀY YᵀX. This shows that it is not even necessary to carry out the SVD to find the difference between the configurations: the matrix square root can be obtained from the eigendecomposition of the symmetric matrix XᵀY YᵀX.
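Putting the pieces together, a hedged R sketch of the resulting estimates (variable names continue the toy example; the last lines illustrate that the difference can be obtained without the SVD):

# Rotation matrix and translation from the formulas above
A <- V %*% t(U)                                   # A = V U^T
b <- colMeans(X) - as.vector(A %*% colMeans(Y))   # translation vector

# Difference between configurations after matching
M_min <- sum(Xc^2) + sum(Yc^2) - 2 * sum(d)

# Same quantity via the symmetric square root of X^T Y Y^T X,
# i.e. without forming the SVD explicitly
ev    <- eigen(t(Xc) %*% Yc %*% t(Yc) %*% Xc, symmetric = TRUE)$values
M_alt <- sum(Xc^2) + sum(Yc^2) - 2 * sum(sqrt(pmax(ev, 0)))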

Some modifications

There are some situations where problems may occur:

1) The dimensions of the two configurations may differ. There are two ways of handling this. The first is to pad the lower-dimensional (k-dimensional) configuration with zeros to make it p-dimensional; this assumes that the first k dimensions coincide, i.e. that the k-dimensional configuration lies in a k-dimensional subspace of the p-dimensional space. The second is to collapse the higher-dimensional configuration to the lower dimension, i.e. to project the p-dimensional configuration onto a k-dimensional space.

2) The scales of the configurations may differ. In this case we can add a scale factor c to the function we want to minimise:

M = Σ_i ||x_i − c A y_i − b||².

If the orthogonal matrix is found as before, the scale factor is

c = tr(D) / tr(YᵀY).

As a result, M is no longer symmetric with respect to X and Y.

3) Sometimes it is necessary to weight some variables down and others up. In this case Procrustes analysis can be performed with weights, minimising the function

M = Σ_i (x_i − A y_i − b)ᵀ W (x_i − A y_i − b).

This modification can be taken into account; the analysis becomes simple when the weight matrix W is diagonal.
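For the scaled variant, a short R sketch under the same toy setup (the scaled residual follows by substituting the optimal c back into M; the weighted case is not shown):

# Isotropic scale factor applied to the rotated Y, using d from the unscaled fit
c_hat <- sum(d) / sum(Yc^2)                   # c = tr(D) / tr(Y^T Y)

# Residual after matching with rotation, translation and scaling
M_scaled <- sum(Xc^2) - sum(d)^2 / sum(Yc^2)

# Note the asymmetry: matching Y to X and X to Y now give different residuals.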

References

1) Krzanowski, W.J. and Marriott, F.H.C. (1994). Multivariate Analysis. Kendall's Library of Statistics.
2) Mardia, K.V., Kent, J.T. and Bibby, J.M. (2003). Multivariate Analysis.