Procrustes analysis: purpose of Procrustes analysis, algorithm, R code, various modifications.

Purpose of Procrustes analysis

In general, different dimension-reduction techniques produce slightly different configurations. For example, metric and non-metric scaling applied to the same data may generate different configurations, and metric scaling with different proximity matrices can also produce different configurations. Moreover, most of these techniques produce configurations that are defined only up to rotation. Since the techniques are applied to the same multivariate observations, each point in one configuration corresponds to exactly one point in the other. In these cases it is interesting to compare the configurations with each other and, if the original data matrix is available, with the original data as well. There are other situations where comparison of configurations is needed. For example, in macromolecular biology three-dimensional structures of different proteins are derived, and an interesting question is whether two proteins are similar and, if so, how similar they are. That is the problem of configuration matching. All these questions can be addressed using Procrustes analysis. Procrustes analysis finds the best match between two configurations: the rotation matrix and translation vector needed for the match, and the distance between the configurations after matching.

Procrustes analysis: problem

Suppose we have two configurations (data matrices) X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n), where the x-s and y-s are vectors in p-dimensional space. We want to find an orthogonal matrix A and a vector b so that

    M^2 = sum_i ||x_i - A y_i - b||^2  -->  min

It can be shown that finding the translation (b) and the rotation matrix (A) can be considered separately. The translation can easily be found if we centre each configuration. Indeed, suppose the rotation is already known and denote z_i = A y_i + b. Then, with xbar and ybar the centroids of the two configurations, we can write:

    M^2 = sum_i ||x_i - z_i||^2 = sum_i ||(x_i - xbar) - A (y_i - ybar)||^2 + n ||xbar - A ybar - b||^2

(the cross terms vanish because the centred vectors sum to zero). Since only the last term depends on the translation vector, this function is minimised when that term is equal to 0. Thus:

    b = xbar - A ybar

The first step in Procrustes analysis is therefore to centre the matrices X and Y and remember the column means of X and Y.
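The claim that centring takes care of the translation can be checked numerically. In the following R sketch (variable names such as b_hat are illustrative), X is built as an exact rotated-and-shifted copy of Y, and b = xmean - A ymean recovers the known shift:

```r
set.seed(1)
Y <- matrix(rnorm(20), ncol = 2)                  # 10 points in 2 dimensions
theta <- pi / 6
A <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), 2, 2)     # a known rotation
b <- c(3, -1)                                     # a known translation
X <- t(A %*% t(Y) + b)                            # x_i = A y_i + b exactly

xmean <- colMeans(X); ymean <- colMeans(Y)        # centroids of each configuration
b_hat <- xmean - as.vector(A %*% ymean)           # recovered translation
max(abs(b_hat - b))                               # essentially 0: matches the known b
```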

Procrustes analysis: rotation matrix

Once we have subtracted from each column its mean, the remaining problem is to find the orthogonal matrix (a rotation, possibly combined with a reflection). Writing the centred configurations as n x p matrices X and Y whose rows are the observations, we can write:

    M^2 = tr((X - Y A^T)^T (X - Y A^T)) = tr(X^T X) + tr(Y^T Y) - 2 tr(A Y^T X)

Here we used the fact that circular permutation of matrices is valid under the trace operator and that A is orthogonal:

    tr(A Y^T Y A^T) = tr(Y^T Y A^T A) = tr(Y^T Y)

Since in the expression for M^2 only the last term depends on A, the problem reduces to the constrained maximisation of tr(A Y^T X) subject to A^T A = I. It can be done using Lagrange's multipliers technique.

Rotation matrix using SVD

Let us introduce a symmetric matrix L of Lagrange multipliers for the constraint A^T A = I, with a factor 1/2. Then we want to maximise:

    F(A) = tr(A Y^T X) - (1/2) tr(L (A^T A - I))

If we take the first derivatives of this expression with respect to the matrix A and equate them to 0, we get:

    X^T Y - A L = 0,  i.e.  A L = X^T Y      (1)

Here we used the facts that d tr(A Y^T X)/dA = (Y^T X)^T = X^T Y and d tr(L A^T A)/dA = 2 A L, and the fact that the matrix of multipliers L is symmetric. Thus we have the necessary linear equations to find the required orthogonal matrix. To solve equation (1), let us use the SVD of Y^T X:

    Y^T X = U D V^T

where V and U are p x p orthogonal matrices and D is the diagonal matrix of the singular values.

Rotation matrix and SVD

If we use the fact that A is orthogonal, then from equation (1):

    L^2 = (A L)^T (A L) = (X^T Y)^T (X^T Y) = U D^2 U^T,  so  L = U D U^T

and

    A = X^T Y L^{-1} = V D U^T U D^{-1} U^T = V U^T

This gives the solution for the rotation (orthogonal) matrix. Now we can calculate the least-squares distance between the two configurations:

    M^2 = tr(X^T X) + tr(Y^T Y) - 2 tr(A Y^T X) = tr(X^T X) + tr(Y^T Y) - 2 tr(D)

since tr(A Y^T X) = tr(V U^T U D V^T) = tr(D). Thus we have the expressions for the rotation matrix and for the distance between configurations after matching. It is interesting to note that to find the distance between configurations it is not necessary to rotate one of them. One more useful expression is:

    M^2 = tr(X^T X) + tr(Y^T Y) - 2 tr((X^T Y Y^T X)^{1/2})

since the singular values of Y^T X are the square roots of the eigenvalues of X^T Y Y^T X. This expression shows that it is not even necessary to do the SVD to find the distance between configurations.
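A small R sketch of this result (the names Xn, Yn, A_hat are illustrative): if Xn is an exactly rotated copy of Yn, the SVD of Yn^T Xn recovers the rotation, and the trace formula gives a distance of zero.

```r
set.seed(2)
Yn <- scale(matrix(rnorm(30), ncol = 3), scale = FALSE)  # centred 10 x 3 configuration
theta <- pi / 4
A <- diag(3)
A[1:2, 1:2] <- matrix(c(cos(theta), sin(theta),
                        -sin(theta), cos(theta)), 2, 2)  # rotation about the 3rd axis
Xn <- Yn %*% t(A)                       # x_i = A y_i, written in row form

s <- svd(crossprod(Yn, Xn))             # Yn^T Xn = U D V^T
A_hat <- s$v %*% t(s$u)                 # A = V U^T
max(abs(A_hat - A))                     # essentially 0: the rotation is recovered

# distance after matching, via the trace formula
d2 <- sum(Xn^2) + sum(Yn^2) - 2 * sum(s$d)
d2                                      # essentially 0: perfect match
```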

Algorithm

Problem: given two configuration matrices X and Y with the same dimensions, find the rotation and translation that bring Y as close as possible to X.

1) Find the centroids (column means) of X and Y; call them xmean and ymean.
2) Subtract from each column its corresponding mean; call the new matrices Xn and Yn.
3) Form Yn^T Xn and find its SVD: Yn^T Xn = U D V^T.
4) The rotation matrix is A = V U^T.
5) The translation vector is b = xmean - A ymean.
6) The squared distance between the configurations is d^2 = tr(Xn^T Xn) + tr(Yn^T Yn) - 2 tr(D).

R code

procrustes = function(X, Y) {
  # Simple Procrustes analysis: match Y to X.
  # Helpers: tr computes the trace, rmmean removes column means.
  tr = function(m) sum(diag(m))
  rmmean = function(M) list(matr = sweep(M, 2, colMeans(M)), mean = colMeans(M))
  x1 = rmmean(X); y1 = rmmean(Y)
  Xn = x1$matr; Yn = y1$matr
  xmean = x1$mean; ymean = y1$mean
  rm(x1); rm(y1)
  s = svd(crossprod(Yn, Xn))                  # Yn^T Xn = U D V^T
  A = s$v %*% t(s$u)                          # rotation matrix A = V U^T
  d = sqrt(tr(crossprod(Xn, Xn)) + tr(crossprod(Yn, Yn)) - 2 * sum(s$d))
  b = xmean - as.vector(A %*% ymean)          # translation vector
  list(matr = A, trans = b, dist = d)
}
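A usage sketch: the following self-contained R fragment repeats the same steps end to end and checks that the recovered rotation and translation reproduce X from Y (the names A_hat, b_hat, X_hat are illustrative).

```r
set.seed(3)
Y <- matrix(rnorm(40), ncol = 2)              # 20 points in 2 dimensions
theta <- 1
A <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), 2, 2) # known rotation
b <- c(-2, 5)                                 # known translation
X <- t(A %*% t(Y) + b)                        # x_i = A y_i + b exactly

xm <- colMeans(X); ym <- colMeans(Y)          # step 1: centroids
Xn <- sweep(X, 2, xm); Yn <- sweep(Y, 2, ym)  # step 2: centre
s <- svd(crossprod(Yn, Xn))                   # step 3: SVD of Yn^T Xn
A_hat <- s$v %*% t(s$u)                       # step 4: rotation
b_hat <- xm - as.vector(A_hat %*% ym)         # step 5: translation

X_hat <- t(A_hat %*% t(Y) + b_hat)            # apply the recovered match to Y
max(abs(X_hat - X))                           # essentially 0
```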

Some modifications

There are some situations where straightforward use of Procrustes analysis may not be appropriate:

1) The dimensions of the configurations can be different. There are two ways of handling this problem. The first is to pad the low-dimensional (k) configuration with zero coordinates to make it p-dimensional; this assumes the k-dimensional configuration lies in a k-dimensional subspace of the p-dimensional space, with the first k dimensions coinciding. The second is to collapse the high-dimensional configuration to the lower dimension, i.e. to project the p-dimensional configuration onto a k-dimensional space.

2) The scales of the configurations may be different. In this case we can add a scale factor c to the function we minimise:

    M^2 = sum_i ||x_i - c A y_i - b||^2 = tr(X^T X) + c^2 tr(Y^T Y) - 2 c tr(D)

If we find the orthogonal matrix as before, then the expression for the scale factor is:

    c = tr(D) / tr(Y^T Y)

As a result, the distance M between the configurations is no longer symmetric with respect to X and Y.

3) Sometimes it is necessary to weight some points down and others up. In these cases Procrustes analysis can be performed using a diagonal matrix of weights W = diag(w_1, ..., w_n). We want to minimise the function:

    M^2 = sum_i w_i ||x_i - A y_i - b||^2

This modification is taken into account by using the SVD of Y^T W X instead of Y^T X.
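The scale-factor formula can also be checked numerically. In this R sketch (c_true and c_hat are illustrative names), Xn is an exactly scaled and rotated copy of Yn, and c = tr(D) / tr(Yn^T Yn) recovers the scale:

```r
set.seed(4)
Yn <- scale(matrix(rnorm(24), ncol = 2), scale = FALSE)  # centred 12 x 2 configuration
theta <- 0.3
A <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), 2, 2)            # known rotation
c_true <- 2.5                                            # known scale factor
Xn <- c_true * Yn %*% t(A)                               # x_i = c A y_i

s <- svd(crossprod(Yn, Xn))                              # Yn^T Xn = U D V^T
c_hat <- sum(s$d) / sum(Yn^2)                            # c = tr(D) / tr(Yn^T Yn)
c_hat                                                    # essentially 2.5
```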