
Introduction Given a matrix of distances D (square and symmetric, with zeros on the main diagonal), find variables that can, at least approximately, generate these distances. The matrix can also be a similarity matrix: square and symmetric, but with ones on the main diagonal and values between zero and one elsewhere. Broadly: distance (0 <= d <= 1) = 1 - similarity.

Principal Coordinates (Metric Multidimensional Scaling) Given the matrix of distances D, can we find a set of variables able to generate it? Can we find a data matrix X able to generate D?

Main idea of the procedure: (1) understand how to obtain D when X is known and given; (2) then work backwards to build the matrix X given D.

Procedure Remember that given a data matrix we obtain a zero-mean data matrix by the transformation X → PX, where P = I - (1/n)11' is the centering matrix; in what follows X denotes this zero-mean matrix. With it we can compute two square, symmetric matrices. The first is the covariance matrix S = (1/n)X'X. The second is the matrix Q = XX' of scalar products among the observations.
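As a minimal numerical sketch of these definitions (the data matrix X below is an arbitrary toy example, not from the slides):

```python
import numpy as np

# Toy data matrix: n = 4 observations, p = 2 variables (illustrative values).
X = np.array([[1.0, 2.0],
              [3.0, 0.0],
              [0.0, 1.0],
              [4.0, 3.0]])
n = X.shape[0]

P = np.eye(n) - np.ones((n, n)) / n   # centering matrix P = I - (1/n)11'
Xc = P @ X                            # zero-mean data matrix

S = Xc.T @ Xc / n                     # covariance matrix (p x p)
Q = Xc @ Xc.T                         # scalar products among observations (n x n)
```

Both S and Q are square and symmetric, and because Xc has zero-mean columns, every row of Q sums to zero.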

The matrix of products Q is closely related to the distance matrix D we are interested in. Elements of Q: the scalar products q_ij = x_i'x_j. Elements of D: the squared distances d_ij^2. Main result: given the matrix Q we can obtain the matrix D. The relation between D and Q is d_ij^2 = q_ii + q_jj - 2 q_ij.
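The element-wise relation d_ij^2 = q_ii + q_jj - 2 q_ij can be checked numerically; the configuration below is an arbitrary toy example:

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 1.0], [4.0, 3.0]])
n = X.shape[0]
Xc = X - X.mean(axis=0)
Q = Xc @ Xc.T

# Build the squared distances from Q: d_ij^2 = q_ii + q_jj - 2 q_ij
q = np.diag(Q)
D2 = q[:, None] + q[None, :] - 2 * Q

# Compare with squared Euclidean distances computed directly from X
# (centering does not change the distances between rows).
direct = np.array([[np.sum((X[i] - X[j]) ** 2) for j in range(n)]
                   for i in range(n)])
```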

How to recover Q given D? Note that, as we have zero-mean variables, the sum of any row of Q must be zero. Writing t = trace(Q) and summing the relation d_ij^2 = q_ii + q_jj - 2 q_ij over j gives sum_j d_ij^2 = n q_ii + t, which allows us to solve for the elements of Q.

1. Method to recover Q given D: double centering, Q = -(1/2)PDP, where D here contains the squared distances and P = I - (1/n)11'. Element-wise, q_ij = -(1/2)(d_ij^2 - row-i mean - column-j mean + overall mean), the means being taken over the squared distances.
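A sketch of the double-centering recovery, using a small toy configuration (D2 holds squared distances):

```python
import numpy as np

# Squared-distance matrix of a small toy configuration.
X = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 1.0], [4.0, 3.0]])
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)

n = D2.shape[0]
P = np.eye(n) - np.ones((n, n)) / n
Q = -0.5 * P @ D2 @ P        # double centering recovers the product matrix

Xc = X - X.mean(axis=0)      # Q should equal Xc Xc'
```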

2. Obtain X given Q We cannot recover X exactly because there are many solutions to this problem: if Q = XX', then also Q = XAA'X' for any orthogonal matrix A (since AA' = I), so Y = XA is also a solution. The standard solution: take the spectral decomposition of the matrix Q, Q = ABA', where A and B contain the eigenvectors and the non-zero eigenvalues of the matrix, and take as solution X = AB^(1/2). Note that then XX' = AB^(1/2)B^(1/2)A' = ABA' = Q.
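The spectral-decomposition step can be sketched as follows (toy Q built from an arbitrary configuration):

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 1.0], [4.0, 3.0]])
Xc = X - X.mean(axis=0)
Q = Xc @ Xc.T

# Spectral decomposition Q = A B A'; keep only the non-zero eigenvalues.
lam, A = np.linalg.eigh(Q)               # eigh returns ascending order
order = np.argsort(lam)[::-1]
lam, A = lam[order], A[:, order]
keep = lam > 1e-10
Y = A[:, keep] * np.sqrt(lam[keep])      # one solution: X = A B^(1/2)

# Y reproduces Q (and hence the distances) exactly, but is only
# determined up to rotation/reflection.
```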

Conclusion We say that D is compatible with a Euclidean metric if Q, obtained as Q = -(1/2)PDP, is non-negative definite (all eigenvalues non-negative).

Summary of the procedure
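The whole procedure can be collected in one short function; this is a sketch (the function name is mine, not from the slides):

```python
import numpy as np

def principal_coordinates(D2, k=2):
    """Classical (metric) MDS from a matrix of squared distances D2."""
    n = D2.shape[0]
    P = np.eye(n) - np.ones((n, n)) / n
    Q = -0.5 * P @ D2 @ P                       # 1. double centering
    lam, A = np.linalg.eigh(Q)                  # 2. spectral decomposition
    order = np.argsort(lam)[::-1][:k]
    lam, A = lam[order], A[:, order]
    return A * np.sqrt(np.maximum(lam, 0.0))    # 3. X = A B^(1/2)

# Round trip on a known 3-point configuration (a 3-4-5 right triangle):
X = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
Y = principal_coordinates(D2, k=2)
D2_rec = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=2)
```

The recovered coordinates Y differ from X by a rotation/reflection, but they reproduce the same distance matrix.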

Example 1. Cities

(Note that the entries add up to zero by rows and columns. The matrix has been divided by 10,000.)

Example 1 Eigenstructure of Q :

Final coordinates for the cities taking two dimensions:

Example 1. Plot

Similarities matrix

Example 2: similarity between products

Example 2

Relationship with PCA PCA: eigenvalues and eigenvectors of S. Principal Coordinates: eigenvalues and eigenvectors of Q. If the data are metric, both are identical. Principal Coordinates generalizes PCA to data that are not exactly metric.
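The equivalence for metric data rests on the fact that X'X and XX' share their non-zero eigenvalues, which is easy to verify on a toy matrix:

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 1.0], [4.0, 3.0]])
Xc = X - X.mean(axis=0)

# PCA diagonalizes X'X (p x p); principal coordinates diagonalizes Q = XX' (n x n).
ev_pca  = np.sort(np.linalg.eigvalsh(Xc.T @ Xc))[::-1]   # p = 2 eigenvalues
ev_pcoa = np.sort(np.linalg.eigvalsh(Xc @ Xc.T))[::-1]   # n = 4 eigenvalues

# The non-zero eigenvalues coincide; Q just pads with zeros.
```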

Biplots Represent jointly the observations by the rows of V_2 and the variables by the coordinates D_2^(1/2)A'_2. They are called biplots because a two-dimensional approximation of the data matrix is made.

Biplot

Non-metric MDS

A common method Idea: if we have a monotone relation between x and y, there must be an exact linear relationship between the ranks of the two variables. Hence: use ordinal (monotone) regression, or assign ranks and fit a regression between the ranks, iterating.
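The rank-invariance fact that the method relies on is easy to check (the transform below is an arbitrary monotone example):

```python
import numpy as np

def ranks(v):
    """Rank of each element (0 = smallest)."""
    r = np.empty(len(v), dtype=int)
    r[np.argsort(v)] = np.arange(len(v))
    return r

x = np.array([0.3, 2.0, 1.1, 5.0, 4.2])
y = np.exp(x) + 1.0      # a monotone but highly non-linear transform of x

# x and y are non-linearly related, yet their ranks agree exactly,
# so a regression between the ranks is exact.
rx, ry = ranks(x), ranks(y)
```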