METHODS OF TRANSFORMING NON-POSITIVE DEFINITE CORRELATION MATRICES


Katarzyna Wojtaszek, student number 1118676, CROSS

I will try to answer the following questions:
- How can I estimate a correlation matrix when I have data?
- What can I do if the matrices are non-PD?
  - Shrinking method
  - Eigenvalues method
  - Vines method
- How can we calculate distances between the original and transformed matrices?
- Which method is the best? (comparison and conclusions)

How can I estimate a correlation matrix if I have data? I can estimate the correlation matrix from data in two ways:

1. I can estimate each off-diagonal element separately.
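A standard choice for estimating a single off-diagonal element, and presumably what is meant here, is the Pearson sample correlation of variables i and j over the s observations (the notation below is mine):

```latex
\hat{r}_{ij} =
\frac{\sum_{k=1}^{s}(x_{ki}-\bar{x}_i)(x_{kj}-\bar{x}_j)}
     {\sqrt{\sum_{k=1}^{s}(x_{ki}-\bar{x}_i)^2}\,\sqrt{\sum_{k=1}^{s}(x_{kj}-\bar{x}_j)^2}}
```

Estimated pair by pair (for instance over different subsets of observations when data are missing), these entries need not assemble into a PD matrix, which is how non-PD pseudo correlation matrices arise.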

2. I can also estimate the whole matrix from all the data together, where the data are x_ij with i = 1,…,s (observations) and j = 1,…,n (variables).
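A minimal NumPy sketch of both estimation routes; the function names and the missing-data handling are my assumptions, not from the slides:

```python
import numpy as np

def corr_pairwise(X):
    """Estimate each off-diagonal element separately, using for each
    pair only the rows where both columns are observed. The result is
    a pseudo correlation matrix that may fail to be positive definite."""
    n = X.shape[1]
    R = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            mask = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            R[i, j] = R[j, i] = np.corrcoef(X[mask, i], X[mask, j])[0, 1]
    return R

def corr_full(X):
    """Estimate the whole matrix at once from the complete rows."""
    X = X[~np.isnan(X).any(axis=1)]
    return np.corrcoef(X, rowvar=False)
```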

What can I do when the matrices are non-PD? We can transform them into PD correlation matrices using one of three methods:
- Shrinking method
- Eigenvalues method
- Vines method

How can we calculate distances between the original and transformed matrices? There are many ways to measure the distance between two matrices. In my project I used the following formula:
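As a hedged stand-in for the formula, a common choice is the entry-wise squared (Frobenius-type) distance, with R̃ denoting the transformed matrix:

```latex
D(R,\tilde{R}) \;=\; \sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(r_{ij}-\tilde{r}_{ij}\bigr)^{2}
```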

1. SHRINKING METHOD

Linear shrinking. Assumptions:
- R is the given non-PD n×n pseudo correlation matrix
- R* is an arbitrary PD correlation matrix

Define, for λ ∈ [0, 1]:

R(λ) = R + λ(R* − R),

which is a pseudo correlation matrix.

Idea: find the smallest λ such that the matrix R(λ) is PD. Since R is non-PD, its smallest eigenvalue μ is negative, so we have to choose λ such that the smallest eigenvalue of R(λ) is non-negative. Because the smallest eigenvalue is a concave function of the matrix, writing μ* > 0 for the smallest eigenvalue of R*, we get

λ_min(R(λ)) ≥ (1 − λ)μ + λμ* ≥ 0 if λ ≥ −μ / (μ* − μ).

So, given the non-PD matrix R, we have found a matrix R(λ) which is PD.
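A minimal NumPy sketch of linear shrinking under these assumptions; taking R* to be the identity is one valid choice of a PD correlation matrix, and the small margin eps is my addition so that the result lands strictly inside the PD set:

```python
import numpy as np

def linear_shrink(R, R_star=None, eps=1e-8):
    """Shrink a non-PD pseudo correlation matrix R toward a PD
    correlation matrix R_star along R + lam*(R_star - R), using the
    smallest lam that makes the result positive definite."""
    if R_star is None:
        R_star = np.eye(R.shape[0])          # identity: always a PD correlation matrix
    mu = np.linalg.eigvalsh(R).min()         # smallest eigenvalue of R (negative)
    mu_star = np.linalg.eigvalsh(R_star).min()  # smallest eigenvalue of R* (positive)
    lam = min(1.0, -mu / (mu_star - mu) + eps)  # bound derived on the slide
    return R + lam * (R_star - R)
```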

Non-linear shrinking. Assumption: R is the given non-PD n×n pseudo correlation matrix. Procedure: shrink each off-diagonal element through f,

r*_ij = f( f⁻¹(r_ij) / (1 + δ) ),

where f is a strictly increasing odd function with f(0) = 0 and δ > 0.

I considered four such functions, f1, f2, f3 and f4 (compared in the table below).
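As a sketch of the procedure, here is the non-linear shrinking with f = tanh, which is one strictly increasing odd function with f(0) = 0 of the required type; the choice of tanh and the bisection search for the smallest workable δ are my assumptions:

```python
import numpy as np

def nonlinear_shrink(R, f=np.tanh, f_inv=np.arctanh, tol=1e-10):
    """Shrink the off-diagonal elements of R through a strictly
    increasing odd function f with f(0) = 0, searching for the
    smallest delta > 0 that makes the matrix positive definite."""
    off = ~np.eye(R.shape[0], dtype=bool)    # off-diagonal mask

    def shrink(delta):
        S = np.eye(R.shape[0])               # keep a unit diagonal
        S[off] = f(f_inv(R[off]) / (1.0 + delta))
        return S

    lo, hi = 0.0, 1.0
    while np.linalg.eigvalsh(shrink(hi)).min() <= 0:
        hi *= 2.0                            # grow delta until PD is reached
    while hi - lo > tol:                     # bisect down to the smallest delta
        mid = 0.5 * (lo + hi)
        if np.linalg.eigvalsh(shrink(mid)).min() > 0:
            hi = mid
        else:
            lo = mid
    return shrink(hi)
```

As δ grows, every off-diagonal element is pulled toward 0 and the matrix approaches the identity, so a PD result always exists.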

Comparison of the linear and non-linear shrinking methods.
[Diagram: paths from R in the space of n×n matrices into the set of PD matrices; linear shrinking moves along the segment toward R*, non-linear shrinking pulls the off-diagonal elements toward the identity I_n.]

2. THE EIGENVALUE METHOD

Assumptions:
- R is an n×n non-PD pseudo correlation matrix
- P is an orthogonal matrix such that R = P D P^T
- D is the diagonal matrix with the eigenvalues of R on its diagonal
- δ ≥ 0 is some constant

Idea: replace the negative eigenvalues in the matrix D by δ. We obtain R* = P D* P^T, where D* is the diagonal matrix with diagonal elements d_i* = max(d_i, δ) for i = 1, 2, …, n.
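A minimal NumPy sketch of the eigenvalue method; the final re-normalization to a unit diagonal is my addition, so that the output is again a correlation matrix:

```python
import numpy as np

def eigenvalue_method(R, delta=1e-8):
    """Replace the negative eigenvalues of R by a small constant
    delta >= 0 and rebuild the matrix from the spectral decomposition."""
    d, P = np.linalg.eigh(R)            # R = P @ diag(d) @ P.T
    d_star = np.maximum(d, delta)       # d_i* = max(d_i, delta)
    R_star = P @ np.diag(d_star) @ P.T
    s = np.sqrt(np.diag(R_star))        # rescale to unit diagonal (my addition)
    return R_star / np.outer(s, s)
```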

3. VINES METHOD

Assumptions: R is an n×n pseudo correlation matrix.
Idea: first we have to check whether our matrix is PD, by computing the partial correlations of a vine.

If some partial correlation V(·) lies outside (−1, 1), we change its value to one inside (−1, 1) and recalculate the correlations using the recursion

V(ij;L) = V(ij;kL) · √((1 − V(ik;L)²)(1 − V(jk;L)²)) + V(ik;L) · V(jk;L).

We obtain a new matrix, which we have to check again.
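A sketch of that recalculation step for a single entry, using the standard partial-correlation recursion (the function name and argument layout are mine):

```python
import numpy as np

def from_partial(r_ij_kL, r_ik_L, r_jk_L):
    """Invert one step of the partial-correlation recursion:
    recover rho_{ij;L} from rho_{ij;kL}, rho_{ik;L} and rho_{jk;L}."""
    return (r_ij_kL * np.sqrt((1 - r_ik_L**2) * (1 - r_jk_L**2))
            + r_ik_L * r_jk_L)
```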

Example. Let us say that we have a 4×4 matrix R. It is very useful to draw the graphical model (a vine on the nodes 1, 2, 3, 4).

Which method is the best? Comparing. Using Matlab I chose randomly 500 non-PD matrices, transformed them and calculated the average distances between non-PD and PD matrices. This table shows us my results. n 3 4 5 6 7 8 9 10 Lin. shrinking 2.7868 4.371 6.7233 9.8977 14.0027 18.4047 23.7102 29.6013 Shrinking f1 0.1388 0.4028 1.1251 2.5161 4.3623 6.76 9.8484 13.8416 Shrinking f2 0.2756 0.9696 2.382 4.6464 8.1327 11.4816 16.3835 20.5501 Shrinking f3 0.1441 0.4589 1.1432 2.5153 4.4483 6.9127 10.176 13.7543 Shrinking f4 0.4091 1.4379 3.3365 5.7357 8.6839 11.7034 15.686 18.9959 Eigenvalues 0.0861 0.2039 0.451 0.913 1.5799 2.3263 3.3845 4.7033 Vines 0.2285 1.2999 3.3251 6.6395 11.3295 17.813 24.7021 34.4963

ILLUSTRATION: [plot of the average distance versus the matrix dimension n for each method].

Conclusions:
- The linear shrinking method performs very badly because it shrinks all elements by the same relative amount.
- The eigenvalues method is fast and gives very good results regardless of the matrix dimension.
- For the non-linear shrinking method, the best choices of the projection function are f1 and f3.