1
Quadratic Forms, Characteristic Roots and Characteristic Vectors
Mohammed Nasser, Professor, Dept. of Statistics, RU, Bangladesh
"The use of matrix theory is now widespread and essential in the modern treatment of univariate and multivariate statistical methods." (C. R. Rao)
2
Contents
Linear Map and Matrices
Quadratic Forms and Their Applications in MM
Classification of Quadratic Forms
Quadratic Forms and Inner Product
Definitions of Characteristic Roots and Characteristic Vectors
Geometric Interpretations
Properties of Grammian Matrices
Spectral Decomposition and Applications
Matrix Inequalities and Maximization
Computations
3
Relation between MM (ML) and Vector space
Statistical concepts/techniques and their vector-space counterparts:
Variance: length of a vector (quadratic form)
Covariance: dot product of two vectors
Correlation: angle between two vectors
Regression and classification: mapping between two vector spaces
PCA/LDA/CCA: orthogonal/oblique projection onto a lower-dimensional subspace
4
Some Vector Concepts
Dot product of two vectors = a scalar.
Length of a vector (right-angle triangle, Pythagoras' theorem): ||x|| = (x₁² + x₂²)^{1/2} in two dimensions, and ||x|| = (x₁² + x₂² + x₃²)^{1/2} in three.
Inner product of a vector with itself = (vector length)²: xᵀx = x₁² + x₂² + x₃² = ||x||².
5
Some Vector Concepts
Angle θ between two vectors x and y: cos θ = xᵀy / (||x|| ||y||).
Orthogonal vectors: xᵀy = 0, i.e. θ = π/2.
6
Linear Map and Matrices
Linear mappings are almost omnipresent. If the domain and the co-domain are both finite-dimensional vector spaces, each linear mapping can be represented uniquely by a matrix with respect to a specific pair of bases. We intend to study the properties of a linear mapping through the properties of its matrix.
7
Linear Map and Matrices
This isomorphism is basis-dependent.
8
Linear map and Matrices
Let A be similar to B, i.e. B = P⁻¹AP. Similarity defines an equivalence relation on the vector space of square matrices of order n; it partitions that space into equivalence classes, and each equivalence class represents a unique linear operator. How can we choose (i) the simplest matrix in each equivalence class and (ii) the one of special interest?
9
Linear map and Matrices
Two matrices representing the same linear transformation with respect to different bases must be similar. A major concern of ours is to make the best choice of basis, so that the linear operator we are working with has as simple a representing matrix as possible. A diagonal matrix is very useful: for example, if D = P⁻¹AP is diagonal, then Dⁿ = P⁻¹AⁿP, so powers of A are easy to compute.
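To make this concrete, here is a minimal R sketch (with an arbitrary 2×2 matrix of my own choosing, not from the slides): since Dⁿ = P⁻¹AⁿP is equivalent to Aⁿ = PDⁿP⁻¹, a power of A reduces to powering the diagonal entries of D.

A <- matrix(c(2, 1, 1, 2), nrow = 2)      # arbitrary diagonalizable example
e <- eigen(A)
P <- e$vectors                            # columns are eigenvectors of A
D <- diag(e$values)                       # diagonal matrix of eigenvalues
A5 <- P %*% D^5 %*% solve(P)              # A^5 = P D^5 P^{-1}; D^5 just powers the diagonal
all.equal(A5, A %*% A %*% A %*% A %*% A)  # TRUE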
10
Linear map and Matrices
Each equivalence class represents a unique linear operator. Can we characterize the class in a simpler way? Yes, we can, under extra conditions. The concept of characteristic roots plays an important role in this regard.
11
Quadratic Form
Definition: A quadratic form in n variables x₁, x₂, …, xₙ is the general homogeneous function of second degree in those variables. In matrix notation, the quadratic form is given by Y = xᵀAx = Σᵢ Σⱼ aᵢⱼ xᵢxⱼ.
12
Examples of Some Quadratic Forms
Examples 1-3 (the explicit forms appeared on the slide; example 1 contains the cross-product term 6x₁x₂, see the next slide). Example 3 is in standard form, i.e. it has no cross-product terms. What is its use? The matrix A representing a particular quadratic form is not unique; to make it unique, it is customary to write A as a symmetric matrix.
13
In Fact Infinitely Many A's; the Symmetric A
For example 1 we have to take a₁₂ and a₂₁ such that a₁₂ + a₂₁ = 6, and we can do this in infinitely many ways. The symmetric choice a₁₂ = a₂₁ = 3 gives the unique symmetric A.
14
Its Importance in Statistics
Variance is a fundamental concept in statistics. It is nothing but a quadratic form whose matrix is idempotent of rank n−1 (the centering matrix). Quadratic forms play a central role in multivariate statistical analysis, for example in principal component analysis, factor analysis, and discriminant analysis.
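As a small R sketch of this fact (with made-up data): the sample variance equals the quadratic form xᵀCx/(n−1), where C = I − (1/n)J is the idempotent centering matrix of rank n−1.

x <- c(2, 4, 4, 4, 5, 5, 7, 9)           # made-up data
n <- length(x)
C <- diag(n) - matrix(1, n, n) / n        # centering matrix: idempotent, rank n-1
as.numeric(t(x) %*% C %*% x) / (n - 1)    # the quadratic form, divided by n-1 ...
var(x)                                    # ... matches the sample variance
all.equal(C %*% C, C)                     # idempotent: TRUE
qr(C)$rank                                # rank n-1 = 7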
15
Multivariate Gaussian
Its importance in statistics: the multivariate Gaussian density is built around a quadratic form in its exponent, f(x) = (2π)^{-p/2} |Σ|^{-1/2} exp{-(x − μ)ᵀΣ⁻¹(x − μ)/2}.
16
Its Importance in Statistics
Bivariate Gaussian
17
Spherical, diagonal, full covariance
18
Quadratic Form as Inner Product
Length of Y: ||Y|| = (YᵀY)^{1/2}; XᵀY is the dot product of X and Y.
Let A = CᵀC. Then XᵀAX = XᵀCᵀCX = (CX)ᵀ(CX) = YᵀY, where Y = CX.
Similarly XᵀAY = XᵀCᵀCY = (CX)ᵀ(CY) = WᵀZ, where W = CX and Z = CY. What is its geometric meaning?
Also XᵀAX = (AᵀX)ᵀX = Xᵀ(AX).
Different nonsingular C's represent different inner products, and different inner products give different geometries.
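A brief R sketch (C chosen arbitrarily for illustration) of the identity XᵀAX = (CX)ᵀ(CX) when A = CᵀC:

C <- matrix(c(1, 0, 2, 1), nrow = 2)  # an arbitrary nonsingular C
A <- t(C) %*% C                       # A = C'C
x <- c(3, 1)
y <- C %*% x                          # y = Cx
as.numeric(t(x) %*% A %*% x)          # x'Ax ...
sum(y^2)                              # ... equals y'y = ||Cx||^2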
19
Euclidean Distance and Mathematical Distance
The usual human concept of distance is Euclidean distance, in which each coordinate contributes equally. Mathematicians define a distance on an arbitrary set by generalizing its three properties: 1) d(P,Q) = d(Q,P); 2) d(P,Q) = 0 if and only if P = Q; and 3) d(P,Q) ≤ d(P,R) + d(R,Q) for all R.
20
Statistical Distance
Weight coordinates that are subject to a great deal of variability less heavily than those that are not highly variable. Which point is nearer to the data set, if the data set were itself a point?
21
Statistical Distance for Uncorrelated Data
22
Ellipse of Constant Statistical Distance for Uncorrelated Data
(Figure: ellipse of constant statistical distance in the (x₁, x₂) plane.)
23
Scatter Plot for Correlated Measurements
24
Statistical Distance under Rotated Coordinate System
25
General Statistical Distance
26
Necessity of Statistical Distance
27
Mahalanobis Distance
Population version: d²(x, μ) = (x − μ)ᵀΣ⁻¹(x − μ). Sample version: d²(x, x̄) = (x − x̄)ᵀS⁻¹(x − x̄).
We can robustify it by using robust estimators of the location and scatter functionals.
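A short R sketch of both versions (simulated data; cov.rob from the MASS package is one possible robust estimator, my choice for illustration):

set.seed(1)
X <- matrix(rnorm(200), ncol = 2)     # simulated data: 100 points in R^2
# Sample version: plug in the sample mean and covariance
d2 <- mahalanobis(X, center = colMeans(X), cov = cov(X))
# Robustified version: plug in robust location/scatter estimates
library(MASS)
rob <- cov.rob(X)                     # robust estimates of location and scatter
d2_rob <- mahalanobis(X, center = rob$center, cov = rob$cov)
head(cbind(d2, d2_rob))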
28
Classification of Quadratic Form
(Chart: quadratic forms classified as positive definite, positive semi-definite, negative definite, negative semi-definite, or indefinite.)
29
Classification of Quadratic Form Definitions
1. Positive definite: a quadratic form Y = XᵀAX is said to be positive definite iff XᵀAX > 0 for all x ≠ 0. The matrix A is then said to be a positive definite matrix.
2. Positive semi-definite: a quadratic form Y = XᵀAX is said to be positive semi-definite iff XᵀAX ≥ 0 for all x, and there exists x ≠ 0 such that XᵀAX = 0. The matrix A is then said to be a positive semi-definite matrix.
3. Negative definite: a quadratic form Y = XᵀAX is said to be negative definite iff XᵀAX < 0 for all x ≠ 0. The matrix A is then said to be a negative definite matrix.
30
Classification of Quadratic Form Definitions
4. Negative semi-definite: a quadratic form is said to be negative semi-definite iff XᵀAX ≤ 0 for all x, and there exists x ≠ 0 such that XᵀAX = 0. The matrix A is then said to be a negative semi-definite matrix.
5. Indefinite: quadratic forms and their associated symmetric matrices need not be definite or semi-definite in any of the above senses. In this case the quadratic form is said to be indefinite; that is, it can be negative, zero, or positive depending on the value of x.
31
Two Theorems On Quadratic Form
Theorem 1: A quadratic form can always be expressed with respect to a given coordinate system as Y = xᵀAx, where A is a unique symmetric matrix.
Theorem 2: Two symmetric matrices A and B represent the same quadratic form if and only if B = PᵀAP, where P is a non-singular matrix.
32
Importance of Standard Form
From the standard form xᵀAx = Σᵢ aᵢyᵢ² we can easily classify a quadratic form. It is:
positive definite if aᵢ > 0 for all i;
positive semi-definite if aᵢ ≥ 0 for all i and aᵢ = 0 for some i;
negative definite if aᵢ < 0 for all i;
negative semi-definite if aᵢ ≤ 0 for all i and aᵢ = 0 for some i;
indefinite if some aᵢ are positive and some are negative.
33
Importance of Standard Form
That is why, using a suitable nonsingular transformation (why nonsingular?), we try to transform a general XᵀAX into a standard form. If we can find a nonsingular matrix P such that the form becomes diagonal, we can easily classify it. We can do this i) by a congruence transformation or ii) using eigenvalues and eigenvectors. Method ii) is the one mostly used in multivariate methods.
34
Importance of Determinant, Eigenvalues and Diagonal Elements
1. Positive definite: (a) A quadratic form is positive definite iff the nested (leading) principal minors of A are all positive: |A₁| > 0, |A₂| > 0, …, |Aₙ| > 0. Evidently a matrix A is positive definite only if det(A) > 0. (b) A quadratic form Y = XᵀAX is positive definite iff all the eigenvalues of A are positive.
35
Importance of Determinant, Eigenvalues and Diagonal Elements
2. Positive semi-definite: (a) A quadratic form is positive semi-definite iff the nested principal minors of A are all non-negative. (b) A quadratic form Y = XᵀAX is positive semi-definite iff at least one eigenvalue of A is zero and the remaining eigenvalues are positive.
36
Continued
3. Negative definite: (a) A quadratic form is negative definite iff the nested principal minors of A alternate in sign, starting negative: |A₁| < 0, |A₂| > 0, |A₃| < 0, … Evidently a matrix A is negative definite only if (−1)ⁿ det(A) > 0, so det(A) is negative or positive depending on the order n of A. (b) A quadratic form Y = XᵀAX is negative definite iff all the eigenvalues of A are negative.
37
Continued
4. Negative semi-definite: (a) A quadratic form is negative semi-definite iff the nested principal minors of A satisfy (−1)ᵏ|Aₖ| ≥ 0 for k = 1, …, n. Evidently a matrix A is negative semi-definite only if det(A) ≤ 0 when n is odd and det(A) ≥ 0 when n is even. (b) A quadratic form is negative semi-definite iff at least one eigenvalue of A is zero and the remaining eigenvalues are negative.
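The eigenvalue criteria above are easy to apply in R; the helper below is my own sketch (its name is hypothetical), classifying a symmetric matrix by the signs of its eigenvalues, with a small tolerance for rounding error:

classify_qf <- function(A, tol = 1e-8) {
  ev <- eigen(A, symmetric = TRUE, only.values = TRUE)$values
  if (all(ev > tol)) return("positive definite")
  if (all(ev >= -tol)) return("positive semi-definite")
  if (all(ev < -tol)) return("negative definite")
  if (all(ev <= tol)) return("negative semi-definite")
  "indefinite"
}
classify_qf(matrix(c(2, 1, 1, 2), 2))  # "positive definite"
classify_qf(matrix(c(1, 1, 1, 1), 2))  # "positive semi-definite" (eigenvalues 2 and 0)
classify_qf(diag(c(1, -1)))            # "indefinite"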
38
Theorem on Quadratic Form (Congruent Transformation)
If Y = XᵀAX is a real quadratic form in n variables x₁, x₂, …, xₙ with rank r, i.e. ρ(A) = r, then there exists a non-singular matrix P of order n such that x = Pz converts Y into the canonical form Y = λ₁z₁² + λ₂z₂² + … + λᵣzᵣ², where λ₁, λ₂, …, λᵣ are all different from zero. This implies that the number of nonzero squared terms equals the rank of A.
39
Grammian (Gram) Matrix
Grammian matrix: if A is an m×n matrix, then S = AᵀA is called the Grammian matrix of A; it is a symmetric n-rowed matrix.
Properties:
a. Every positive definite or positive semi-definite matrix can be represented as a Grammian matrix.
b. The Grammian matrix AᵀA is positive definite or positive semi-definite according as the rank of A is equal to or less than the number of columns of A.
c. If AᵀA = 0 then A = 0.
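A quick R check of property b (random A of my own choosing): AᵀA is positive definite when A has full column rank, and only positive semi-definite when the rank drops.

set.seed(2)
A <- matrix(rnorm(12), nrow = 4)               # 4 x 3, full column rank (almost surely)
eigen(crossprod(A), symmetric = TRUE)$values   # all positive: A'A is positive definite
A2 <- cbind(A[, 1], A[, 1], A[, 2])            # rank 2 < 3 columns
eigen(crossprod(A2), symmetric = TRUE)$values  # smallest eigenvalue is (numerically) zero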
40
What are eigenvalues?
Given a matrix A, x is an eigenvector and λ the corresponding eigenvalue if Ax = λx. A must be square, and the determinant of A − λI must be zero: Ax − λx = 0 implies (A − λI)x = 0. The trivial solution is x = 0; the non-trivial solution occurs when det(A − λI) = 0.
Are eigenvectors unique? No: if x is an eigenvector, then αx is also an eigenvector with the same eigenvalue λ, since A(αx) = α(Ax) = α(λx) = λ(αx).
41
Calculating the Eigenvectors/values
Expand det(A − λI) = 0 for a 2 × 2 matrix: it is a simple quadratic equation in λ with two solutions (possibly complex). This "characteristic equation" is solved for λ, and each root λ then yields its eigenvectors x.
42
Eigenvalue example
Consider a 2 × 2 matrix whose eigenvalues are λ = 0 and λ = 5 (the matrix consistent with the eigenpairs below is A with rows (1, 2) and (2, 4)). The corresponding eigenvectors can be computed as follows: for λ = 0, one possible solution is x = (2, −1); for λ = 5, one possible solution is x = (1, 2).
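This can be verified directly in R; the matrix below is my reconstruction from the stated eigenpairs (it is the unique matrix with these eigenvalues and eigenvectors):

A <- matrix(c(1, 2, 2, 4), nrow = 2)  # reconstructed example matrix
A %*% c(2, -1)                         # (0, 0)': eigenvalue 0
A %*% c(1, 2)                          # (5, 10)' = 5 * (1, 2)': eigenvalue 5
eigen(A)$values                        # 5 and 0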
43
Geometric Interpretation of Eigen Roots and Vectors
We know from the definition of eigenvalues and eigenvectors that Ax = λx (**), where A is an m×m matrix, x is an m-vector and λ is a scalar. From the right side of (**) we see that the vector is multiplied by a scalar, so x and λx lie on the same line. The left side of (**) shows the effect of multiplying the vector x by the matrix operator A; in general a matrix operator may change both the direction and the magnitude of the vector.
44
Geometric Interpretation of Eigen Roots and Vectors
Hence our goal is to find vectors that change in magnitude but remain on the same line after matrix multiplication. Now the question arises: do these eigenvectors, along with their respective changes in magnitude, characterize the matrix? The answer is given by the DECOMPOSITION THEOREMS.
45
Geometric Interpretation of Eigen Roots and Vectors
(Figure: the operator A maps the vectors x₁ and x₂ to Ax₁ and Ax₂ in the (X, Y, Z) coordinate system.)
46
More to Notice
47
Properties of Eigenvalues and Eigenvectors
If B = CAC⁻¹, where A, B and C are all n×n, then A and B have the same eigenvalues, and if x is an eigenvector of A then Cx is an eigenvector of B.
The eigenvalues of A and Aᵀ are the same.
An eigenvector x ≠ 0 cannot be associated with more than one eigenvalue.
The eigenvectors of a matrix A are linearly independent if they correspond to distinct eigenvalues.
Let A be a square matrix of order m and suppose all its eigenvalues are distinct. Then A is similar to a diagonal matrix Λ, i.e. P⁻¹AP = Λ.
Eigenvalues and eigenvectors are all real for any real symmetric matrix A.
If λᵢ and λⱼ are two distinct eigenvalues of a real symmetric matrix A, then the corresponding eigenvectors xᵢ and xⱼ are orthogonal.
48
Properties of Eigenvalues and Eigenvectors
If λ₁, λ₂, …, λₘ are the eigenvalues of the non-singular matrix A, then λ₁⁻¹, λ₂⁻¹, …, λₘ⁻¹ are the eigenvalues of A⁻¹.
Let A, B be two square matrices of order m. Then the eigenvalues of AB are exactly the eigenvalues of BA.
Let A, B be m×n and n×m matrices respectively, where m ≤ n. Then the eigenvalues of (BA)ₙ×ₙ consist of n − m zeros together with the m eigenvalues of (AB)ₘ×ₘ.
49
Properties of Eigenvalues and Eigenvectors
Let A be a square matrix of order m with eigenvalues λ₁, λ₂, …, λₘ; then det(A) = λ₁λ₂⋯λₘ.
Let A be an m×m matrix with eigenvalues λ₁, λ₂, …, λₘ; then tr(A) = tr(Λ) = λ₁ + λ₂ + … + λₘ.
If A has eigenvalues λ₁, λ₂, …, λₘ, then A − kI has eigenvalues λ₁ − k, λ₂ − k, …, λₘ − k, and kA has eigenvalues kλ₁, kλ₂, …, kλₘ, where k is a scalar.
If A is an orthogonal matrix then all its eigenvalues have absolute value 1.
Let A be a square matrix of order m that is idempotent. Then its eigenvalues are either 0 or 1.
51
Eigen/diagonal Decomposition
Let S be a square matrix with m linearly independent eigenvectors (a "non-defective" matrix). Theorem: there exists an eigendecomposition S = UΛU⁻¹ (cf. the matrix diagonalization theorem), where Λ is diagonal. The columns of U are the eigenvectors of S, and the diagonal elements of Λ are the eigenvalues of S. The decomposition is unique for distinct eigenvalues.
52
Diagonal decomposition: why/how
Let U have the eigenvectors as columns; then SU can be written SU = UΛ. Thus U⁻¹SU = Λ, and S = UΛU⁻¹.
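These steps can be traced in R with an arbitrary diagonalizable matrix (my own example, not from the slides):

S <- matrix(c(2, 0, 1, 3), nrow = 2)       # eigenvalues 2 and 3
e <- eigen(S)
U <- e$vectors                              # eigenvectors as columns
Lambda <- diag(e$values)
all.equal(S %*% U, U %*% Lambda)            # SU = U Lambda: TRUE
all.equal(S, U %*% Lambda %*% solve(U))     # S = U Lambda U^{-1}: TRUE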
53
Diagonal decomposition - example
Recall the matrix S and its eigenvectors from the earlier example; stacking the eigenvectors as columns gives U. Since UU⁻¹ = I, inverting U and forming UΛU⁻¹ recovers S (the numerical entries were displayed on the slide).
54
Example continued: divide U (and multiply U⁻¹) by the lengths of the eigenvectors, so the columns become orthonormal. Then S = QΛQᵀ, where Q is orthogonal (Q⁻¹ = Qᵀ). Why? Because eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal, and normalization makes them orthonormal.
55
Symmetric Eigen Decomposition
If S is a symmetric matrix:
Theorem: there exists a (unique) eigendecomposition S = QΛQᵀ, where Q is orthogonal: Q⁻¹ = Qᵀ. The columns of Q are normalized eigenvectors and are mutually orthogonal, and everything is real.
56
Spectral Decomposition theorem
If A is a symmetric m×m matrix with (λᵢ, eᵢ), i = 1, …, m, being its m eigenvalue-eigenvector pairs, then A = Σᵢ λᵢeᵢeᵢᵀ. This is also called the eigen (spectral) decomposition theorem: any symmetric matrix can be reconstructed from its eigenvalues and eigenvectors.
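A compact R sketch of the theorem with a random symmetric matrix (my own example): summing λᵢeᵢeᵢᵀ over all eigenpairs reconstructs A.

set.seed(3)
M <- matrix(rnorm(9), 3, 3)
A <- (M + t(M)) / 2                          # a random symmetric matrix
e <- eigen(A, symmetric = TRUE)
recon <- matrix(0, 3, 3)
for (i in 1:3)
  recon <- recon + e$values[i] * tcrossprod(e$vectors[, i])  # add lambda_i e_i e_i'
all.equal(A, recon)                          # TRUE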
57
Example for Spectral Decomposition
Let A be a symmetric, positive definite matrix. Computing its eigenvalues and the eigenvectors for the corresponding eigenvalues, we consequently obtain A = λ₁e₁e₁ᵀ + λ₂e₂e₂ᵀ + … (the numerical example was shown on the slide).
58
The Square Root of a Matrix
The spectral decomposition allows us to express a square matrix in terms of its eigenvalues and eigenvectors, and this expression enables us to conveniently create a square-root matrix. Let A be a p×p positive definite matrix with the spectral decomposition A = PΛPᵀ, where P = [e₁ e₂ … e_p] with PᵀP = PPᵀ = I and Λ = diag(λᵢ).
59
The Square Root of a Matrix
Let A^{1/2} = PΛ^{1/2}Pᵀ. This implies (PΛ^{1/2}Pᵀ)(PΛ^{1/2}Pᵀ) = PΛ^{1/2}Λ^{1/2}Pᵀ = PΛPᵀ = A. The matrix A^{1/2} is called the square root of A.
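In R (random positive definite A, my own example), the square root can be formed and checked as follows:

set.seed(4)
M <- matrix(rnorm(9), 3, 3)
A <- crossprod(M) + diag(3)                  # a random positive definite matrix
e <- eigen(A, symmetric = TRUE)
P <- e$vectors
Ahalf <- P %*% diag(sqrt(e$values)) %*% t(P) # A^{1/2} = P Lambda^{1/2} P'
all.equal(Ahalf %*% Ahalf, A)                # squares back to A: TRUE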
60
Matrix Square Root Properties
The square root of A has the following properties (prove them):
(A^{1/2})ᵀ = A^{1/2}, i.e. A^{1/2} is symmetric;
A^{1/2}A^{1/2} = A;
(A^{1/2})⁻¹ = PΛ^{−1/2}Pᵀ = A^{−1/2};
A^{1/2}A^{−1/2} = A^{−1/2}A^{1/2} = I.
61
Physical Interpretation of SPD (Spectral Decomposition)
Suppose xᵀAx = c². For p = 2, all x that satisfy this equation form an ellipse, since c² = λ₁(xᵀe₁)² + λ₂(xᵀe₂)² (using the spectral decomposition of the p.d. matrix A). What will be the case if we replace A by A⁻¹? (Figure: the ellipse has axes along e₁ and e₂; all points on it are at the same ellipse-distance. In the statistical setting, Var(eᵢᵀX) = eᵢᵀVar(X)eᵢ = λᵢ, the i-th eigenvalue of Var(X).) Let x₁ = cλ₁^{−1/2}e₁ and x₂ = cλ₂^{−1/2}e₂; both satisfy the equation and lie along the eigenvector directions. Note that the length of x₁ is cλ₁^{−1/2}, so ||x|| along each axis is inversely proportional to the square root of the corresponding eigenvalue of A.
62
Matrix Inequalities and Maximization
- Extended Cauchy-Schwarz inequality: let b and d be any two p×1 vectors and B a p×p positive definite matrix. Then (bᵀd)² ≤ (bᵀBb)(dᵀB⁻¹d), with equality iff b = kB⁻¹d (or d = kBb) for some constant k.
- Maximization lemma: let d be a given p×1 vector and B a p×p positive definite matrix. Then for an arbitrary nonzero vector x, max over x ≠ 0 of (xᵀd)²/(xᵀBx) = dᵀB⁻¹d, with the maximum attained when x = kB⁻¹d for any constant k ≠ 0.
63
Matrix Inequalities and Maximization
- Maximization of quadratic forms for points on the unit sphere: let B be a p×p positive definite matrix with eigenvalues λ₁ ≥ λ₂ ≥ … ≥ λ_p and associated eigenvectors e₁, e₂, …, e_p. Then max over x ≠ 0 of xᵀBx/xᵀx = λ₁ (attained at x = e₁), min over x ≠ 0 of xᵀBx/xᵀx = λ_p (attained at x = e_p), and the maximum over x orthogonal to e₁, …, eₖ is λ_{k+1} (attained at x = e_{k+1}).
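An R sketch of the first claim (random positive definite B of my own choosing): the Rayleigh quotient xᵀBx/xᵀx is maximized by the leading eigenvector and never exceeds λ₁.

set.seed(5)
M <- matrix(rnorm(9), 3, 3)
B <- crossprod(M) + diag(3)                  # a random positive definite matrix
e <- eigen(B, symmetric = TRUE)
rq <- function(x) as.numeric(t(x) %*% B %*% x / crossprod(x))  # Rayleigh quotient
rq(e$vectors[, 1])                            # equals lambda_1 ...
e$values[1]                                   # ... the largest eigenvalue
max(replicate(1000, rq(rnorm(3))))            # random directions stay below lambda_1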
64
Calculation in R
t <- sqrt(2)
x <- c(3.0046, t, t, 16.9967)
A <- matrix(x, nrow = 2)   # a 2 x 2 symmetric matrix
A
eigen(A)                   # returns $values (eigenvalues) and $vectors (eigenvectors as columns)
65
Thank you