Quadratic Forms, Characteristic Roots and Characteristic Vectors


Quadratic Forms, Characteristic Roots and Characteristic Vectors
Mohammed Nasser, Professor, Dept. of Statistics, RU, Bangladesh. Email: mnasser.ru@gmail.com
"The use of matrix theory is now widespread ... are essential in ... modern treatment of univariate and multivariate statistical methods." (C. R. Rao)

Contents
Linear Map and Matrices
Quadratic Forms and Their Applications in MM
Classification of Quadratic Forms
Quadratic Forms and Inner Product
Definitions of Characteristic Roots and Characteristic Vectors
Geometric Interpretations
Properties of Grammian Matrices
Spectral Decomposition and Applications
Matrix Inequalities and Maximization
Computations

Relation between MM (ML) and Vector Space
Statistical concept/technique and the corresponding concept in vector space:
Variance: length of a vector (quadratic form)
Covariance: dot product of two vectors
Correlation: angle between two vectors
Regression and classification: mapping between two vector spaces
PCA/LDA/CCA: orthogonal/oblique projection onto a lower-dimensional space

Some Vector Concepts
Dot product: x^T y is a scalar.
Length of a vector: ||x|| = (x1^2 + x2^2 + x3^2)^(1/2).
The inner product of a vector with itself is the squared length: x^T x = x1^2 + x2^2 + x3^2 = ||x||^2.
In the plane this is just Pythagoras' theorem for a right-angled triangle: ||x|| = (x1^2 + x2^2)^(1/2).

Some Vector Concepts
Angle between two vectors: cos θ = x^T y / (||x|| ||y||).
Orthogonal vectors: x^T y = 0, i.e. θ = π/2.

Linear Map and Matrices
Linear mappings are almost omnipresent.
If the domain and co-domain are both finite-dimensional vector spaces, each linear mapping can be uniquely represented by a matrix with respect to a specific pair of bases.
We intend to study properties of a linear mapping from properties of its matrix.

Linear Map and Matrices
This isomorphism between linear maps and matrices is basis dependent.

Linear Map and Matrices
Let A be similar to B, i.e. B = P^(-1)AP.
Similarity defines an equivalence relation on the vector space of square matrices of order n, i.e. it partitions that space into equivalence classes. Each equivalence class represents a unique linear operator.
How can we choose i) the simplest matrix in each equivalence class, and ii) the one of special interest?

Linear Map and Matrices
Two matrices representing the same linear transformation with respect to different bases must be similar.
A major concern of ours is to make the best choice of basis, so that the linear operator we are working with has a representing matrix in the chosen basis that is as simple as possible.
A diagonal matrix is a very useful choice: for example, if D = P^(-1)AP is diagonal, then D^n = P^(-1)A^nP, so powers of A are easy to compute.
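A minimal R sketch of this point, using a made-up 2x2 matrix (not one from the slides): once A is diagonalized, matrix powers reduce to powers of the diagonal entries.

A <- matrix(c(2, 1, 1, 2), nrow = 2)          # hypothetical matrix with distinct eigenvalues
e <- eigen(A)
P <- e$vectors                                 # columns are eigenvectors
D <- diag(e$values)                            # diagonal matrix of eigenvalues
round(solve(P) %*% A %*% P - D, 10)            # D = P^{-1} A P (zero matrix)
A5 <- P %*% diag(e$values^5) %*% solve(P)      # A^5 = P D^5 P^{-1}
round(A5 - A %*% A %*% A %*% A %*% A, 10)      # zero matrix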

Linear Map and Matrices
Each equivalence class represents a unique linear operator.
Can we characterize the class in a simpler way? Yes, we can, under extra conditions.
The concept of characteristic roots plays an important role in this regard.

Quadratic Form
Definition: A quadratic form in n variables x1, x2, ..., xn is the general homogeneous function of second degree in those variables. In matrix notation the quadratic form is Q(x) = x^T A x.

Examples of Some Quadratic Forms
(Three example forms are given on the slide; the third is in standard form, i.e. with no cross-product terms. What are its uses?)
The matrix A of a particular quadratic form is not unique; there can be many A's. To make it unique it is customary to take A symmetric.

In Fact Infinitely Many A's, One Symmetric A
For Example 1 we have to take a12 and a21 such that a12 + a21 = 6. We can do this in infinitely many ways; the symmetric choice a12 = a21 = 3 is unique.
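A small R check of this remark, using a hypothetical quadratic form whose cross-term coefficient is 6 (the diagonal entries below are made up, not the slide's Example 1): every split of 6 into a12 + a21 gives the same quadratic form, and (A + A^T)/2 is the symmetric representative.

x  <- c(1.5, -2)                          # arbitrary test vector
A1 <- matrix(c(2, 0, 6, 1), nrow = 2)     # a12 = 6, a21 = 0
A2 <- matrix(c(2, 4, 2, 1), nrow = 2)     # a12 = 2, a21 = 4
As <- (A1 + t(A1)) / 2                    # unique symmetric representative
c(t(x) %*% A1 %*% x, t(x) %*% A2 %*% x, t(x) %*% As %*% x)   # all three values are equal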

Its Importance in Statistics
Variance is a fundamental concept in statistics. The sample variance is nothing but a quadratic form with an idempotent matrix of rank (n-1): (n-1)s^2 = x^T (I - (1/n) 1 1^T) x, where the centering matrix I - (1/n) 1 1^T is idempotent with rank n-1.
Quadratic forms play a central role in multivariate statistical analysis, for example in principal component analysis, factor analysis, discriminant analysis, etc.
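A short R illustration of the variance statement above (simulated data; assumes the usual divisor n-1): the centering matrix is idempotent with rank n-1, and the quadratic form reproduces var(x).

set.seed(1)
n <- 6
x <- rnorm(n)
C <- diag(n) - matrix(1, n, n) / n             # centering matrix I - (1/n) 1 1'
all.equal(C %*% C, C)                          # idempotent
qr(C)$rank                                     # rank n - 1 = 5
c(drop(t(x) %*% C %*% x) / (n - 1), var(x))    # identical values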

Its Importance in Statistics: Multivariate Gaussian
The exponent of the multivariate Gaussian density, -(1/2)(x - μ)^T Σ^(-1) (x - μ), is a quadratic form in (x - μ).

Its Importance in Statistics: Bivariate Gaussian

Spherical, diagonal, and full covariance structures (figure)

Quadratic Form as Inner Product
Length of Y: ||Y|| = (Y^T Y)^(1/2); X^T Y is the dot product of X and Y.
Let A = C^T C. Then X^T A X = X^T C^T C X = (CX)^T (CX) = Y^T Y, and X^T A Y = X^T C^T C Y = (CX)^T (CY) = W^T Z. What is its geometric meaning?
Also X^T A X = (A^T X)^T X = X^T (AX) = X^T Y.
Different nonsingular C's represent different inner products, and different inner products give different geometries.
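A brief R sketch of the inner-product view, assuming A is positive definite so that a factor C with A = C^T C exists (here C comes from the Cholesky factorization; the matrix is a made-up example): x^T A x equals the squared Euclidean length of Cx.

A <- matrix(c(4, 1, 1, 3), nrow = 2)    # hypothetical positive definite matrix
C <- chol(A)                            # upper-triangular C with t(C) %*% C = A
x <- c(2, -1)
y <- C %*% x
c(t(x) %*% A %*% x, sum(y^2))           # same number: x'Ax = ||Cx||^2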

Euclidean Distance and Mathematical Distance
The usual human concept of distance is Euclidean distance, in which each coordinate contributes equally to the distance.
Mathematicians, generalizing its three properties, define a distance on any set: 1) d(P,Q) = d(Q,P); 2) d(P,Q) = 0 if and only if P = Q; and 3) d(P,Q) <= d(P,R) + d(R,Q) for all R.

Statistical Distance
Weight coordinates subject to a great deal of variability less heavily than those that are not highly variable.
Which point is nearer to the data set, if the data set were treated as a single point?

Statistical Distance for Uncorrelated Data

Ellipse of Constant Statistical Distance for Uncorrelated Data (figure)

Scattered Plot for Correlated Measurements

Statistical Distance under Rotated Coordinate System

General Statistical Distance

Necessity of Statistical Distance

Mahalanobis Distance
Population version: d(x, μ) = ((x - μ)^T Σ^(-1) (x - μ))^(1/2). Sample version: replace μ and Σ by their sample estimates.
We can robustify it by using robust estimators of the location and scatter functionals.
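A minimal R example of the sample version on simulated data (illustration only; a robust variant could plug in estimates such as those from MASS::cov.rob, which is only suggested here). Note that mahalanobis() returns squared distances.

set.seed(42)
X <- matrix(rnorm(200), ncol = 2)                 # simulated 100 x 2 data
X[, 2] <- X[, 1] + 0.5 * X[, 2]                   # induce correlation
xbar <- colMeans(X)
S <- cov(X)
d2 <- mahalanobis(X, center = xbar, cov = S)      # squared Mahalanobis distances
head(sqrt(d2))                                    # sample Mahalanobis distances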

Classification of Quadratic Form (chart)

Classification of Quadratic Form: Definitions
1. Positive definite: A quadratic form Y = X^T A X is said to be positive definite iff X^T A X > 0 for all x ≠ 0. Then the matrix A is said to be a positive definite matrix.
2. Positive semi-definite: A quadratic form Y = X^T A X is said to be positive semi-definite iff X^T A X >= 0 for all x ≠ 0 and there exists x ≠ 0 such that X^T A X = 0. Then the matrix A is said to be a positive semi-definite matrix.
3. Negative definite: A quadratic form Y = X^T A X is said to be negative definite iff X^T A X < 0 for all x ≠ 0. Then the matrix A is said to be a negative definite matrix.

Classification of Quadratic Form: Definitions
4. Negative semi-definite: A quadratic form Y = X^T A X is said to be negative semi-definite iff X^T A X <= 0 for all x ≠ 0 and there exists x ≠ 0 such that X^T A X = 0. Then the matrix A is said to be a negative semi-definite matrix.
5. Indefinite: Quadratic forms and their associated symmetric matrices need not be definite or semi-definite in any of the above senses. In that case the quadratic form is said to be indefinite; that is, it can be negative, zero or positive depending on the values of x.

Two Theorems on Quadratic Forms
Theorem 1: A quadratic form can always be expressed with respect to a given coordinate system as Y = X^T A X, where A is a unique symmetric matrix.
Theorem 2: Two symmetric matrices A and B represent the same quadratic form if and only if B = P^T A P, where P is a non-singular matrix.

Classification of Quadratic Form: Importance of the Standard Form
From the standard form we can easily classify a quadratic form. X^T A X = a1 y1^2 + a2 y2^2 + ... + an yn^2 is positive definite / positive semi-definite / negative definite / negative semi-definite / indefinite according as ai > 0 for all i / ai > 0 for some i and the rest zero / ai < 0 for all i / ai < 0 for some i and the rest zero / some ai are positive and some are negative.

Classification of Quadratic Form: Importance of the Standard Form
That is why, using a suitable nonsingular transformation (why nonsingular?), we try to transform a general X^T A X into a standard form. If we can find a nonsingular matrix P that does this, we can easily classify the form. We can do it i) by congruent transformation and ii) using eigenvalues and eigenvectors. Method ii) is the one mostly used in MM.

Classification of Quadratic Form: Importance of the Determinant, Eigenvalues and Diagonal Elements
1. Positive definite: (a) A quadratic form is positive definite iff all the nested (leading) principal minors of A are positive. Evidently a matrix A is positive definite only if det(A) > 0. (b) A quadratic form Y = X^T A X is positive definite iff all the eigenvalues of A are positive.
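A quick R check of criteria (a) and (b) on a hypothetical symmetric matrix (not one of the slides' examples): all leading principal minors and all eigenvalues are positive, so the matrix is positive definite.

A <- matrix(c(2, -1, 0,
              -1, 2, -1,
              0, -1, 2), nrow = 3, byrow = TRUE)            # hypothetical symmetric matrix
sapply(1:3, function(k) det(A[1:k, 1:k, drop = FALSE]))     # leading principal minors: 2, 3, 4
eigen(A, symmetric = TRUE)$values                           # all eigenvalues > 0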

Classification of Quadratic Form: Importance of the Determinant, Eigenvalues and Diagonal Elements
2. Positive semi-definite: (a) A quadratic form is positive semi-definite iff all the principal minors of A (not just the leading ones) are non-negative. (b) A quadratic form Y = X^T A X is positive semi-definite iff at least one eigenvalue of A is zero while the remaining eigenvalues are positive.

Continued
3. Negative definite: (a) A quadratic form is negative definite iff the nested (leading) principal minors of A alternate in sign, the k-th leading principal minor having the sign of (-1)^k. Evidently a matrix A is negative definite only if (-1)^n det(A) > 0; det(A) is negative or positive depending on the order n of A. (b) A quadratic form Y = X^T A X is negative definite iff all the eigenvalues of A are negative.

Continued
4. Negative semi-definite: (a) A quadratic form is negative semi-definite iff (-1)^k times every principal minor of A of order k is non-negative. Evidently a matrix A is negative semi-definite only if (-1)^n det(A) >= 0, that is, det(A) >= 0 when n is even and det(A) <= 0 when n is odd. (b) A quadratic form is negative semi-definite iff at least one eigenvalue of A is zero while the remaining eigenvalues are negative.

Theorem on Quadratic Forms (Congruent Transformation)
If Y = X^T A X is a real quadratic form in n variables x1, x2, ..., xn of rank r, i.e. ρ(A) = r, then there exists a non-singular matrix P of order n such that x = Pz converts Y into the canonical form Y = λ1 z1^2 + λ2 z2^2 + ... + λr zr^2, where λ1, λ2, ..., λr are all different from zero. That implies the number of non-zero terms in the canonical form equals the rank of A.

Grammian (Gram) Matrix
Definition: If A is an m×n matrix, then the matrix S = A^T A is called the Grammian matrix of A; it is a symmetric n×n matrix.
Properties:
Every positive definite or positive semi-definite matrix can be represented as a Grammian matrix.
The Grammian matrix A^T A is positive definite or positive semi-definite according as the rank of A is equal to or less than the number of columns of A.
If A^T A = 0 then A = 0.

A(x) = (Ax) = (x) = (x) What are eigenvalues? Given a matrix, A, x is the eigenvector and  is the corresponding eigenvalue if Ax = x A must be square and the determinant of A -  I must be equal to zero Ax - x = 0 ! (A - I) x = 0 Trivial solution is if x = 0 The non trivial solution occurs when det(A - I) = 0 Are eigenvectors are unique? If x is an eigenvector, then x is also an eigenvector and  is an eigenvalue of A, A(x) = (Ax) = (x) = (x)

Calculating the Eigenvectors/Eigenvalues
Expand det(A - λI) = 0. For a 2×2 matrix this is a simple quadratic equation in λ with two solutions (possibly complex). This "characteristic equation" is solved for λ; the eigenvectors x are then obtained from (A - λI)x = 0.

Eigenvalue Example
Consider the 2×2 matrix shown on the slide. Its eigenvalues are λ = 0 and λ = 5, and the corresponding eigenvectors can be computed from (A - λI)x = 0: for λ = 0, one possible solution is x = (2, -1); for λ = 5, one possible solution is x = (1, 2).
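The slide's matrix is an image that did not survive in this transcript; the matrix below is an assumption that reproduces the stated eigenvalues 0 and 5 and eigenvectors (2, -1) and (1, 2), used only to illustrate the calculation in R.

A <- matrix(c(1, 2, 2, 4), nrow = 2)   # assumed example matrix, not confirmed by the slide
e <- eigen(A)
e$values                               # 5 and 0
e$vectors                              # columns proportional to (1, 2) and (2, -1)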

Geometric Interpretation of Eigen Roots and Vectors
We know from the definition of eigen roots and vectors that Ax = λx (**), where A is an m×m matrix, x is an m-vector and λ is a scalar. From the right side of (**) we see that the vector is multiplied by a scalar, so x and λx lie on the same line. The left side of (**) shows the effect of multiplying the vector x by the matrix A (a matrix operator); in general a matrix operator may change both the direction and the magnitude of a vector.

Geometric Interpretation of Eigen Roots and Vectors
Hence our goal is to find those vectors that change only in magnitude and remain on the same line after matrix multiplication. Now the question arises: do these eigenvectors, along with their respective changes in magnitude, characterize the matrix? The answer lies in the DECOMPOSITION THEOREMS.

Geometric Interpretation of Eigen Roots and Vectors (figure: the action of A on vectors x1 and x2)

More to Notice

Properties of Eigenvalues and Eigenvectors
If B = CAC^(-1), where A, B and C are all n×n, then A and B have the same eigen roots; if x is an eigenvector of A, then Cx is an eigenvector of B.
The eigen roots of A and A^T are the same.
An eigenvector x ≠ 0 cannot be associated with more than one eigen root.
The eigenvectors of a matrix A are linearly independent if they correspond to distinct roots.
Let A be a square matrix of order m and suppose all its roots are distinct. Then A is similar to a diagonal matrix Λ, i.e. P^(-1)AP = Λ.
Eigen roots and vectors are all real for any real symmetric matrix A.
If λi and λj are two distinct roots of a real symmetric matrix A, then the corresponding eigenvectors xi and xj are orthogonal.

Properties of Eigenvalues and Eigenvectors
If λ1, λ2, ..., λm are the eigen roots of the non-singular matrix A, then λ1^(-1), λ2^(-1), ..., λm^(-1) are the eigen roots of A^(-1).
Let A, B be two square matrices of order m. Then the eigen roots of AB are exactly the eigen roots of BA.
Let A, B be respectively m×n and n×m matrices, where m <= n. Then the eigen roots of the n×n matrix BA consist of n - m zeros together with the m eigen roots of the m×m matrix AB.

Properties of Eigenvalues and Eigenvectors
Let A be a square matrix of order m with eigen roots λ1, λ2, ..., λm; then det(A) = λ1 λ2 ... λm.
Let A be an m×m matrix with eigen roots λ1, λ2, ..., λm; then tr(A) = tr(Λ) = λ1 + λ2 + ... + λm.
If A has eigen roots λ1, λ2, ..., λm, then A - kI has eigen roots λ1 - k, λ2 - k, ..., λm - k, and kA has eigen roots kλ1, kλ2, ..., kλm, where k is a scalar.
If A is an orthogonal matrix, all its eigen roots have absolute value 1.
Let A be a square matrix of order m and suppose A is idempotent. Then its eigen roots are either 0 or 1.

Eigen/Diagonal Decomposition
Let S be a square matrix with m linearly independent eigenvectors (a "non-defective" matrix).
Theorem: there exists an eigen decomposition S = UΛU^(-1) (cf. the matrix diagonalization theorem), where the columns of U are eigenvectors of S and the diagonal elements of the diagonal matrix Λ are the eigenvalues of S. The decomposition is unique for distinct eigenvalues.

Diagonal Decomposition: Why/How
Let U have the eigenvectors as columns, U = [u1 ... um]. Then SU = S[u1 ... um] = [λ1 u1 ... λm um] = UΛ.
Thus SU = UΛ, or U^(-1)SU = Λ, and S = UΛU^(-1).
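A short R verification of SU = UΛ and S = UΛU^(-1), using an arbitrary made-up matrix with distinct eigenvalues.

S <- matrix(c(2, 1, 0, 3), nrow = 2)     # hypothetical matrix, eigenvalues 2 and 3
e <- eigen(S)
U <- e$vectors
L <- diag(e$values)
round(S %*% U - U %*% L, 10)             # SU = U Lambda (zero matrix)
round(U %*% L %*% solve(U) - S, 10)      # S = U Lambda U^{-1} (zero matrix)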

Diagonal Decomposition: Example
Recall the matrix S from the previous slides; its eigenvectors form the columns of U. Recalling that UU^(-1) = I and inverting U, we obtain S = UΛU^(-1).

Example Continued
Let's divide the columns of U (and correspondingly multiply the rows of U^(-1)) by their lengths, so that the eigenvectors become unit vectors. Then S = QΛQ^T with Q^(-1) = Q^T. Why? ...

Symmetric Eigen Decomposition
If S is a symmetric matrix:
Theorem: there exists a (unique) eigen decomposition S = QΛQ^T, where Q is orthogonal (Q^(-1) = Q^T), the columns of Q are normalized, mutually orthogonal eigenvectors, and everything is real.

Spectral Decomposition Theorem
If A is a symmetric m×m matrix with λi and ei, i = 1, ..., m, being its m eigenvalue and eigenvector pairs, then A = λ1 e1 e1^T + λ2 e2 e2^T + ... + λm em em^T.
This is also called the eigen (spectral) decomposition theorem: any symmetric matrix can be reconstructed from its eigenvalues and eigenvectors.
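A brief R check of the theorem on a made-up symmetric matrix: summing the rank-one terms λi ei ei^T rebuilds A.

A <- matrix(c(5, 2, 2, 3), nrow = 2)                  # hypothetical symmetric matrix
e <- eigen(A, symmetric = TRUE)
recon <- Reduce(`+`, lapply(seq_along(e$values), function(i)
  e$values[i] * tcrossprod(e$vectors[, i])))          # sum of lambda_i e_i e_i'
round(recon - A, 10)                                  # zero matrix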

Example for Spectral Decomposition
Let A be the symmetric, positive definite matrix given on the slide. Computing the eigenvectors for the corresponding eigenvalues, we can consequently write A as λ1 e1 e1^T + λ2 e2 e2^T.

The Square Root of a Matrix
The spectral decomposition allows us to express a square matrix in terms of its eigenvalues and eigenvectors, and this expression enables us to conveniently create a square root matrix.
Let A be a p×p positive definite matrix with spectral decomposition A = PΛP', where P = [e1 e2 e3 ... ep] with P'P = PP' = I and Λ = diag(λi).

The Square Root of a Matrix
Let A^(1/2) = PΛ^(1/2)P'. This implies (PΛ^(1/2)P')(PΛ^(1/2)P') = PΛ^(1/2)Λ^(1/2)P' = PΛP' = A.
The matrix A^(1/2) = PΛ^(1/2)P' is called the square root of A.
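A compact R sketch of the square-root construction, on a made-up positive definite matrix: A^(1/2) = P Λ^(1/2) P', and squaring it recovers A.

A <- matrix(c(4, 1, 1, 3), nrow = 2)           # hypothetical p.d. matrix
e <- eigen(A, symmetric = TRUE)
P <- e$vectors
Ahalf <- P %*% diag(sqrt(e$values)) %*% t(P)   # P Lambda^{1/2} P'
round(Ahalf %*% Ahalf - A, 10)                 # zero matrix: (A^{1/2})^2 = A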

Matrix Square Root Properties
The square root of A has the following properties (prove them):

Physical Interpretation of SPD (Spectral Decomposition)
Suppose x^T A x = c^2 with A positive definite. For p = 2, all x that satisfy this equation form an ellipse: c^2 = λ1 (x^T e1)^2 + λ2 (x^T e2)^2 (using the spectral decomposition of the p.d. matrix A). What will be the case if we replace A by A^(-1)?
Var(ei^T X) = ei^T Var(X) ei = λi, the i-th eigenvalue of Var(X).
All points on the same ellipse are at the same statistical distance. Let x1 = c λ1^(-1/2) e1 and x2 = c λ2^(-1/2) e2. Both satisfy the equation and lie in the directions of the eigenvectors; the length of x1 is c λ1^(-1/2), so ||x|| is inversely proportional to the square root of the corresponding eigenvalue of A.

Matrix Inequalities and Maximization
Extended Cauchy-Schwarz inequality: let b and d be any two p×1 vectors and B a p×p positive definite matrix. Then (b'd)^2 <= (b'Bb)(d'B^(-1)d), with equality iff b = cB^(-1)d (or d = cBb) for some constant c.
Maximization lemma: let d be a given p×1 vector and B a p×p positive definite matrix. Then for an arbitrary nonzero vector x, (x'd)^2 / (x'Bx) <= d'B^(-1)d, with the maximum attained when x = cB^(-1)d for any constant c ≠ 0.

Matrix Inequalities and Maximization
Maximization of quadratic forms for points on the unit sphere: let B be a p×p positive definite matrix with eigenvalues λ1 >= λ2 >= ... >= λp and associated eigenvectors e1, e2, ..., ep. Then the maximum of x'Bx / x'x over x ≠ 0 is λ1 (attained at x = e1), the minimum is λp (attained at x = ep), and the maximum over x perpendicular to e1, ..., ek is λ(k+1) (attained at x = e(k+1)).
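A numerical R check of the first conclusion (the maximum of x'Bx/x'x is λ1, attained at e1), using a made-up positive definite B and random vectors.

set.seed(7)
B <- crossprod(matrix(rnorm(9), 3, 3)) + diag(3)   # random positive definite matrix
e <- eigen(B, symmetric = TRUE)
ratio <- function(x) drop(t(x) %*% B %*% x / crossprod(x))
max(replicate(10000, ratio(rnorm(3))))             # stays below, and approaches, lambda_1
e$values[1]                                        # lambda_1
ratio(e$vectors[, 1])                              # equals lambda_1 exactly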

Calculation in R
t <- sqrt(2)
x <- c(3.0046, t, t, 16.9967)
A <- matrix(x, nrow = 2)
eigen(A)

> x
[1]  3.004600  1.414214  1.414214 16.996700
> A
         [,1]      [,2]
[1,] 3.004600  1.414214
[2,] 1.414214 16.996700
> eigen(A)
$values
[1] 17.138207  2.863093

$vectors
           [,1]        [,2]
[1,] 0.09956317 -0.99503124
[2,] 0.99503124  0.09956317

Thank you