ENGG2013 Unit 19: The principal axes theorem. Mar 2011.
Outline
- Special matrices: symmetric, skew-symmetric, orthogonal
- Principal axes theorem
- Application to conic sections
kshum ENGG2013
Diagonalizable? A square matrix M is called diagonalizable if we can find an invertible matrix, say P, such that the product P^{-1} M P is a diagonal matrix. Some matrices cannot be diagonalized.
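A quick numerical check of non-diagonalizability (illustrative example, not from the slides): the shear matrix below has the eigenvalue 1 repeated, but only one independent eigenvector, so no invertible P can diagonalize it.

```python
import numpy as np

# The shear matrix [[1, 1], [0, 1]] has eigenvalue 1 with multiplicity 2,
# but its eigenvectors span only a 1-dimensional space.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(M)

# Rank of the eigenvector matrix < 2 means the eigenvectors are dependent,
# so M is not diagonalizable.
rank = np.linalg.matrix_rank(eigvecs)
print(eigvals)   # both eigenvalues equal 1
print(rank)      # 1
```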
Theorem: An n×n matrix M is diagonalizable if and only if we can find n linearly independent eigenvectors of M.
Proof: For concreteness, let's just consider the 3×3 case. By definition, M is diagonalizable when
\begin{bmatrix} x_1 & x_2 & x_3\\ y_1 & y_2 & y_3\\ z_1 & z_2 & z_3 \end{bmatrix}^{-1}\begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix} \begin{bmatrix} x_1 & x_2 & x_3\\ y_1 & y_2 & y_3\\ z_1 & z_2 & z_3 \end{bmatrix}=\begin{bmatrix} \lambda_1 & & \\ & \lambda_2& \\ & & \lambda_3 \end{bmatrix}
The three columns of the first matrix are linearly independent because the matrix is invertible.
Proof continued: Multiplying both sides on the left by the matrix of columns, the equation above is equivalent to
\begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix} \begin{bmatrix} x_1 & x_2 & x_3\\ y_1 & y_2 & y_3\\ z_1 & z_2 & z_3 \end{bmatrix}=\begin{bmatrix} x_1 & x_2 & x_3\\ y_1 & y_2 & y_3\\ z_1 & z_2 & z_3 \end{bmatrix}\begin{bmatrix} \lambda_1 & & \\ & \lambda_2& \\ & & \lambda_3 \end{bmatrix}
Comparing the two sides column by column, this says exactly that each column is an eigenvector of M.
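The factorization in the proof can be checked numerically; a small sketch with an illustrative 2×2 matrix (not from the slides) whose eigenvalues are 5 and 2:

```python
import numpy as np

# Verify P^{-1} M P = D, where the columns of P are eigenvectors of M.
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0,  1.0],    # first column: eigenvector for lambda = 5
              [1.0, -2.0]])   # second column: eigenvector for lambda = 2
D = np.linalg.inv(P) @ M @ P
print(np.round(D, 10))   # diag(5, 2)
```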
Complex eigenvalues: There are some matrices whose eigenvalues are complex numbers. For example, the matrix which represents rotation by 45 degrees counter-clockwise,
\frac{\sqrt{2}}{2}\begin{bmatrix} 1 & -1\\ 1 & 1 \end{bmatrix},
has eigenvalues \frac{\sqrt{2}}{2}(1 \pm i).
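A quick numpy check of this claim: the eigenvalues of the 45-degree rotation matrix come out complex, with absolute value 1.

```python
import numpy as np

# Rotation by 45 degrees counter-clockwise; its eigenvalues are
# e^{+i pi/4} and e^{-i pi/4}, i.e. (sqrt(2)/2)(1 +/- i).
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eigvals = np.linalg.eigvals(R)
print(eigvals)
```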
Theorem: If an n×n matrix M has n distinct eigenvalues, then M is diagonalizable. In the 3×3 case, the columns of the diagonalizing matrix satisfy
\begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix}\begin{bmatrix} x_k\\ y_k\\ z_k \end{bmatrix}= \lambda_k \begin{bmatrix} x_k\\ y_k\\ z_k \end{bmatrix} \text{ for }k=1,2,3.
The converse is false: there are diagonalizable matrices with repeated eigenvalues.
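The failure of the converse is easy to see numerically: the identity matrix has the repeated eigenvalue 1, yet it is already diagonal, hence diagonalizable.

```python
import numpy as np

# The identity matrix: repeated eigenvalues, but a full set of
# independent eigenvectors (every nonzero vector is an eigenvector).
I = np.eye(2)
eigvals, eigvecs = np.linalg.eig(I)
rank = np.linalg.matrix_rank(eigvecs)
print(eigvals)   # [1. 1.] -- repeated
print(rank)      # 2 -- two independent eigenvectors
```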
Matrices in special form. Symmetric: A^T = A. Skew-symmetric: A^T = −A. Orthogonal: A^T = A^{-1}, or equivalently A^T A = I. [The slide pictures three examples: a matrix that is both symmetric and orthogonal, a symmetric matrix, and a skew-symmetric matrix.]
Orthogonal matrix: A matrix M is called orthogonal if M^T M = I, i.e.
- each column has norm 1 (the dot product of a column with itself equals 1), and
- any two distinct columns are orthogonal (their dot product equals 0).
\begin{bmatrix} m_{11} & m_{21} & m_{31} & \cdots & m_{n1}\\ m_{12} & m_{22} & m_{32} & \cdots & m_{n2}\\ m_{13} & m_{23} & m_{33} & \cdots & m_{n3}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ m_{1n} & m_{2n} & m_{3n} & \cdots & m_{nn} \end{bmatrix}\begin{bmatrix} m_{11} & m_{12} & m_{13} & \cdots & m_{1n}\\ m_{21} & m_{22} & m_{23} & \cdots & m_{2n}\\ m_{31} & m_{32} & m_{33} & \cdots & m_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ m_{n1} & m_{n2} & m_{n3} & \cdots & m_{nn} \end{bmatrix} = \mathbf{I}
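These two conditions can be checked entry by entry with dot products; a short numpy sketch using a rotation matrix (any angle works) as the orthogonal matrix:

```python
import numpy as np

# A rotation matrix is orthogonal: columns have norm 1 and distinct
# columns have dot product 0, which is exactly M^T M = I.
theta = 0.3   # an arbitrary angle, chosen for illustration
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(M[:, 0] @ M[:, 0])                 # dot product of a column with itself: 1
print(M[:, 0] @ M[:, 1])                 # dot product of distinct columns: 0
print(np.allclose(M.T @ M, np.eye(2)))   # True
```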
Principal axes theorem: Given any n×n symmetric matrix A, we have:
- The eigenvalues of A are real.
- A is diagonalizable.
- We can pick n mutually perpendicular (i.e., orthogonal) eigenvectors.
Proof omitted. http://en.wikipedia.org/wiki/Principal_axis_theorem
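All three conclusions can be observed numerically. numpy's `eigh` routine is specialized for symmetric matrices: it returns real eigenvalues and an orthogonal matrix of eigenvectors.

```python
import numpy as np

# A symmetric matrix: eigh returns real eigenvalues (ascending) and
# orthonormal eigenvectors in the columns of Q, so A = Q D Q^T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, Q = np.linalg.eigh(A)
print(eigvals)                                        # real: [1. 3.]
print(np.allclose(Q.T @ Q, np.eye(2)))                # True: Q is orthogonal
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))     # True: A = Q D Q^T
```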
Two sufficient conditions for diagonalizability: (1) having distinct eigenvalues; (2) being symmetric, skew-symmetric, or orthogonal. Either condition implies that the matrix is diagonalizable.
Similarity. Definition: We say that two n×n matrices A and B are similar if we can find an invertible matrix S such that B = S^{-1} A S.
Example:
\begin{bmatrix} -1 & 0\\ 2/3 & 1 \end{bmatrix} \text{ and } \begin{bmatrix} -1 & 0\\ 0 & 1 \end{bmatrix} \text{ are similar, because } \begin{bmatrix} 3 & 0\\ -1 & 1 \end{bmatrix}^{-1}\begin{bmatrix} -1 & 0\\ 2/3 & 1 \end{bmatrix}\begin{bmatrix} 3 & 0\\ -1 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0\\ 0 & 1 \end{bmatrix}.
The notion of diagonalization can be phrased in terms of similarity: a matrix A is diagonalizable if and only if A is similar to a diagonal matrix.
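The similarity computation can be verified directly (the matrices below are my reading of the fragments on this slide; the similarity identity itself holds regardless):

```python
import numpy as np

# Check S^{-1} A S = B for S = [[3,0],[-1,1]] and A = [[-1,0],[2/3,1]].
S = np.array([[3.0, 0.0],
              [-1.0, 1.0]])
A = np.array([[-1.0, 0.0],
              [2/3,  1.0]])
B = np.linalg.inv(S) @ A @ S
print(np.round(B, 10))   # [[-1. 0.] [0. 1.]]
```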
More examples:
\begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{bmatrix} \text{ is similar to } \begin{bmatrix} 1 & 3 & 2\\ 7 & 9 & 8\\ 4 & 6 & 5 \end{bmatrix} \text{ because } \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix}^{-1}\begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 2\\ 7 & 9 & 8\\ 4 & 6 & 5 \end{bmatrix}.
Application to conic sections. Ellipse: x^2/a + y^2/b = 1 (a, b > 0). Hyperbola: x^2/a − y^2/b = 1 (a, b > 0). Parabola: y = ax^2.
Application to conic sections. Is x^2 − 4xy + 2y^2 = 1 an ellipse, or a hyperbola?
- Rewrite using a symmetric matrix:
\begin{bmatrix} x & y \end{bmatrix}\begin{bmatrix} 1 & -2\\ -2 & 2 \end{bmatrix}\begin{bmatrix} x\\ y \end{bmatrix} = 1
- Find the characteristic polynomial: \lambda^2 - 3\lambda - 2.
- Solve for the eigenvalues: \lambda = (3 \pm \sqrt{17})/2.
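The classification comes down to the signs of the eigenvalues; a quick numpy check that one is negative and one is positive (so the curve is a hyperbola, as the next slide concludes):

```python
import numpy as np

# Eigenvalues of the symmetric matrix of x^2 - 4xy + 2y^2:
# (3 - sqrt(17))/2 < 0 < (3 + sqrt(17))/2, so the conic is a hyperbola.
A = np.array([[1.0, -2.0],
              [-2.0, 2.0]])
eigvals = np.linalg.eigvalsh(A)   # ascending order
print(eigvals)
```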
Application to conic sections (continued).
- Diagonalize: \mathbf{P}^{-1}\begin{bmatrix} 1 & -2\\ -2 & 2 \end{bmatrix}\mathbf{P} = \mathbf{D}, with P orthogonal and D diagonal.
- Change coordinates: setting \begin{bmatrix} u\\ v \end{bmatrix} = \mathbf{P}^{-1}\begin{bmatrix} x\\ y \end{bmatrix},
\begin{bmatrix} x & y \end{bmatrix}\begin{bmatrix} 1 & -2\\ -2 & 2 \end{bmatrix}\begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} u & v \end{bmatrix}\mathbf{D}\begin{bmatrix} u\\ v \end{bmatrix} = 1.
One eigenvalue is positive and the other is negative, so the curve is a hyperbola.
[Figure: plot of the hyperbola x^2 − 4xy + 2y^2 = 1.]
2x^2 + 2xy + 2y^2 = 1
- Rewrite using a symmetric matrix:
\begin{bmatrix} x & y \end{bmatrix}\begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}\begin{bmatrix} x\\ y \end{bmatrix} = 1
- Compute the characteristic polynomial: (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3.
- Find the eigenvalues: \lambda = 1 and \lambda = 3.
2x^2 + 2xy + 2y^2 = 1 (continued)
- Diagonalize: \mathbf{P}^{-1}\begin{bmatrix} 2 & 1\\ 1 & 2 \end{bmatrix}\mathbf{P} = \begin{bmatrix} 1 & 0\\ 0 & 3 \end{bmatrix}, where the columns of P are eigenvectors, normalized to norm 1, e.g. \mathbf{P} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1\\ -1 & 1 \end{bmatrix}.
- Change of variables: with \begin{bmatrix} u\\ v \end{bmatrix} = \mathbf{P}^{-1}\begin{bmatrix} x\\ y \end{bmatrix}, the equation becomes u^2 + 3v^2 = 1, an ellipse.
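The diagonalization above can be reproduced in one call: `eigh` returns the eigenvalues in ascending order together with an orthogonal P whose columns are unit eigenvectors.

```python
import numpy as np

# Diagonalize the symmetric matrix of 2x^2 + 2xy + 2y^2 = 1.
# Both eigenvalues are positive, so the conic is an ellipse.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eigh(A)   # columns of P: orthonormal eigenvectors
print(eigvals)                                        # [1. 3.]
print(np.allclose(P.T @ A @ P, np.diag(eigvals)))     # True: P^{-1} A P = D
```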
[Figure: the ellipse 2x^2 + 2xy + 2y^2 = 1, drawn with its principal axes labelled u and v.]