
Matrices CHAPTER 8.9 ~ 8.16

Contents
8.9 Power of Matrices
8.10 Orthogonal Matrices
8.11 Approximation of Eigenvalues
8.12 Diagonalization
8.13 Cryptography
8.14 An Error-Correcting Code
8.15 Method of Least Squares
8.16 Discrete Compartmental Models

8.9 Power of Matrices

Introduction: It is sometimes important to be able to compute quickly a power A^m, m a positive integer, of an n × n matrix A:
A^2 = AA, A^3 = AAA = A^2 A, A^4 = AAAA = A^3 A = A^2 A^2, and so on.

Ch _4  If the characteristic equation is then (1) A matrix A satisfies its own characteristic equation. THEOREM 8.26 Cayley-Hamilton Theorem

Matrix of Order 2: Suppose A is a 2 × 2 matrix whose characteristic equation is λ^2 − λ − 2 = 0. From Theorem 8.26, A^2 − A − 2I = 0, or
A^2 = A + 2I   (2)
and also
A^3 = A^2 + 2A = 2I + 3A
A^4 = A^3 + 2A^2 = 6I + 5A
A^5 = 10I + 11A
A^6 = 22I + 21A   (3)

Ch _6  From the above discussions, we can write A m = c 0 I + c 1 A and m = c 0 + c 1 (5) Using 1 = −1, 2 = −2, we have then (6)

Matrix of Order n: Similar to the previous discussion, we have
A^m = c_0 I + c_1 A + c_2 A^2 + ... + c_{n-1} A^{n-1},
where the coefficients c_k, k = 0, 1, ..., n−1, depend on m.

Example 1: Compute A^m for the given 3 × 3 matrix A.
Solution: The characteristic equation is λ^3 − 2λ^2 − λ + 2 = 0, so λ_1 = −1, λ_2 = 1, λ_3 = 2. Thus
A^m = c_0 I + c_1 A + c_2 A^2,   λ^m = c_0 + c_1 λ + c_2 λ^2.   (7)
In turn, letting λ = −1, 1, 2 we obtain
(−1)^m = c_0 − c_1 + c_2
1 = c_0 + c_1 + c_2   (8)
2^m = c_0 + 2c_1 + 4c_2

Solving (8) gives c_0, c_1, c_2 in terms of m. Since A^m = c_0 I + c_1 A + c_2 A^2, substituting these coefficients evaluates any power, e.g., m = 10.
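The coefficients in (8) can be checked numerically, since (8) is a Vandermonde system in c_0, c_1, c_2. A minimal sketch, using a hypothetical companion matrix with the same characteristic polynomial because the matrix of Example 1 is not reproduced in this transcript:

```python
import numpy as np

# Eigenvalues from the characteristic equation lambda^3 - 2 lambda^2 - lambda + 2 = 0.
lams = np.array([-1.0, 1.0, 2.0])
m = 10

# Solve the Vandermonde system (8): lambda^m = c0 + c1*lambda + c2*lambda^2.
V = np.vander(lams, 3, increasing=True)      # rows [1, lambda, lambda^2]
c = np.linalg.solve(V, lams**m)

# Hypothetical stand-in: the companion matrix of the same polynomial,
# which also has eigenvalues -1, 1, 2 (the book's matrix A may differ).
A = np.array([[0.0, 0.0, -2.0],
              [1.0, 0.0,  1.0],
              [0.0, 1.0,  2.0]])

Am = c[0]*np.eye(3) + c[1]*A + c[2]*(A @ A)
assert np.allclose(Am, np.linalg.matrix_power(A, m))
```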

Finding the Inverse: For the order-2 matrix above, A^2 − A − 2I = 0, so I = (1/2)A^2 − (1/2)A. Multiplying both sides by A^{-1} then gives
A^{-1} = (1/2)A − (1/2)I.
Thus the inverse is obtained without elimination.
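A quick numerical check, assuming a hypothetical 2 × 2 matrix with trace 1 and determinant −2 so that its characteristic equation is λ^2 − λ − 2 = 0 (the matrix used in the text may differ):

```python
import numpy as np

# Hypothetical matrix with characteristic equation lambda^2 - lambda - 2 = 0.
A = np.array([[1.0, 2.0],
              [1.0, 0.0]])

A_inv = 0.5*A - 0.5*np.eye(2)        # A^{-1} = (1/2)A - (1/2)I
assert np.allclose(A @ A_inv, np.eye(2))
```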

8.10 Orthogonal Matrices

DEFINITION 8.14 Symmetric Matrix
An n × n matrix A is symmetric if A = A^T, where A^T is the transpose of A.

THEOREM 8.27 Real Eigenvalues
Let A be a symmetric matrix with real entries. Then the eigenvalues of A are real.

Proof: Let AK = λK.   (1)
Since A is real, taking complex conjugates gives
A K̄ = λ̄ K̄.   (2)

Take the transpose of (2), use the fact that A is symmetric, and multiply on the right by K:
K̄^T A K = λ̄ K̄^T K.   (3)
Now from AK = λK we have
K̄^T A K = λ K̄^T K.   (4)
Subtracting (3) from (4) gives
0 = (λ − λ̄) K̄^T K.   (5)

Since K̄^T K = |k_1|^2 + |k_2|^2 + ... + |k_n|^2 > 0 for K ≠ 0, equation (5) forces λ = λ̄; that is, λ is real.

Inner Product: The inner product of two vectors is
x · y = x_1 y_1 + x_2 y_2 + ... + x_n y_n.   (6)
Similarly, for n × 1 column matrices,
X · Y = X^T Y = x_1 y_1 + x_2 y_2 + ... + x_n y_n.   (7)

THEOREM 8.28 Orthogonal Eigenvectors
Let A be an n × n symmetric matrix. Then eigenvectors corresponding to distinct (different) eigenvalues are orthogonal.

Proof: Let λ_1, λ_2 be two distinct eigenvalues with corresponding eigenvectors K_1 and K_2. Since
AK_1 = λ_1 K_1,   AK_2 = λ_2 K_2,   (8)
we have (AK_1)^T = K_1^T A^T = K_1^T A = λ_1 K_1^T.

Multiplying this on the right by K_2,
K_1^T A K_2 = λ_1 K_1^T K_2.   (9)
Since AK_2 = λ_2 K_2,
K_1^T A K_2 = λ_2 K_1^T K_2.   (10)
Subtracting (10) from (9) gives 0 = λ_1 K_1^T K_2 − λ_2 K_1^T K_2, or 0 = (λ_1 − λ_2) K_1^T K_2. Since λ_1 ≠ λ_2, it follows that K_1^T K_2 = 0.

Example 1: The given 3 × 3 symmetric matrix has eigenvalues λ = 0, 1, −2, with corresponding eigenvectors K_1, K_2, K_3.

Example 1 (2): We find that the eigenvectors are mutually orthogonal: K_1 · K_2 = K_1 · K_3 = K_2 · K_3 = 0, as Theorem 8.28 guarantees.

Ch _20  A is orthogonal if A T A = I. An n × n nonsingular matrix A is orthogonal if A -1 = A T DEFINITION 8.15 Orthogonal Matrix

Example 2: (a) The identity I is an orthogonal matrix, since I^T I = II = I. (b) For the given matrix A one can verify that A^T A = I, so A is orthogonal.

THEOREM 8.29 Criterion for an Orthogonal Matrix
An n × n matrix A is orthogonal if and only if its columns X_1, X_2, ..., X_n form an orthonormal set.

Partial Proof: Write A = (X_1, X_2, ..., X_n) in terms of its columns. If A is orthogonal, then A^T A = I, and the (i, j) entry of A^T A is X_i^T X_j.

It follows that
X_i^T X_j = 0, i ≠ j, i, j = 1, 2, ..., n
X_i^T X_i = 1, i = 1, 2, ..., n.
Thus the columns X_i form an orthonormal set.

Ch _24  Consider the matrix in example 2

Ch _25

Ch _26 And are unit vectors:

Example 3: In Example 1 the eigenvectors K_1, K_2, K_3 are mutually orthogonal but not unit vectors, since their norms ||K_i|| = sqrt(K_i^T K_i) are not equal to 1.

Example 3 (2): Dividing each K_i by its norm, an orthonormal set is obtained: K_1/||K_1||, K_2/||K_2||, K_3/||K_3||.

Example 3 (3): Using these orthonormal vectors as columns, we obtain the orthogonal matrix P. Verify that P^T = P^{-1}.
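A numerical sketch of this construction, assuming a hypothetical symmetric matrix in place of the one from Example 1; numpy's eigh returns eigenvectors that are already orthonormal:

```python
import numpy as np

# Hypothetical symmetric matrix; eigh returns orthonormal eigenvectors.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

lams, P = np.linalg.eigh(A)               # columns of P: normalized eigenvectors
assert np.allclose(P.T @ P, np.eye(3))    # P^T = P^{-1}, so P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag(lams))
```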

Example 4: For the given symmetric matrix we find λ = −9, −9, 9. As in Sec 8.8, we solve (A − λI)K = 0 for each eigenvalue.

Example 4 (2): From the reduced matrix for λ = −9 we can read off two linearly independent eigenvectors K_1 and K_2. Now for λ = 9 we obtain a single eigenvector K_3.

Example 4 (3): We find K_3 · K_1 = K_3 · K_2 = 0, but K_1 · K_2 = −4 ≠ 0. Using the Gram–Schmidt process, take V_1 = K_1 and V_2 = K_2 − [(K_2 · V_1)/(V_1 · V_1)] V_1. Now V_1, V_2, K_3 form an orthogonal set, and dividing each by its norm makes them an orthonormal set.

Example 4 (4): Then the matrix P whose columns are these orthonormal vectors is orthogonal.
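A short sketch of the Gram–Schmidt step used here; the vectors K1 and K2 below are hypothetical stand-ins for the example's eigenvectors:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v - sum((v @ u) * u for u in basis)   # subtract projections onto basis
        basis.append(w / np.linalg.norm(w))
    return np.column_stack(basis)

# Hypothetical non-orthogonal eigenvectors for a repeated eigenvalue:
K1 = np.array([1.0, 0.0, 1.0])
K2 = np.array([1.0, 1.0, 0.0])
Q = gram_schmidt([K1, K2])
assert np.allclose(Q.T @ Q, np.eye(2))    # the columns are orthonormal
```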

8.11 Approximation of Eigenvalues

DEFINITION 8.16 Dominant Eigenvalue
Let λ_1, λ_2, ..., λ_n denote the eigenvalues of an n × n matrix A. An eigenvalue λ_k is said to be the dominant eigenvalue of A if |λ_k| > |λ_i| for all i ≠ k. An eigenvector corresponding to λ_k is called a dominant eigenvector of A.

Example 1: (a) The first matrix has eigenvalues of equal absolute value; since |λ_1| = |λ_2|, it follows that there is no dominant eigenvalue. (b) Among the eigenvalues of the second matrix, again none exceeds all the others in absolute value, so this matrix also has no dominant eigenvalue.

Power Method: Look at the sequence
X_1 = AX_0, X_2 = AX_1 = A^2 X_0, ..., X_m = AX_{m-1} = A^m X_0,   (1)
where X_0 is a nonzero n × 1 vector that is an initial guess or approximation, and A has a dominant eigenvalue. Therefore,
X_m = A^m X_0.   (2)

Ch _37  Let us make some further assumptions: | 1 | > | 2 |  …  | n | and the corresponding eigenvectors K 1, K 2, …, K n are linearly independent and can be a basis for R n. Thus (3) here we also assume that c 1  0.  Since AK i = i K i, then AX 0 = c 1 AK 1 + c 2 AK 2 + … + c n AK n becomes (4)

Ch _38  Multiplying (4) by A, (5) (6) Since | 1 | > | i |, i = 2, 3, …, n, as m  , we have (7)

Ch _39  However, the constant multiple of an eigenvector is also an eigenvector, then X m = A m X 0 is an approximation to a dominant eigenvector. Since AK = K, AK  K= K  K then (8) which is called the Rayleigh quotient.

Example 2: For the given matrix A and initial guess X_0, we compute the iterates X_i = AX_{i-1}.

Example 2 (2): Tabulating the iterates X_i for i = 3, 4, 5, 6, 7, it appears that the vectors are approaching scalar multiples of a fixed vector, a dominant eigenvector.

Example 2 (3): Applying the Rayleigh quotient (8) to the last iterate then gives an approximation to the dominant eigenvalue.

Ch _43  The remainder of this section is neglected since it is of less importance.

Scaling

Example 3: Repeat the iterations of Example 2 using scaled-down vectors.
Solution: Starting from X_1 = AX_0, we scale each iterate by dividing it by its entry of largest absolute value.

Example 3 (2): We define each scaled vector in this way and continue in this manner to construct a table of the scaled iterates X_i, i = 3, ..., 7. In contrast to the table in Example 2, it is apparent from this table that the vectors themselves are approaching a fixed vector.
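The same sketch with scaling added: dividing each iterate by its largest entry keeps the numbers bounded without changing the direction of X_m:

```python
import numpy as np

def scaled_power_method(A, x0, m=20):
    """Power method with scaling: divide each iterate by its largest entry."""
    x = x0.astype(float)
    for _ in range(m):
        x = A @ x
        x = x / np.abs(x).max()          # the scaled-down vector
    lam = (A @ x) @ x / (x @ x)          # Rayleigh quotient (8)
    return lam, x
```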

Method of Deflation: The procedure we consider next is a modification of the power method and is called the method of deflation. We limit the discussion to the case where A is a symmetric matrix. Suppose λ_1 and K_1 are the dominant eigenvalue and a corresponding normalized eigenvector (K_1^T K_1 = 1) of a symmetric matrix A. Furthermore, suppose the eigenvalues of A are such that |λ_1| > |λ_2| > |λ_3| ≥ ... ≥ |λ_n|. It can be proved that the matrix
B = A − λ_1 K_1 K_1^T
has eigenvalues 0, λ_2, λ_3, ..., λ_n, so the power method applied to B approximates λ_2 and a corresponding eigenvector.
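A sketch of one deflation step, reusing power_method from the earlier sketch; lam1 and k1 are assumed to come from a previous run, with k1 normalized:

```python
import numpy as np

def deflate(A, lam1, k1):
    """B = A - lam1 * k1 k1^T has eigenvalues 0, lam2, ..., lamn (A symmetric)."""
    return A - lam1 * np.outer(k1, k1)

# Usage: after lam1, x1 = power_method(A, x0), normalize and deflate:
#   B = deflate(A, lam1, x1 / np.linalg.norm(x1))
#   lam2, x2 = power_method(B, x0)
```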

8.12 Diagonalization

Diagonalizable Matrices: If there exists a nonsingular matrix P such that P^{-1}AP = D is diagonal, then A is said to be diagonalizable.

THEOREM 8.30 Sufficient Condition for Diagonalizability
If an n × n matrix A has n linearly independent eigenvectors K_1, K_2, ..., K_n, then A is diagonalizable.

Proof (n = 3 case): Let P = (K_1, K_2, K_3) have the eigenvectors as columns. Then AP = (AK_1, AK_2, AK_3) = (λ_1 K_1, λ_2 K_2, λ_3 K_3) = PD. Since P is nonsingular, P^{-1} exists, and thus P^{-1}AP = D.

THEOREM 8.31 Criterion for Diagonalizability
An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.

THEOREM 8.32 Sufficient Condition for Diagonalizability
If an n × n matrix A has n distinct eigenvalues, it is diagonalizable.

Example 1: Diagonalize the given 2 × 2 matrix.
Solution: The eigenvalues are λ = 1, 4. Using the same process as before, we find corresponding eigenvectors K_1, K_2; then with P = (K_1, K_2) we get P^{-1}AP = D = diag(1, 4).
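A numerical sketch, assuming a hypothetical matrix with the same eigenvalues 1 and 4, since the example's matrix is not reproduced here:

```python
import numpy as np

# Hypothetical matrix with eigenvalues 1 and 4 (trace 5, determinant 4).
A = np.array([[2.0, 1.0],
              [2.0, 3.0]])

lams, P = np.linalg.eig(A)                # columns of P are eigenvectors K_1, K_2
D = np.linalg.inv(P) @ A @ P              # P^{-1} A P
assert np.allclose(D, np.diag(lams))      # D is diagonal with the eigenvalues
```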

Example 2: Consider the given 3 × 3 matrix. We find its eigenvalues and corresponding eigenvectors, form P with the eigenvectors as columns, and compute P^{-1}. Thus P^{-1}AP = D.

Example 3: Consider the given matrix. We have λ = 5, 5. Since we can find only a single linearly independent eigenvector, this matrix cannot be diagonalized.

Example 4: Consider the given matrix. We have λ = −1, 1, 1. For λ = −1 we solve (A + I)K = 0; for λ = 1 we solve (A − I)K = 0 using the Gauss–Jordan method.

Example 4 (2): The repeated eigenvalue λ = 1 yields two linearly independent eigenvectors. Since we have three linearly independent eigenvectors in all, A is diagonalizable: letting P = (K_1, K_2, K_3), we get P^{-1}AP = D.

Orthogonally Diagonalizable: If there exists an orthogonal matrix P that diagonalizes A, then A is said to be orthogonally diagonalizable.

THEOREM 8.33 Criterion for Orthogonal Diagonalizability
An n × n matrix A can be orthogonally diagonalized if and only if A is symmetric.

Partial Proof: Assume an n × n matrix A can be orthogonally diagonalized. Then there exists an orthogonal matrix P such that P^{-1}AP = D, so A = PDP^{-1}. Since P is orthogonal, P^{-1} = P^T, and hence A = PDP^T. Then
A^T = (PDP^T)^T = PD^T P^T = PDP^T = A,
so A is symmetric.

Example 5: Consider the given symmetric matrix. From Example 4 of Sec 8.8 we have its eigenvalues and eigenvectors; however, the eigenvectors found there are not mutually orthogonal.

Example 5 (2): Now redo the computation for the repeated eigenvalue λ = 8. The eigenvector condition is k_1 + k_2 + k_3 = 0. Choosing k_2 = 1, k_3 = 0 gives K_2; choosing k_2 = 0, k_3 = 1 gives K_3. But we may choose differently: k_2 = 1, k_3 = 1 and then k_2 = 1, k_3 = −1.

Example 5 (3): We obtain two entirely different but mutually orthogonal eigenvectors. Together with K_1, they form an orthogonal set.

Example 5 (4): Dividing each vector by its norm, we obtain an orthonormal set.

Example 5 (5): Then P, whose columns are these orthonormal eigenvectors, is orthogonal, and D = P^{-1}AP = P^T AP.

Example 5 (6): This is verified by computing P^T AP and checking that it equals the diagonal matrix of eigenvalues.
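A numerical sketch with a hypothetical symmetric matrix that, like Example 5, has a repeated eigenvalue; eigh produces an orthonormal set of eigenvectors directly, with no separate Gram–Schmidt step:

```python
import numpy as np

# Hypothetical symmetric matrix with a repeated eigenvalue (lambda = 1, 1, 4).
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

lams, P = np.linalg.eigh(A)                     # orthonormal eigenvectors
assert np.allclose(P.T @ P, np.eye(3))          # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag(lams))  # P^T A P = D
```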

Quadratic Forms: An algebraic expression of the form
ax^2 + bxy + cy^2   (4)
is called a quadratic form. If we let X = (x, y)^T and
A = [a b/2; b/2 c],
then (4) can be written as
X^T A X.   (5)
Note: A is symmetric.

Example 6: Identify the conic section whose equation is 2x^2 + 4xy − y^2 = 1.
Solution: From (5) we have
X^T A X = 1,   (6)
where A = [2 2; 2 −1].

Example 6 (2): For A we find the eigenvalues λ_1 = −2 and λ_2 = 3, with corresponding eigenvectors K_1, K_2, which are orthogonal. Moreover, dividing each by its norm gives an orthonormal set.

Example 6 (3): Hence we have the orthogonal matrix P with these unit eigenvectors as columns. If we let X = PX', where X' = (X, Y)^T, then
X^T A X = X'^T (P^T A P) X' = X'^T D X'.   (7)

Example 6 (4): Using (7), equation (6) becomes X'^T D X' = 1, or −2X^2 + 3Y^2 = 1, a hyperbola in the rotated coordinates. See Fig 8.11.

Fig 8.11
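The rotation can be checked numerically; the matrix below is exactly the symmetric matrix of the quadratic form 2x^2 + 4xy − y^2 constructed in (6):

```python
import numpy as np

# Symmetric matrix of the quadratic form 2x^2 + 4xy - y^2.
A = np.array([[2.0,  2.0],
              [2.0, -1.0]])

lams, P = np.linalg.eigh(A)    # orthonormal eigenvectors; P rotates the axes
print(lams)                    # [-2.  3.]  ->  -2X^2 + 3Y^2 = 1, a hyperbola
```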

8.13 Cryptography

Introduction: Cryptography is secret writing, that is, writing in code.
A simple code: Let the letters a, b, c, ..., z be represented by the numbers 1, 2, 3, ..., 26. A sequence of letters can then be written as a sequence of numbers. Arrange these numbers into an m × n matrix M, and select a nonsingular m × m matrix A. The message actually sent is Y = AM; the receiver recovers the plaintext as M = A^{-1}Y.
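A minimal sketch of this scheme. The 2 × 2 encoding matrix A below is a hypothetical choice with determinant 1, so that its inverse is also an integer matrix:

```python
import numpy as np

# Hypothetical encoding matrix with det = 1, so A^{-1} has integer entries.
A = np.array([[1, 2],
              [1, 3]])
A_inv = np.array([[3, -2],
                  [-1, 1]])                  # exact inverse of A

msg = "math"                                 # letters a..z -> numbers 1..26
M = np.array([ord(c) - 96 for c in msg]).reshape(2, 2, order='F')

Y = A @ M                                    # encoded message, Y = AM
M_back = A_inv @ Y                           # decoded message, M = A^{-1}Y
print(''.join(chr(n + 96) for n in M_back.flatten(order='F')))   # -> "math"
```

Choosing det A = ±1 keeps the decoded numbers integral, so they map cleanly back to letters.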

8.14 An Error-Correcting Code

Parity Encoding: Append an extra bit so that the number of ones in the code word is even.

Example 2: Encode (a) W = ( ) and (b) W = ( ).
Solution: (a) The extra bit must be 1 to make the number of ones 4 (even). The code word is then C = ( ). (b) The extra bit must be 0, since the number of ones is already 4 (even). So the encoded word is C = ( ).
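A small sketch of parity encoding and checking; the four-bit word used here is a hypothetical example, not the word of Example 2:

```python
def parity_encode(word):
    """Append a parity bit so the total number of ones is even."""
    return word + [sum(word) % 2]

def parity_check(received):
    """Return the data bits, or raise on a parity error."""
    if sum(received) % 2 != 0:
        raise ValueError("parity error")
    return received[:-1]

print(parity_encode([1, 0, 1, 1]))   # -> [1, 0, 1, 1, 1]: four ones in total
```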

Fig 8.12

Example 3: Decode the following: (a) R = ( ) (b) R = ( )
Solution: (a) The number of ones is 4 (even), so we just drop the last bit to get the message ( ). (b) The number of ones is 3 (odd); a parity error has occurred.

Hamming Code: In the standard (7, 4) Hamming code, each code word has the form (c_1, c_2, w_1, c_3, w_2, w_3, w_4), where w_1 w_2 w_3 w_4 is the message word and c_1, c_2, c_3 denote the parity check bits.

Encoding: With arithmetic carried out modulo 2, the check bits of the standard (7, 4) Hamming code are
c_1 = w_1 + w_2 + w_4
c_2 = w_1 + w_3 + w_4
c_3 = w_2 + w_3 + w_4.

Example 4: Encode the word W = ( ).
Solution: The check bits are computed from the encoding equations above and inserted in positions 1, 2, and 4.
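A sketch of the encoding under the convention above; the message word is a hypothetical example:

```python
def hamming_encode(w):
    """Hamming (7,4) encoding: code word (c1, c2, w1, c3, w2, w3, w4), mod 2."""
    w1, w2, w3, w4 = w
    c1 = (w1 + w2 + w4) % 2
    c2 = (w1 + w3 + w4) % 2
    c3 = (w2 + w3 + w4) % 2
    return [c1, c2, w1, c3, w2, w3, w4]

print(hamming_encode([1, 0, 1, 1]))   # -> [0, 1, 1, 0, 0, 1, 1]
```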

Decoding: Compute the syndrome S = HR (mod 2) of the received word R, where H is the 3 × 7 parity check matrix whose j-th column is the binary representation of j. If S = 0, then R is a code word; otherwise S, read as a binary number, gives the position of the erroneous bit.

Example 5: Compute the syndrome of (a) R = ( ) and (b) R = ( ).
Solution: (a) S = 0, so we conclude that R is a code word. Dropping the check bits (the first, second, and fourth bits) of ( ), we get the decoded message ( ).

Example 5 (2): (b) Since S ≠ 0, the received message R is not a code word.
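A sketch of the syndrome computation; the received word below is the hypothetical code word from the sketch after Example 4, with its fifth bit flipped:

```python
import numpy as np

# Column j of H is the binary representation of j (j = 1, ..., 7).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(r):
    """S = HR mod 2: zero for a code word, otherwise the bad bit's position."""
    s = (H @ np.array(r)) % 2
    return 4*s[0] + 2*s[1] + s[2]        # read S as a binary number

r = [0, 1, 1, 0, 0, 1, 1]                # code word from the earlier sketch
r[4] ^= 1                                # flip bit 5 to simulate a channel error
print(syndrome(r))                       # -> 5, locating the flipped bit
```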


Example 6: Changing the erroneous zero to a one gives the code word C = ( ). Dropping the first, second, and fourth bits of C, we arrive at the decoded message ( ).

8.15 Method of Least Squares

Example 2: Given the data (1, 1), (2, 3), (3, 4), (4, 6), (5, 5), we want to fit the function f(x) = ax + b. Then
a + b = 1
2a + b = 3
3a + b = 4
4a + b = 6
5a + b = 5

Example 2 (2): Writing the system in matrix form AX = Y, we let A be the 5 × 2 matrix whose first column holds the x-values and whose second column is all ones, X = (a, b)^T, and Y the column of y-values.

Example 2 (3): We compute A^T A = [55 15; 15 5] and A^T Y = (68, 19)^T.

Example 2 (4): We have AX = Y, and the least squares solution is X = (A^T A)^{-1} A^T Y = (1.1, 0.5)^T. For this line the sum of the squared errors is E = 2.7. The fitted function is y = 1.1x + 0.5.
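The computation is easy to reproduce; this sketch solves the normal equations for the data of Example 2:

```python
import numpy as np

# Least squares fit of y = a*x + b to the data of Example 2.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 3.0, 4.0, 6.0, 5.0])

A = np.column_stack([x, np.ones_like(x)])
X = np.linalg.solve(A.T @ A, A.T @ y)    # normal equations (A^T A) X = A^T Y
print(X)                                 # -> [1.1  0.5], i.e. y = 1.1x + 0.5
print(np.sum((A @ X - y)**2))            # sum of squared errors, 2.7
```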

Fig 8.15

8.16 Discrete Compartmental Models

The General Two-Compartment Model

Fig 8.16

Discrete Compartmental Model


Fig 8.17

Example 1: See Fig 8.18. The initial amounts are 100, 250, and 80 for the three compartments.
For Compartment 1 (C1): 20% transfers to C2 and 0% to C3, so 80% stays in C1.
For C2: 5% transfers to C1 and 30% to C3, so 65% stays in C2.
For C3: 25% transfers to C1 and 0% to C2, so 75% stays in C3.

Fig 8.18

Example 1 (2): That is,
new C1 = 0.8 C1 + 0.05 C2 + 0.25 C3
new C2 = 0.2 C1 + 0.65 C2 + 0 C3
new C3 = 0 C1 + 0.3 C2 + 0.75 C3,
so the transfer matrix is
T = [0.8 0.05 0.25; 0.2 0.65 0; 0 0.3 0.75].

Example 1 (3): Then one day later, Y = TX_0 = T(100, 250, 80)^T = (112.5, 182.5, 135)^T.

Ch _101  Note: m days later, Y = T m X 0

Example 2: A second compartmental model, worked in the same way by iterating Y = T^m X_0.