
CHAPTER 8.9 ~ 8.16 Matrices

Contents
8.9 Powers of Matrices
8.10 Orthogonal Matrices
8.11 Approximation of Eigenvalues
8.12 Diagonalization
8.13 Cryptography
8.14 An Error-Correcting Code
8.15 Method of Least Squares
8.16 Discrete Compartmental Models

8.9 Powers of Matrices Introduction It is sometimes important to be able to quickly compute a power A^m, m a positive integer, of an n × n matrix A: A^2 = AA, A^3 = AAA = A^2A, A^4 = AAAA = A^3A = A^2A^2, and so on.

THEOREM 8.26 Cayley–Hamilton Theorem A matrix A satisfies its own characteristic equation: if the characteristic equation of A is λ^n + c(n−1)λ^(n−1) + … + c1λ + c0 = 0, then A^n + c(n−1)A^(n−1) + … + c1A + c0I = 0. (1)

Matrix of Order 2 Suppose A is a 2 × 2 matrix with characteristic equation λ^2 − λ − 2 = 0. From Theorem 8.26, A^2 − A − 2I = 0 or A^2 = A + 2I, (2) and also A^3 = A^2 + 2A = 2I + 3A, A^4 = A^3 + 2A^2 = 6I + 5A, A^5 = 10I + 11A, A^6 = 22I + 21A. (3)

From the above discussion, we can write A^m = c0I + c1A and λ^m = c0 + c1λ. (5) Using the roots λ1 = −1, λ2 = 2 of the characteristic equation, we have (−1)^m = c0 − c1 and 2^m = c0 + 2c1; solving gives c0 = [2^m + 2(−1)^m]/3 and c1 = [2^m − (−1)^m]/3. (6)

Matrix of Order n Similar to the previous discussion, we have A^m = c0I + c1A + c2A^2 + … + c(n−1)A^(n−1), where the coefficients ck, k = 0, 1, …, n−1, depend on m.

Example 1 Compute A^m for the 3 × 3 matrix A shown. Solution The characteristic equation is −λ^3 + 2λ^2 + λ − 2 = 0, so λ1 = −1, λ2 = 1, λ3 = 2. Thus A^m = c0I + c1A + c2A^2 and λ^m = c0 + c1λ + c2λ^2. (7) In turn letting λ = −1, 1, 2, we obtain (−1)^m = c0 − c1 + c2, 1 = c0 + c1 + c2, 2^m = c0 + 2c1 + 4c2. (8)

Solving (8) gives c0 = [3 + (−1)^m − 2^m]/3, c1 = [1 − (−1)^m]/2, c2 = [2^(m+1) − 3 + (−1)^m]/6. Since A^m = c0I + c1A + c2A^2, we have, e.g., for m = 10: c0 = −340, c1 = 0, c2 = 341, so A^10 = −340I + 341A^2.
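As a check, the coefficients c0, c1, c2 can be obtained numerically by solving the Vandermonde system (8). This is a minimal sketch assuming a placeholder 3 × 3 matrix with eigenvalues −1, 1, 2, since the slide's matrix is not reproduced here:

```python
# Compute A^m from A^m = c0*I + c1*A + c2*A^2 by solving the system (8).
import numpy as np

A = np.array([[0., 0., -2.],
              [1., 0.,  1.],
              [0., 1.,  2.]])            # hypothetical matrix, eigenvalues -1, 1, 2
lams = np.array([-1., 1., 2.])
m = 10

# Rows of the Vandermonde system: lam^m = c0 + c1*lam + c2*lam^2
V = np.vander(lams, 3, increasing=True)
c = np.linalg.solve(V, lams**m)           # c = [c0, c1, c2]

Am = c[0]*np.eye(3) + c[1]*A + c[2]*(A @ A)
assert np.allclose(Am, np.linalg.matrix_power(A, m))
print(c)                                  # approximately [-340, 0, 341] for m = 10
```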

Finding the Inverse For the same matrix A, A^2 − A − 2I = 0, so I = (1/2)A^2 − (1/2)A. Multiplying both sides by A^-1 then gives A^-1 = (1/2)A − (1/2)I.

8.10 Orthogonal Matrices DEFINITION 8.14 Symmetric Matrix An n × n matrix A is symmetric if A = A^T, where A^T is the transpose of A.

THEOREM 8.27 Real Eigenvalues Let A be a symmetric matrix with real entries. Then the eigenvalues of A are real. Proof Let AK = λK. (1) Since A is real, taking the complex conjugate of (1) gives AK̄ = λ̄K̄. (2)

Take the transpose of (2), use the fact that A is symmetric, and multiply on the right by K: K̄^T A K = λ̄ K̄^T K. (3) Now from AK = λK, multiplying on the left by K̄^T gives K̄^T A K = λ K̄^T K. (4) Subtracting (3) from (4) gives 0 = (λ − λ̄) K̄^T K. (5)

Since K̄^T K > 0 for a nonzero vector K, (5) forces λ = λ̄, so λ is real.

Inner Product For vectors in R^n, x · y = x1y1 + x2y2 + … + xnyn. (6) Similarly, for n × 1 column matrices, X · Y = X^T Y = x1y1 + x2y2 + … + xnyn. (7)

THEOREM 8.28 Orthogonal Eigenvectors Let A be an n × n symmetric matrix. Eigenvectors corresponding to distinct (different) eigenvalues are orthogonal. Proof Let λ1, λ2 be two distinct eigenvalues with corresponding eigenvectors K1 and K2. Since AK1 = λ1K1 and AK2 = λ2K2, (8) we have (AK1)^T = K1^T A^T = K1^T A = λ1K1^T.

Multiplying on the right by K2 gives K1^T A K2 = λ1K1^T K2. (9) Since AK2 = λ2K2, K1^T A K2 = λ2K1^T K2. (10) Subtracting gives 0 = λ1K1^T K2 − λ2K1^T K2, or 0 = (λ1 − λ2)K1^T K2. Since λ1 ≠ λ2, it follows that K1^T K2 = 0.

Example 1 The matrix A shown has eigenvalues λ = 0, 1, −2, with corresponding eigenvectors K1, K2, K3.

Example 1 (2) We find that the eigenvectors are mutually orthogonal: K1 · K2 = K1 · K3 = K2 · K3 = 0.

DEFINITION 8.15 Orthogonal Matrix An n × n nonsingular matrix A is orthogonal if A^-1 = A^T. Equivalently, A is orthogonal if and only if A^T A = I.

Example 2 (a) I is an orthogonal matrix, since I^T I = II = I. (b) For the matrix A shown, direct computation gives A^T A = I, so A is orthogonal.

THEOREM 8.29 Criterion for an Orthogonal Matrix An n × n matrix A is orthogonal if and only if its columns X1, X2, …, Xn form an orthonormal set. Partial Proof Write A = (X1, X2, …, Xn) in terms of its columns. If A is orthogonal, then A^T A = I, and the (i, j) entry of A^T A is Xi^T Xj.

It follows that Xi^T Xj = 0 for i ≠ j and Xi^T Xi = 1, i, j = 1, 2, …, n. Thus the Xi form an orthonormal set.

Consider the matrix in Example 2 again.

Its columns are mutually orthogonal, and they are unit vectors:

Example 3 In Example 1, we obtained mutually orthogonal eigenvectors K1, K2, K3. Dividing each by its norm,

Example 3 (2) Thus, an orthonormal set is

Example 3 (3) We then have the orthogonal matrix P whose columns are these orthonormal eigenvectors. Please verify that P^T = P^-1.
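A claim like P^T = P^-1 is easy to check numerically. The sketch below uses a placeholder symmetric matrix (the slide's matrix is not reproduced here) and numpy's eigh, which already returns orthonormal eigenvectors:

```python
# Build P from orthonormal eigenvectors of a symmetric matrix and verify P^T = P^-1.
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])               # hypothetical symmetric matrix

lam, P = np.linalg.eigh(A)                  # columns of P are orthonormal eigenvectors
print(np.allclose(P.T @ P, np.eye(3)))      # True: P is orthogonal
print(np.allclose(P.T, np.linalg.inv(P)))   # True: P^T = P^-1
```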

Example 4 For the symmetric matrix A shown, we find λ = −9, −9, 9. As in Sec 8.8, we have

Example 4 (2) From the last matrix we see the two eigenvectors K1 and K2 corresponding to λ = −9. Now, for λ = 9,

Example 4 (3) We find K3 · K1 = K3 · K2 = 0, but K1 · K2 = −4 ≠ 0. Using the Gram–Schmidt process, set V1 = K1 and V2 = K2 − [(K2 · V1)/(V1 · V1)]V1. Now we have an orthogonal set, and we can also make it an orthonormal set by dividing each vector by its length.

Example 4 (4) Then the matrix P with these orthonormal vectors as columns is orthogonal.
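The Gram–Schmidt step used above is short enough to spell out. A minimal sketch, using two hypothetical eigenvectors K1, K2 that share an eigenvalue but are not orthogonal (the slide's vectors are images and not reproduced):

```python
# Orthogonalize K2 against K1, then normalize both.
import numpy as np

K1 = np.array([1., 0., 1.])
K2 = np.array([1., 1., 0.])                # hypothetical; K1 . K2 = 1 != 0

V1 = K1
V2 = K2 - (K2 @ V1) / (V1 @ V1) * V1       # subtract the projection of K2 onto V1

print(V1 @ V2)                             # ~0: the pair is now orthogonal
U1 = V1 / np.linalg.norm(V1)               # orthonormal set {U1, U2}
U2 = V2 / np.linalg.norm(V2)
```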

8.11 Approximation of Eigenvalues DEFINITION 8.16 Dominant Eigenvalue Let λ1, λ2, …, λn denote the eigenvalues of an n × n matrix A. The eigenvalue λk is said to be the dominant eigenvalue of A if |λk| > |λi| for every i ≠ k. An eigenvector corresponding to λk is called a dominant eigenvector of A.

Example 1 (a) The matrix shown has eigenvalues of equal absolute value. Since no single eigenvalue exceeds the others in absolute value, it follows that there is no dominant eigenvalue. (b) The eigenvalues of the second matrix behave similarly; again, the matrix has no dominant eigenvalue.

Power Method Look at the sequence X1 = AX0, X2 = AX1 = A^2X0, …, Xm = AXm−1, (1) where X0 is a nonzero n × 1 vector that is an initial guess or approximation and A has a dominant eigenvalue. Therefore, Xm = A^mX0. (2)

Let us make some further assumptions: |λ1| > |λ2| ≥ … ≥ |λn|, and the corresponding eigenvectors K1, K2, …, Kn are linearly independent and form a basis for R^n. Thus X0 = c1K1 + c2K2 + … + cnKn, (3) where we also assume that c1 ≠ 0. Since AKi = λiKi, AX0 = c1AK1 + c2AK2 + … + cnAKn becomes AX0 = c1λ1K1 + c2λ2K2 + … + cnλnKn. (4)

Multiplying (4) by A repeatedly gives A^mX0 = c1λ1^mK1 + c2λ2^mK2 + … + cnλn^mKn, (5) which can be written as A^mX0 = λ1^m[c1K1 + c2(λ2/λ1)^mK2 + … + cn(λn/λ1)^mKn]. (6) Since |λ1| > |λi|, i = 2, 3, …, n, each ratio (λi/λ1)^m → 0 as m → ∞, and so A^mX0 ≈ λ1^m c1K1. (7)

However, a constant multiple of an eigenvector is also an eigenvector, so Xm = A^mX0 is an approximation to a dominant eigenvector. Since AK = λK, we have AK · K = λ(K · K), and therefore λ = (AK · K)/(K · K), (8) which is called the Rayleigh quotient.
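A minimal sketch of the power method with the Rayleigh quotient (8), using a placeholder symmetric matrix since the slide's example matrix is not reproduced here (normalizing each iterate keeps the numbers bounded; the "Scaling" slides below use the largest entry instead):

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 1.]])               # hypothetical matrix with dominant eigenvalue 5
X = np.array([1., 1.])                 # initial guess X0

for _ in range(20):
    X = A @ X                          # X_m = A X_{m-1}
    X = X / np.linalg.norm(X)          # keep the iterate at unit length

lam = (A @ X) @ X / (X @ X)            # Rayleigh quotient estimate of lambda_1
print(lam, X)                          # ~5 and a dominant eigenvector
```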

Example 2 For the matrix A shown and the initial guess X0, we compute the iterates Xm = AXm−1.

Example 2 (2) The iterates Xi are tabulated for i = 3, 4, 5, 6, 7 (the numerical entries are omitted here). It appears that the vectors are approaching scalar multiples of a dominant eigenvector.

Example 2 (3)

The remainder of this section (scaling and the method of deflation) is treated only briefly, since it is of less importance.

Scaling

Example 3 Repeat the iterations of Example 2 using scaled-down vectors. Solution Starting from X1 = AX0, we divide each iterate by its entry of largest absolute value.

Example 3 (2) We continue in this manner to construct a table of the scaled iterates Xi for i = 3, 4, 5, 6, 7 (the numerical entries are omitted here). In contrast to the table in Example 2, it is apparent from this table that the vectors are approaching a fixed dominant eigenvector.
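A minimal sketch of the scaled iteration: after each multiplication the vector is divided by its entry of largest absolute value, so the iterates stay bounded. The matrix is a placeholder, since the slide's example is not reproduced:

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 1.]])               # hypothetical matrix
X = np.array([1., 1.])

for i in range(7):
    X = A @ X
    X = X / np.abs(X).max()            # scale down by the largest entry
    print(i + 1, X)                    # the scaled vectors settle to a dominant eigenvector
```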

Method of Deflation The procedure we shall consider next is a modification of the power method and is called the method of deflation. We will limit the discussion to the case where A is a symmetric matrix. Suppose λ1 and K1 are the dominant eigenvalue and a corresponding normalized eigenvector (||K1|| = 1) of a symmetric matrix A. Furthermore, suppose the eigenvalues of A are such that |λ1| > |λ2| > |λ3| ≥ … ≥ |λn|. It can be proved that the matrix B = A − λ1K1K1^T has eigenvalues 0, λ2, λ3, …, λn, so the power method applied to B yields an approximation to λ2 and a corresponding eigenvector.
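A minimal sketch of deflation for a symmetric matrix with placeholder data: once λ1 and a normalized K1 are known, the power method is applied to B = A − λ1·K1K1^T to approximate the next eigenvalue.

```python
import numpy as np

def power_method(M, x, iters=50):
    for _ in range(iters):
        x = M @ x
        x = x / np.linalg.norm(x)
    return (M @ x) @ x, x                     # Rayleigh quotient (x is unit length), eigenvector

A = np.array([[2., 1.],
              [1., 2.]])                      # hypothetical symmetric matrix
lam1, K1 = power_method(A, np.array([1., 0.]))
B = A - lam1 * np.outer(K1, K1)               # deflated matrix
lam2, K2 = power_method(B, np.array([1., 0.]))
print(lam1, lam2)                             # approximately 3 and 1 for this A
```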

8.12 Diagonalization Diagonalizable Matrices If there exists a matrix P such that P^-1AP = D is diagonal, then A is said to be diagonalizable. THEOREM 8.30 Sufficient Condition for Diagonalizability If an n × n matrix A has n linearly independent eigenvectors K1, K2, …, Kn, then A is diagonalizable.

THEOREM 8.30 Proof (for n = 3) Let P = (K1, K2, K3) be the matrix whose columns are the eigenvectors. Since the Ki are linearly independent, P is nonsingular and P^-1 exists. Moreover, AP = (AK1, AK2, AK3) = (λ1K1, λ2K2, λ3K3) = PD. Thus, P^-1AP = D.

THEOREM 8.31 Criterion for Diagonalizability An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. THEOREM 8.32 Sufficient Condition for Diagonalizability If an n × n matrix A has n distinct eigenvalues, it is diagonalizable.

Example 1 Diagonalize the matrix A shown. Solution The eigenvalues are λ = 1, 4. Finding an eigenvector for each eigenvalue and taking them as the columns of P, we then have P^-1AP = D = diag(1, 4).
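A minimal sketch of diagonalization with numpy, using a placeholder matrix since the slide's matrices are images: the columns of P are eigenvectors, and P^-1AP is diagonal when they are linearly independent.

```python
import numpy as np

A = np.array([[2., 1.],
              [2., 3.]])               # hypothetical matrix with eigenvalues 1 and 4
lam, P = np.linalg.eig(A)              # eigenvalues and eigenvector columns
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))                 # diagonal matrix with 1 and 4 on the diagonal
print(lam)
```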

Example 2 Consider the matrix A shown. We have the eigenvalues and eigenvectors

Example 2 (2) Now

Example 2 (3) Thus, P^-1AP = D.

Example 3 Consider the matrix A shown. We have λ = 5, 5. Since we can find only a single linearly independent eigenvector, this matrix cannot be diagonalized.

Example 4 Consider the matrix A shown. We have λ = −1, 1, 1. For λ = −1 we find one eigenvector; for λ = 1 we use the Gauss–Jordan method to find two more.

Example 4 (2) We obtain the three eigenvectors K1, K2, K3. Since we have three linearly independent eigenvectors, A is diagonalizable. Letting P = (K1, K2, K3), we then have P^-1AP = D.

Orthogonally Diagonalizable If there exists an orthogonal matrix P that diagonalizes A, then A is said to be orthogonally diagonalizable. THEOREM 8.33 Criterion for Orthogonal Diagonalizability An n × n matrix A can be orthogonally diagonalized if and only if A is symmetric.

THEOREM 8.33 Partial Proof Assume an n × n matrix A can be orthogonally diagonalized. Then there exists an orthogonal matrix P such that P^-1AP = D, i.e., A = PDP^-1. Since P is orthogonal, P^-1 = P^T, so A = PDP^T. Then A^T = (PDP^T)^T = PD^TP^T = PDP^T = A. Thus A is symmetric.

Example 5 Consider the symmetric matrix A shown. From Example 4 of Sec 8.8, we find its eigenvalues and eigenvectors. However, the eigenvectors are not mutually orthogonal.

Example 5 (2) Now redo the computation for λ = 8. We have k1 + k2 + k3 = 0; choosing k2 = 1, k3 = 0 we get K2, and choosing k2 = 0, k3 = 1 we get K3. Suppose we instead choose k2 = 1, k3 = 1 and then k2 = 1, k3 = −1.

Example 5 (3) We obtain two entirely different, but mutually orthogonal, eigenvectors. Thus an orthogonal set of eigenvectors is

Example 5 (4) Dividing each vector by its length, we obtain an orthonormal set.

Example 5 (5) Then P is the orthogonal matrix with these orthonormal eigenvectors as columns, and D = P^-1AP.

Example 5 (6) This is verified from the computation of P^-1AP.

Quadratic Forms An algebraic expression of the form ax^2 + bxy + cy^2 (4) is called a quadratic form. If we let X = (x, y)^T and A be the symmetric matrix with rows (a, b/2) and (b/2, c), then (4) can be written as X^TAX. (5) Note: A is symmetric.

Example 6 Identify the conic section whose equation is 2x^2 + 4xy − y^2 = 1. Solution From (5) we have X^TAX = 1, (6) where X = (x, y)^T and A has rows (2, 2) and (2, −1).

Example 6 (2) For A, we find the eigenvalues λ1 = −2 and λ2 = 3 with corresponding eigenvectors K1, K2, which are orthogonal. Moreover, an orthonormal set is obtained by normalizing K1 and K2.

Example 6 (3) Hence we have the orthogonal matrix P whose columns are these unit eigenvectors. If we let X = PX′, where X′ = (X, Y)^T, then X^TAX = X′^T(P^TAP)X′ = X′^TDX′. (7)

Example 6 (4) Using (7), (6) becomes −2X^2 + 3Y^2 = 1, the equation of a hyperbola. See Fig 8.11.
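A minimal sketch of diagonalizing the quadratic form 2x^2 + 4xy − y^2 from Example 6: the eigenvalues of A give the coefficients in the rotated coordinates X, Y.

```python
import numpy as np

A = np.array([[2., 2.],
              [2., -1.]])              # matrix of the quadratic form, from (5)
lam, P = np.linalg.eigh(A)             # orthonormal eigenvectors give the rotation P
print(lam)                             # [-2., 3.]: the form becomes -2X^2 + 3Y^2
print(np.round(P.T @ A @ P, 10))       # diagonal matrix diag(-2, 3)
```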

Fig 8.11

8.13 Cryptography Introduction Cryptography is secret writing, i.e., writing in code. A simple code: let the letters a, b, c, …, z be represented by the numbers 1, 2, 3, …, 26. A sequence of letters can then be converted into a sequence of numbers. Arrange these numbers into an m × n matrix M, and then select a nonsingular m × m matrix A. The transmitted (encoded) message is Y = AM, and the recipient recovers the original as M = A^-1Y.
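A minimal sketch of this matrix cipher with a hypothetical 2 × 2 key A (the slide's key is not reproduced): the message M is encoded as Y = AM and decoded as M = A^-1Y.

```python
import numpy as np

A = np.array([[1., 2.],
              [1., 3.]])                          # hypothetical nonsingular key, det = 1
text = "sendhelp"                                 # a, b, ..., z -> 1, 2, ..., 26
nums = [ord(ch) - ord('a') + 1 for ch in text]
M = np.array(nums, dtype=float).reshape(2, -1)    # arrange the numbers into a 2 x n matrix

Y = A @ M                                         # encoded message
M_back = np.linalg.inv(A) @ Y                     # decoding recovers M
decoded = "".join(chr(int(round(v)) + ord('a') - 1) for v in M_back.flatten())
print(decoded)                                    # "sendhelp"
```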

8.14 An Error-Correcting Code Parity Encoding Add an extra bit to a word so that the total number of ones is even.

Example 2 Encode (a) W = (1 0 0 0 1 1) and (b) W = (1 1 1 0 0 1). Solution (a) W contains three ones, so the extra bit is 1, making the number of ones 4 (even). The code word is then C = (1 0 0 0 1 1 1). (b) W already contains four ones (even), so the extra bit is 0 and the encoded word is C = (1 1 1 0 0 1 0).

Fig 8.12

Example 3 Decode the following received words: (a) R = (1 1 0 0 1 0 1), (b) R = (1 0 1 1 0 0 0). Solution (a) The number of ones is 4 (even), so we simply drop the last bit to get (1 1 0 0 1 0). (b) The number of ones is 3 (odd), so a parity error has occurred.
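A minimal sketch of parity encoding and checking, mirroring Examples 2 and 3:

```python
def parity_encode(word):
    # append a bit so the total number of ones is even
    return word + [sum(word) % 2]

def parity_decode(received):
    if sum(received) % 2 != 0:
        return None                              # parity error detected
    return received[:-1]                         # drop the check bit

print(parity_encode([1, 0, 0, 0, 1, 1]))         # [1, 0, 0, 0, 1, 1, 1]
print(parity_decode([1, 1, 0, 0, 1, 0, 1]))      # [1, 1, 0, 0, 1, 0]
print(parity_decode([1, 0, 1, 1, 0, 0, 0]))      # None: parity error
```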

Hamming Code A code word has the form C = (c1, c2, w1, c3, w2, w3, w4), where w1, w2, w3, w4 are the message bits and c1, c2, and c3 denote the parity check bits.

Encoding The check bits are chosen (mod 2) as c1 = w1 + w2 + w4, c2 = w1 + w3 + w4, c3 = w2 + w3 + w4.

Example 4 Encode the word W = (1 0 1 1). Solution With w1 = 1, w2 = 0, w3 = 1, w4 = 1 we get c1 = 0, c2 = 1, c3 = 0 (mod 2), so the code word is C = (0 1 1 0 0 1 1).

Decoding For a received word R, compute the syndrome S (the three parity checks, mod 2). If S = 0, R is a code word; otherwise S gives the binary position of the single erroneous bit.

Example 5 Compute the syndrome of (a) R = (1 1 0 1 0 0 1) and (b) R = (1 0 0 1 0 1 0). Solution (a) The syndrome is S = 0, so we conclude that R is a code word. Dropping the check bits of (1 1 0 1 0 0 1), we get the decoded message (0 0 0 1).

Example 5 (2) (b) Since S ≠ 0, the received message R is not a code word.

Example 6 The nonzero syndrome in Example 5(b) locates the error in the third bit of R. Changing this zero to one gives the code word C = (1 0 1 1 0 1 0). Dropping the first, second, and fourth bits (the check bits) from C, we arrive at the decoded message (1 0 1 0).
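A minimal sketch of this (7, 4) Hamming code, with check bits c1, c2, c3 in positions 1, 2, 4 and the syndrome locating a single-bit error; it reproduces Examples 4–6:

```python
def encode(w):
    w1, w2, w3, w4 = w
    c1 = (w1 + w2 + w4) % 2
    c2 = (w1 + w3 + w4) % 2
    c3 = (w2 + w3 + w4) % 2
    return [c1, c2, w1, c3, w2, w3, w4]

def decode(r):
    r = list(r)
    s1 = (r[0] + r[2] + r[4] + r[6]) % 2          # checks positions 1, 3, 5, 7
    s2 = (r[1] + r[2] + r[5] + r[6]) % 2          # checks positions 2, 3, 6, 7
    s3 = (r[3] + r[4] + r[5] + r[6]) % 2          # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3                    # 0 means R is a code word
    if pos:
        r[pos - 1] ^= 1                           # correct the single-bit error
    return [r[2], r[4], r[5], r[6]]               # message bits

print(encode([1, 0, 1, 1]))                       # [0, 1, 1, 0, 0, 1, 1]
print(decode([1, 0, 0, 1, 0, 1, 0]))              # error in bit 3 corrected -> [1, 0, 1, 0]
```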

8.15 Method of Least Squares Example 2 Given the data (1, 1), (2, 3), (3, 4), (4, 6), (5, 5), we want to fit the function f(x) = ax + b. Then a + b = 1, 2a + b = 3, 3a + b = 4, 4a + b = 6, 5a + b = 5.

Example 2 (2) Let A be the 5 × 2 coefficient matrix with rows (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), X = (a, b)^T, and Y = (1, 3, 4, 6, 5)^T; we then have the overdetermined system AX = Y.

Example 2 (3)

Example 2 (4) We have AX = Y. The best (least-squares) solution is X = (A^TA)^-1A^TY = (1.1, 0.5)^T. For this line the sum of the squared errors is 0.36 + 0.09 + 0.04 + 1.21 + 1.00 = 2.7. The fitted function is y = 1.1x + 0.5.
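A minimal sketch of the least-squares fit in Example 2, solving the normal equations X = (A^TA)^-1A^TY for the line y = ax + b:

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 3., 4., 6., 5.])
A = np.column_stack([x, np.ones_like(x)])     # rows are (x_i, 1)

X = np.linalg.solve(A.T @ A, A.T @ y)         # normal equations
print(X)                                      # [1.1, 0.5] -> y = 1.1x + 0.5
print(np.sum((A @ X - y) ** 2))               # sum of squared errors, 2.7
```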

Fig 8.15

8.16 Discrete Compartmental Models The General Two-Compartment Model

Fig 8.16

Discrete Compartmental Model

Fig 8.17

Example 1 See Fig 8.18. The initial amounts are 100, 250, and 80 for the three compartments. For Compartment 1 (C1): 20% goes to C2 and 0% to C3, so 80% stays in C1. For C2: 5% goes to C1 and 30% to C3, so 65% stays in C2. For C3: 25% goes to C1 and 0% to C2, so 75% stays in C3.

Fig 8.18

Example 1 (2) That is, New C1 = 0.8C1 + 0.05C2 + 0.25C3, New C2 = 0.2C1 + 0.65C2 + 0C3, New C3 = 0C1 + 0.3C2 + 0.75C3. We get the transfer matrix T with rows (0.8, 0.05, 0.25), (0.2, 0.65, 0), (0, 0.3, 0.75).

Example 1 (3) Then one day later, X1 = TX0 = (112.5, 182.5, 135)^T.

Note: m days later, Y = T^mX0.
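A minimal sketch of Example 1's discrete compartmental model: the state m days later is T^mX0, with T the transfer matrix built above.

```python
import numpy as np

T = np.array([[0.80, 0.05, 0.25],
              [0.20, 0.65, 0.00],
              [0.00, 0.30, 0.75]])
X0 = np.array([100., 250., 80.])

print(T @ X0)                                   # one day later: [112.5, 182.5, 135.]
print(np.linalg.matrix_power(T, 30) @ X0)       # state after 30 days
```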

Example 2

Example 2 (2)

Example 2 (3)

Thank You !