1 Matrices CHAPTER 8.9 ~ 8.16

2 Contents
 8.9 Power of Matrices
 8.10 Orthogonal Matrices
 8.11 Approximation of Eigenvalues
 8.12 Diagonalization
 8.13 Cryptography
 8.14 An Error-Correcting Code
 8.15 Method of Least Squares
 8.16 Discrete Compartmental Models

3 8.9 Power of Matrices
 Introduction: It is sometimes important to be able to compute quickly a power A^m, m a positive integer, of an n × n matrix A:
A² = AA, A³ = AAA = A²A, A⁴ = AAAA = A³A = A²A², and so on.

4 THEOREM 8.26 Cayley–Hamilton Theorem
A matrix A satisfies its own characteristic equation.
 If the characteristic equation of A is (−1)^n λ^n + cₙ₋₁λ^(n−1) + … + c₁λ + c₀ = 0, then
(1) (−1)^n A^n + cₙ₋₁A^(n−1) + … + c₁A + c₀I = 0.

5 Matrix of Order 2
 Suppose the 2 × 2 matrix A has characteristic equation λ² − λ − 2 = 0. From Theorem 8.26, A² − A − 2I = 0, or
(2) A² = A + 2I,
and also
A³ = A² + 2A = 2I + 3A
A⁴ = A³ + 2A² = 6I + 5A
A⁵ = A⁴ + 2A³ = 10I + 11A
(3) A⁶ = A⁵ + 2A⁴ = 22I + 21A

6  From the above discussion, we can write
(4) A^m = c₀I + c₁A and (5) λ^m = c₀ + c₁λ.
Using λ₁ = −1 and λ₂ = 2 in (5) gives (−1)^m = c₀ − c₁ and 2^m = c₀ + 2c₁; solving this pair, we have
(6) c₀ = (2^m + 2(−1)^m)/3, c₁ = (2^m − (−1)^m)/3.
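As a quick check, here is a minimal numpy sketch of this idea. The matrix A below is a hypothetical stand-in chosen so that its characteristic polynomial is λ² − λ − 2 (the slides do not reproduce the actual matrix); the coefficients c₀, c₁ come from (6).

```python
import numpy as np

# Hypothetical 2 x 2 matrix with trace 1 and determinant -2, so its
# characteristic polynomial is l^2 - l - 2 (not the slides' own matrix).
A = np.array([[1.0, 2.0],
              [1.0, 0.0]])
I = np.eye(2)

m = 6
c0 = (2**m + 2 * (-1)**m) / 3        # from (6)
c1 = (2**m - (-1)**m) / 3
Am = c0 * I + c1 * A                 # A^m = c0*I + c1*A

assert np.allclose(Am, np.linalg.matrix_power(A, m))
print(Am)                            # for m = 6 this equals 22I + 21A, matching (3)
```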

7 Matrix of Order n
 Similar to the previous discussion, we have
A^m = c₀I + c₁A + c₂A² + … + cₙ₋₁A^(n−1),
where the coefficients cₖ, k = 0, 1, …, n − 1, depend on m.

8 Example 1
Compute A^m for the given 3 × 3 matrix A.
Solution: The characteristic equation is −λ³ + 2λ² + λ − 2 = 0, so λ₁ = −1, λ₂ = 1, λ₃ = 2. Thus
(7) A^m = c₀I + c₁A + c₂A², λ^m = c₀ + c₁λ + c₂λ².
In turn, letting λ = −1, λ = 1, λ = 2 in the second equation of (7), we obtain
(8) (−1)^m = c₀ − c₁ + c₂
  1 = c₀ + c₁ + c₂
  2^m = c₀ + 2c₁ + 4c₂

9 Solving (8) gives
c₀ = (3 + (−1)^m − 2^m)/3, c₁ = (1 − (−1)^m)/2, c₂ = (2^(m+1) + (−1)^m − 3)/6.
Since A^m = c₀I + c₁A + c₂A², we have, e.g., for m = 10: c₀ = −340, c₁ = 0, c₂ = 341, so A^10 = −340I + 341A².
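System (8) is a Vandermonde system in (c₀, c₁, c₂), so it can also be solved numerically. A sketch (only the coefficients are computed, since the slides do not reproduce A itself):

```python
import numpy as np

m = 10
lams = np.array([-1.0, 1.0, 2.0])          # roots of the characteristic equation
V = np.vander(lams, 3, increasing=True)    # rows [1, lam, lam^2] -- system (8)
c0, c1, c2 = np.linalg.solve(V, lams**m)
print(c0, c1, c2)    # ~ -340.0, 0.0, 341.0, so A^10 = -340*I + 341*A^2
```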

10 Finding the Inverse
 For the matrix A of order 2 above, A² − A − 2I = 0 gives I = ½A² − ½A. Multiplying both sides by A⁻¹ then gives
A⁻¹ = ½A − ½I.
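A sketch of the same computation, reusing the hypothetical matrix from the first code block:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 0.0]])              # hypothetical; char. polynomial l^2 - l - 2
A_inv = 0.5 * A - 0.5 * np.eye(2)       # A^{-1} = (1/2)A - (1/2)I
assert np.allclose(A_inv @ A, np.eye(2))
```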

11 8.10 Orthogonal Matrices
DEFINITION 8.14 Symmetric Matrix
An n × n matrix A is symmetric if A = Aᵀ, where Aᵀ is the transpose of A.

12 THEOREM 8.27 Real Eigenvalues
Let A be a symmetric matrix with real entries. Then the eigenvalues of A are real.
Proof: Let K be an eigenvector corresponding to an eigenvalue λ, so that
(1) AK = λK.
Taking the complex conjugate and using the fact that A is real,
(2) AK̄ = λ̄K̄.

13 Take the transpose of (2), use the fact that A is symmetric, and multiply on the right by K:
(3) K̄ᵀAK = λ̄K̄ᵀK.
Now, multiplying AK = λK on the left by K̄ᵀ gives
(4) K̄ᵀAK = λK̄ᵀK.
Subtracting (3) from (4) gives
(5) 0 = (λ − λ̄)K̄ᵀK.

14 Since K̄ᵀK > 0 for any nonzero vector K, it follows that λ = λ̄; that is, λ is real.

15 Inner Product
 For vectors x and y in Rⁿ,
(6) x · y = x₁y₁ + x₂y₂ + … + xₙyₙ.
Similarly, for n × 1 column matrices X and Y,
(7) X · Y = XᵀY = x₁y₁ + x₂y₂ + … + xₙyₙ.

16 THEOREM 8.28 Orthogonal Eigenvectors
Let A be an n × n symmetric matrix. Then eigenvectors corresponding to distinct (different) eigenvalues are orthogonal.
Proof: Let λ₁ and λ₂ be two distinct eigenvalues with corresponding eigenvectors K₁ and K₂. Since
(8) AK₁ = λ₁K₁, AK₂ = λ₂K₂,
we have (AK₁)ᵀ = K₁ᵀAᵀ = K₁ᵀA = λ₁K₁ᵀ.

17 Multiplying on the right by K₂,
(9) K₁ᵀAK₂ = λ₁K₁ᵀK₂.
Since AK₂ = λ₂K₂,
(10) K₁ᵀAK₂ = λ₂K₁ᵀK₂.
Then (10) − (9) gives 0 = λ₂K₁ᵀK₂ − λ₁K₁ᵀK₂, or 0 = (λ₂ − λ₁)K₁ᵀK₂. Since λ₁ ≠ λ₂, it follows that K₁ᵀK₂ = 0.

18 Example 1
The symmetric matrix considered here has eigenvalues λ = 0, 1, −2, with corresponding eigenvectors K₁, K₂, K₃.

19 Example 1 (2)
We find that the inner products K₁ · K₂, K₁ · K₃, and K₂ · K₃ are all zero, so the eigenvectors are mutually orthogonal.

20 DEFINITION 8.15 Orthogonal Matrix
An n × n nonsingular matrix A is orthogonal if A⁻¹ = Aᵀ.
 Equivalently, A is orthogonal if AᵀA = I.

21 Example 2
 (a) The identity matrix I is orthogonal, since IᵀI = II = I.
 (b) For the matrix A considered here, one checks that AᵀA = I, so A is orthogonal.

22 THEOREM 8.29 Criterion for an Orthogonal Matrix
An n × n matrix A is orthogonal if and only if its columns X₁, X₂, …, Xₙ form an orthonormal set.
Partial Proof: Write A = (X₁, X₂, …, Xₙ) in terms of its columns. If A is orthogonal, then AᵀA = I.

23 It follows that
XᵢᵀXⱼ = 0 for i ≠ j, i, j = 1, 2, …, n, and XᵢᵀXᵢ = 1 for i = 1, 2, …, n.
Thus the columns Xᵢ form an orthonormal set.
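A small numpy check of this criterion, using an assumed 2 × 2 rotation matrix as the orthogonal example:

```python
import numpy as np

th = 0.7                                    # arbitrary angle; rotations are orthogonal
A = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])

G = A.T @ A                                 # Gram matrix: entries X_i^T X_j
assert np.allclose(G, np.eye(2))            # orthonormal columns
assert np.allclose(np.linalg.inv(A), A.T)   # equivalently, A^{-1} = A^T
```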

24  Consider the matrix A of Example 2(b): the pairwise products of its columns are zero, and each column is a unit vector, so the columns form an orthonormal set.

27 Example 3
In Example 1 we saw that the eigenvectors K₁, K₂, K₃ are mutually orthogonal; however, they are not unit vectors.

28 Example 3 (2)
Thus, dividing each Kᵢ by its length gives an orthonormal set.

29 Example 3 (3)
Using these orthonormal eigenvectors as columns, we obtain an orthogonal matrix P. Please verify that Pᵀ = P⁻¹.

30 Example 4
For the symmetric matrix considered here we find λ = −9, −9, 9. As in Sec 8.8, row reduction of (A − λI | 0) yields the eigenvectors.

31 Example 4 (2)
 From the last matrix we see that the repeated eigenvalue λ = −9 yields two linearly independent eigenvectors, K₁ and K₂.
 Now, for λ = 9 we obtain a single eigenvector K₃.

32 Example 4 (3)
We find K₃ · K₁ = K₃ · K₂ = 0, but K₁ · K₂ = −4 ≠ 0. Using the Gram–Schmidt process, take V₁ = K₁ and V₂ = K₂ − ((K₂ · V₁)/(V₁ · V₁))V₁. Now {V₁, V₂, K₃} is an orthogonal set, and we can also make it an orthonormal set by dividing each vector by its length.

33 Example 4 (4)
Then the matrix P whose columns are these orthonormal vectors is orthogonal.
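A sketch of the Gram–Schmidt step used above; the two vectors are hypothetical placeholders with K₁ · K₂ ≠ 0, not the slides' actual eigenvectors:

```python
import numpy as np

# Hypothetical eigenvectors belonging to a repeated eigenvalue (not the slides' data).
K1 = np.array([1.0, 0.0, 1.0])
K2 = np.array([1.0, 1.0, 0.0])

V1 = K1
V2 = K2 - (K2 @ V1) / (V1 @ V1) * V1    # Gram-Schmidt: subtract the V1-component
assert abs(V1 @ V2) < 1e-12             # V1 and V2 are now orthogonal

# Normalize to obtain an orthonormal pair.
U1 = V1 / np.linalg.norm(V1)
U2 = V2 / np.linalg.norm(V2)
```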

34 8.11 Approximation of Eigenvalues
DEFINITION 8.16 Dominant Eigenvalue
Let λ₁, λ₂, …, λₙ denote the eigenvalues of an n × n matrix A. The eigenvalue λ₁ is said to be the dominant eigenvalue of A if |λ₁| > |λᵢ|, i = 2, 3, …, n. An eigenvector corresponding to λ₁ is called the dominant eigenvector of A.

35 Example 1
 (a) The first matrix has two eigenvalues of equal absolute value; since neither strictly dominates, it follows that there is no dominant eigenvalue.
 (b) For the eigenvalues of the second matrix the same situation occurs; again, the matrix has no dominant eigenvalue.

36 Power Method
 Look at the sequence
(1) X₁ = AX₀, X₂ = AX₁ = A²X₀, …, Xₘ = AXₘ₋₁,
where X₀ is a nonzero n × 1 vector that is an initial guess or approximation and A has a dominant eigenvalue.
 Therefore,
(2) Xₘ = A^m X₀.

37  Let us make some further assumptions: |λ₁| > |λ₂| ≥ … ≥ |λₙ|, and the corresponding eigenvectors K₁, K₂, …, Kₙ are linearly independent and form a basis for Rⁿ. Thus
(3) X₀ = c₁K₁ + c₂K₂ + … + cₙKₙ,
where we also assume c₁ ≠ 0.
 Since AKᵢ = λᵢKᵢ, AX₀ = c₁AK₁ + c₂AK₂ + … + cₙAKₙ becomes
(4) AX₀ = c₁λ₁K₁ + c₂λ₂K₂ + … + cₙλₙKₙ.

38  Multiplying (4) by A repeatedly gives
(5) A²X₀ = c₁λ₁²K₁ + c₂λ₂²K₂ + … + cₙλₙ²Kₙ,
(6) A^m X₀ = λ₁^m [c₁K₁ + c₂(λ₂/λ₁)^m K₂ + … + cₙ(λₙ/λ₁)^m Kₙ].
Since |λ₁| > |λᵢ| for i = 2, 3, …, n, each (λᵢ/λ₁)^m → 0 as m → ∞, so
(7) A^m X₀ ≈ c₁λ₁^m K₁.

39  However, since a constant multiple of an eigenvector is also an eigenvector, Xₘ = A^m X₀ is an approximation to a dominant eigenvector. Since AK = λK, we have AK · K = λK · K, and so
(8) λ = (AK · K)/(K · K),
which is called the Rayleigh quotient.
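A minimal power-method sketch along these lines; the 2 × 2 matrix is an assumed stand-in, not the slides' example:

```python
import numpy as np

def power_method(A, x0, iters=20):
    """Iterate X_m = A X_{m-1}, then estimate the dominant
    eigenvalue with the Rayleigh quotient (8)."""
    x = x0.astype(float)
    for _ in range(iters):
        x = A @ x
    Ax = A @ x
    return (Ax @ x) / (x @ x), x

A = np.array([[4.0, 2.0],
              [3.0, -1.0]])                     # assumed; eigenvalues 5 and -2
lam, x = power_method(A, np.array([1.0, 1.0]))
print(lam)                                      # ~5.0, the dominant eigenvalue
```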

40 Example 2
 For a given initial guess X₀, compute the iterates X₁ = AX₀, X₂ = AX₁, and so on.

41 Example 2 (2)
 Tabulating the iterates Xᵢ for i = 3, 4, 5, 6, 7, it appears that the vectors are approaching scalar multiples of the dominant eigenvector.

42 Example 2 (3)

43  The remainder of this section is covered only briefly, since it is of less importance.

44 Scaling

45 Example 3
Repeat the iterations of Example 2 using scaled-down vectors.
Solution: Starting from X₁ = AX₀, divide each iterate by its entry of largest absolute value.

46 Example 3 (2)
We define each scaled iterate by dividing AXᵢ₋₁ by its entry of largest absolute value, and we continue in this manner to construct a table of the iterates Xᵢ, i = 3, 4, 5, 6, 7. In contrast to the table in Example 2, it is apparent from this table that the vectors are approaching a fixed limiting vector.
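The same sketch with scaling: dividing each iterate by its largest-magnitude entry keeps the numbers bounded (the matrix is again an assumed stand-in):

```python
import numpy as np

def scaled_power_method(A, x0, iters=20):
    x = x0.astype(float)
    for _ in range(iters):
        x = A @ x
        x = x / np.abs(x).max()     # scale down: largest entry becomes +/- 1
    lam = (A @ x) @ x / (x @ x)     # Rayleigh quotient (8)
    return lam, x

A = np.array([[4.0, 2.0],
              [3.0, -1.0]])         # assumed; eigenvalues 5 and -2
lam, x = scaled_power_method(A, np.array([1.0, 1.0]))
print(lam, x)                       # ~5.0 and the scaled dominant eigenvector [1, 0.5]
```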

47 Method of Deflation
 The procedure we consider next is a modification of the power method and is called the method of deflation. We limit the discussion to the case where A is a symmetric matrix.
 Suppose λ₁ and K₁ are the dominant eigenvalue and a corresponding normalized eigenvector (‖K₁‖ = 1) of a symmetric matrix A, and suppose the eigenvalues of A are such that |λ₁| > |λ₂| > |λ₃| ≥ … ≥ |λₙ|. It can be proved that the matrix B = A − λ₁K₁K₁ᵀ has eigenvalues 0, λ₂, λ₃, …, λₙ, so the power method applied to B approximates λ₂ and a corresponding eigenvector.
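A sketch of one deflation step under these assumptions; the symmetric matrix is illustrative:

```python
import numpy as np

def dominant(M, iters=100):
    x = np.arange(1.0, M.shape[0] + 1)   # generic start vector (c1 != 0)
    for _ in range(iters):
        x = M @ x
        x = x / np.linalg.norm(x)        # keep the iterate normalized
    return x @ (M @ x), x                # Rayleigh quotient (||x|| = 1), eigenvector

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric; eigenvalues 3 and 1

lam1, K1 = dominant(A)
B = A - lam1 * np.outer(K1, K1)          # deflated matrix: eigenvalues 0 and lam2
lam2, K2 = dominant(B)
print(lam1, lam2)                        # ~3.0 and ~1.0
```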

48 8.12 Diagonalization
 Diagonalizable Matrices: If there exists a matrix P such that P⁻¹AP = D is diagonal, then A is said to be diagonalizable.
THEOREM 8.30 Sufficient Condition for Diagonalizability
If an n × n matrix A has n linearly independent eigenvectors K₁, K₂, …, Kₙ, then A is diagonalizable.

49  Proof (n = 3 for concreteness): Since the columns of P = (K₁, K₂, K₃) are linearly independent eigenvectors, P is nonsingular and P⁻¹ exists, and AP = (AK₁, AK₂, AK₃) = (λ₁K₁, λ₂K₂, λ₃K₃) = PD. Thus P⁻¹AP = D.

50 THEOREM 8.31 Criterion for Diagonalizability
An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
THEOREM 8.32 Sufficient Condition for Diagonalizability
If an n × n matrix A has n distinct eigenvalues, it is diagonalizable.

51 Example 1
Diagonalize the given 2 × 2 matrix.
Solution: The eigenvalues are λ = 1, 4. Finding an eigenvector for each eigenvalue and taking P = (K₁, K₂), we obtain P⁻¹AP = diag(1, 4).

52 Example 2
For the matrix considered here, we find the eigenvalues and corresponding eigenvectors.

53 Example 2 (2)
Now form P from the eigenvectors and compute P⁻¹.

54 Example 2 (3)
Thus, P⁻¹AP = D.

55 Example 3
Consider the matrix given here, for which λ = 5, 5. Since we can find only a single eigenvector, this matrix is not diagonalizable.

56 Example 4
Consider the matrix given here, for which λ = −1, 1, 1. For λ = −1 we find one eigenvector; for the repeated eigenvalue λ = 1, the Gauss–Jordan method yields two linearly independent eigenvectors.

57 Example 4 (2)
We thus have three linearly independent eigenvectors K₁, K₂, K₃, so A is diagonalizable. Let P = (K₁, K₂, K₃); then P⁻¹AP = D.
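A numpy sketch of the diagonalization P⁻¹AP = D, using the same hypothetical 2 × 2 matrix as earlier:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 0.0]])               # hypothetical; eigenvalues 2 and -1

lams, P = np.linalg.eig(A)               # columns of P are the eigenvectors K_i
D = np.linalg.inv(P) @ A @ P             # P^{-1} A P
assert np.allclose(D, np.diag(lams))     # diagonal, with the eigenvalues on the diagonal
```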

59 Orthogonally Diagonalizable
 If there exists an orthogonal matrix P that diagonalizes A, then A is said to be orthogonally diagonalizable.
THEOREM 8.33 Criterion for Orthogonal Diagonalizability
An n × n matrix A is orthogonally diagonalizable if and only if A is symmetric.

60  Partial Proof: Assume an n × n matrix A is orthogonally diagonalizable. Then there exists an orthogonal matrix P such that P⁻¹AP = D, so A = PDP⁻¹. Since P is orthogonal, P⁻¹ = Pᵀ, and thus A = PDPᵀ. Then
Aᵀ = (PDPᵀ)ᵀ = PDᵀPᵀ = PDPᵀ = A.
Thus A is symmetric.

61 Example 5
Consider the matrix given here. From Example 4 of Sec 8.8 we find its eigenvalues and eigenvectors; however, the eigenvectors are not mutually orthogonal.

62 Example 5 (2)
Now redo the computation for the repeated eigenvalue λ = 8. Here the eigenvector condition is k₁ + k₂ + k₃ = 0; choosing k₂ = 1, k₃ = 0 we get K₂, and choosing k₂ = 0, k₃ = 1 we get K₃. But we may choose them another way: k₂ = 1, k₃ = 1 and k₂ = 1, k₃ = −1.

63 Example 5 (3)
With these choices we obtain two entirely different but orthogonal eigenvectors. Together with the eigenvector for the remaining eigenvalue, this gives an orthogonal set.

64 Example 5 (4)
Dividing each vector by its length, we obtain an orthonormal set.

65 Example 5 (5)
Then P, whose columns are these orthonormal eigenvectors, is orthogonal, and D = P⁻¹AP.

66 Example 5 (6)
This is verified from the direct computation of P⁻¹AP.
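For symmetric matrices, numpy's eigh returns orthonormal eigenvectors directly, so orthogonal diagonalization can be sketched as follows. The matrix below is an assumed example whose repeated eigenvalue 8 gives the eigenvector condition k₁ + k₂ + k₃ = 0, as on the slide; it is not necessarily the slides' matrix:

```python
import numpy as np

A = np.array([[9.0, 1.0, 1.0],           # assumed symmetric matrix;
              [1.0, 9.0, 1.0],           # eigenvalues are 8, 8, 11
              [1.0, 1.0, 9.0]])

lams, P = np.linalg.eigh(A)              # for symmetric A, P is orthogonal
assert np.allclose(P.T @ P, np.eye(3))           # orthonormal columns
assert np.allclose(P.T @ A @ P, np.diag(lams))   # D = P^T A P = P^{-1} A P
```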

67 Quadratic Forms
 An algebraic expression of the form
(4) ax² + bxy + cy²
is called a quadratic form. If we let X = (x, y)ᵀ and
A = [ a b/2 ]
  [ b/2 c ],
then (4) can be written as
(5) XᵀAX.
 Note: A is symmetric.

68 Example 6
Identify the conic section whose equation is 2x² + 4xy − y² = 1.
Solution: From (5) we have
(6) XᵀAX = 1,
where
A = [ 2 2 ]
  [ 2 −1 ]
and X = (x, y)ᵀ.

69 Example 6 (2)
For A we find the eigenvalues λ₁ = −2 and λ₂ = 3 with corresponding eigenvectors K₁ = (1, −2)ᵀ and K₂ = (2, 1)ᵀ, and K₁, K₂ are orthogonal. Moreover, dividing each by its length √5 gives an orthonormal set.

70 Example 6 (3)
Hence we have the orthogonal matrix P = (K₁/‖K₁‖, K₂/‖K₂‖). If we let X = PX′, where X′ = (X, Y)ᵀ, then
(7) XᵀAX = (X′)ᵀ(PᵀAP)X′ = (X′)ᵀDX′.

71 Example 6 (4)
Using (7), (6) becomes (X′)ᵀDX′ = 1, or −2X² + 3Y² = 1: a hyperbola. See Fig 8.11.
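This diagnosis in numpy; the matrix is fully determined by the equation 2x² + 4xy − y² = 1, so nothing here is assumed beyond the code itself:

```python
import numpy as np

# Quadratic form 2x^2 + 4xy - y^2 as X^T A X, per (5): b = 4, so b/2 = 2.
A = np.array([[2.0, 2.0],
              [2.0, -1.0]])

lams, P = np.linalg.eigh(A)   # symmetric A: real eigenvalues, orthogonal P
print(lams)                   # [-2. 3.] -> rotated equation -2X^2 + 3Y^2 = 1
# Eigenvalues of opposite sign identify the conic as a hyperbola.
```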

72 Fig 8.11

73 8.13 Cryptography
 Introduction: Secret writing means code.
 A simple code: Let the letters a, b, c, …, z be represented by the numbers 1, 2, 3, …, 26, so that a sequence of letters can then be a sequence of numbers. Arrange these numbers into an m × n matrix M, then select a nonsingular m × m matrix A. The message sent is Y = AM, and the receiver recovers M = A⁻¹Y.
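A small sketch of this scheme; the 2 × 2 key matrix is an arbitrary invertible choice, not one from the slides:

```python
import numpy as np

# a = 1, ..., z = 26; arrange the message numbers into an m x n matrix M.
msg = "secret"
M = np.array([ord(ch) - ord('a') + 1 for ch in msg], dtype=float).reshape(2, 3)

A = np.array([[1.0, 2.0],       # arbitrary nonsingular key; det = 1,
              [1.0, 3.0]])      # so A^{-1} has integer entries

Y = A @ M                       # transmitted message Y = AM
M_back = np.linalg.inv(A) @ Y   # receiver computes M = A^{-1} Y
decoded = "".join(chr(int(round(v)) + ord('a') - 1) for v in M_back.ravel())
assert decoded == msg
```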

74 8.14 An Error-Correcting Code
 Parity Encoding: Add an extra bit to make the total number of ones even.

75 Example 2
Encode the words (a) W = (1 0 0 0 1 1) and (b) W = (1 1 1 0 0 1).
Solution: (a) The extra bit will be 1, making the number of ones 4 (even). The code word is then C = (1 0 0 0 1 1 1).
(b) The extra bit will be 0, keeping the number of ones 4 (even). So the encoded word is C = (1 1 1 0 0 1 0).

76 Fig 8.12

77 Example 3
Decode the following: (a) R = (1 1 0 0 1 0 1) and (b) R = (1 0 1 1 0 0 0).
Solution: (a) The number of ones is 4 (even); we just drop the last bit to get (1 1 0 0 1 0).
(b) The number of ones is 3 (odd); there is a parity error.
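A sketch of parity encoding and checking:

```python
def parity_encode(w):
    """Append a bit that makes the total number of ones even."""
    return w + [sum(w) % 2]

def parity_decode(r):
    """Return the message bits, or None if a parity error is detected."""
    return None if sum(r) % 2 else r[:-1]

assert parity_encode([1, 0, 0, 0, 1, 1]) == [1, 0, 0, 0, 1, 1, 1]  # Example 2(a)
assert parity_decode([1, 0, 1, 1, 0, 0, 0]) is None                # Example 3(b): error
```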

78 Hamming Code
A code word consists of seven bits arranged as C = (c₁, c₂, w₁, c₃, w₂, w₃, w₄), where w₁, w₂, w₃, w₄ are the message bits and c₁, c₂, and c₃ denote the parity check bits.

79 Encoding
The check bits are computed from the message bits modulo 2:
c₁ = w₁ + w₂ + w₄
c₂ = w₁ + w₃ + w₄
c₃ = w₂ + w₃ + w₄

80 Example 4
Encode the word W = (1 0 1 1).
Solution: c₁ = 1 + 0 + 1 = 0, c₂ = 1 + 1 + 1 = 1, c₃ = 0 + 1 + 1 = 0 (mod 2), so C = (0 1 1 0 0 1 1).

81 Decoding
The syndrome of a received word R is S = HRᵀ (mod 2), where H is the parity-check matrix
H = [ 0 0 0 1 1 1 1 ]
  [ 0 1 1 0 0 1 1 ]
  [ 1 0 1 0 1 0 1 ].
If S = 0, then R is a code word; otherwise S, read as a binary number, gives the position of the erroneous bit.

82 Example 5
Compute the syndrome of (a) R = (1 1 0 1 0 0 1) and (b) R = (1 0 0 1 0 1 0).
Solution: (a) S = HRᵀ = 0, so we conclude that R is a code word. Reading off the message bits of (1 1 0 1 0 0 1), we get the decoded message (0 0 0 1).

83 Example 5 (2)
(b) Since S ≠ 0, the received message R is not a code word; here S = (0 1 1)ᵀ, which is the number 3 in binary, so the third bit of R is in error.

86 Example 6
Changing the zero in the third bit to a one gives the code word C = (1 0 1 1 0 1 0). Then, deleting the first, second, and fourth bits (the check bits) from C, we arrive at the decoded message (1 0 1 0).
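A sketch of the Hamming (7, 4) decoder described above, with H as given on the Decoding slide:

```python
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],   # parity-check matrix: the syndrome,
              [0, 1, 1, 0, 0, 1, 1],   # read as a binary number, locates
              [1, 0, 1, 0, 1, 0, 1]])  # the erroneous bit (0 means no error)

def decode(R):
    R = np.array(R)
    S = H @ R % 2                          # syndrome S = HR^T (mod 2)
    pos = int("".join(map(str, S)), 2)     # binary digits -> bit position
    if pos:
        R[pos - 1] ^= 1                    # correct the single-bit error
    return R[[2, 4, 5, 6]]                 # message bits w1..w4 (positions 3, 5, 6, 7)

print(decode([1, 1, 0, 1, 0, 0, 1]))   # Example 5(a): [0 0 0 1]
print(decode([1, 0, 0, 1, 0, 1, 0]))   # Example 5(b)/6: fixes bit 3 -> [1 0 1 0]
```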

87 8.15 Method of Least Squares
 Example 2: Given the data (1, 1), (2, 3), (3, 4), (4, 6), (5, 5), we want to fit the function f(x) = ax + b. Then
a + b = 1
2a + b = 3
3a + b = 4
4a + b = 6
5a + b = 5

88 Example 2 (2)
Letting Y = (1, 3, 4, 6, 5)ᵀ, X = (a, b)ᵀ, and A be the 5 × 2 matrix whose rows are (xᵢ, 1), we have the overdetermined system AX = Y.

89 Example 2 (3)
Forming the normal equations AᵀAX = AᵀY gives
AᵀA = [ 55 15 ]
   [ 15 5 ]
and AᵀY = (68, 19)ᵀ.

90 Example 2 (4)
We have AX = Y. Then the best (least-squares) solution is X = (AᵀA)⁻¹AᵀY = (1.1, 0.5)ᵀ. For this line the sum of the squared errors is E = 2.7, and the fitted function is y = 1.1x + 0.5.
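The same computation in numpy; all numbers come from the data above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 3.0, 4.0, 6.0, 5.0])

A = np.column_stack([x, np.ones_like(x)])   # rows (x_i, 1) for f(x) = a*x + b
X = np.linalg.solve(A.T @ A, A.T @ y)       # X = (A^T A)^{-1} A^T Y
print(X)                                    # [1.1 0.5]
print(np.sum((A @ X - y) ** 2))             # sum of squared errors: 2.7
```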

91 Fig 8.15

92 8.16 Discrete Compartmental Models
 The General Two-Compartment Model

93 Fig 8.16

94 Discrete Compartmental Model

96 Fig 8.17

97 Example 1
 See Fig 8.18. The initial amounts are 100, 250, and 80 for the three compartments.
For compartment 1 (C1): 20% goes to C2 and 0% to C3, so 80% stays in C1.
For C2: 5% goes to C1 and 30% to C3, so 65% stays in C2.
For C3: 25% goes to C1 and 0% to C2, so 75% stays in C3.

98 Fig 8.18

99 Example 1 (2)
That is,
new C1 = 0.8 C1 + 0.05 C2 + 0.25 C3
new C2 = 0.2 C1 + 0.65 C2 + 0 C3
new C3 = 0 C1 + 0.3 C2 + 0.75 C3
We get the transfer matrix
T = [ 0.80 0.05 0.25 ]
  [ 0.20 0.65 0.00 ]
  [ 0.00 0.30 0.75 ]

100 Example 1 (3)
Then, with X₀ = (100, 250, 80)ᵀ, one day later X₁ = TX₀ = (112.5, 182.5, 135)ᵀ.

101  Note: m days later, Y = T^m X₀.
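A sketch checking Example 1 with the transfer matrix built above:

```python
import numpy as np

T = np.array([[0.80, 0.05, 0.25],
              [0.20, 0.65, 0.00],
              [0.00, 0.30, 0.75]])      # columns sum to 1: material is conserved
X0 = np.array([100.0, 250.0, 80.0])

print(T @ X0)                           # one day later: [112.5 182.5 135.]
m = 7                                   # any number of days
print(np.linalg.matrix_power(T, m) @ X0)   # m days later: Y = T^m X0
```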

102 Example 2

103 Example 2 (2)

104 Example 2 (3)
