CHAPTER 8.9 ~ 8.16 Matrices
Contents
8.9 Powers of Matrices
8.10 Orthogonal Matrices
8.11 Approximation of Eigenvalues
8.12 Diagonalization
8.13 Cryptography
8.14 An Error-Correcting Code
8.15 Method of Least Squares
8.16 Discrete Compartmental Models
8.9 Powers of Matrices
Introduction
It is sometimes important to be able to compute quickly a power A^m, m a positive integer, of an n × n matrix A:
A^2 = AA, A^3 = AAA = A^2A, A^4 = AAAA = A^3A = A^2A^2, and so on.
THEOREM 8.26 Cayley–Hamilton Theorem
A matrix A satisfies its own characteristic equation: if the characteristic equation of A is
(−1)^n λ^n + c_{n−1} λ^{n−1} + … + c_1 λ + c_0 = 0,
then
(−1)^n A^n + c_{n−1} A^{n−1} + … + c_1 A + c_0 I = 0.  (1)
Matrix of Order 2
Suppose A is a 2 × 2 matrix whose characteristic equation is λ^2 − λ − 2 = 0. From Theorem 8.26, A^2 − A − 2I = 0, or
A^2 = A + 2I  (2)
and also
A^3 = A^2 + 2A = 2I + 3A
A^4 = A^3 + 2A^2 = 6I + 5A
A^5 = 10I + 11A
A^6 = 22I + 21A.  (3)
From the above discussion we can write
A^m = c_0 I + c_1 A and λ^m = c_0 + c_1 λ.  (5)
Using the eigenvalues λ_1 = −1 and λ_2 = 2 of the characteristic equation, we have
(−1)^m = c_0 − c_1, 2^m = c_0 + 2c_1,
and solving this system gives
c_0 = (1/3)[2^m + 2(−1)^m], c_1 = (1/3)[2^m − (−1)^m].  (6)
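As a numerical check of (5) and (6): the slide's 2 × 2 matrix was not recoverable, so the sketch below assumes the illustrative matrix A = [[1, 1], [2, 0]], which has trace 1 and determinant −2 and therefore the stated characteristic equation λ^2 − λ − 2 = 0.

```python
import numpy as np

# Assumed stand-in matrix with characteristic equation lambda^2 - lambda - 2 = 0
# (trace 1, determinant -2); the slide's actual matrix was not recoverable.
A = np.array([[1, 1],
              [2, 0]])
I = np.eye(2)

m = 6
c0 = (2**m + 2 * (-1)**m) / 3   # coefficients from (6)
c1 = (2**m - (-1)**m) / 3

print(c0 * I + c1 * A)                   # A^m via Cayley-Hamilton: 22I + 21A
print(np.linalg.matrix_power(A, m))      # direct computation agrees
```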
Matrix of Order n
Similar to the previous discussion, for an n × n matrix we have
A^m = c_0 I + c_1 A + c_2 A^2 + … + c_{n−1} A^{n−1},
where the coefficients c_k, k = 0, 1, …, n − 1, depend on m.
Example 1
Compute A^m for the 3 × 3 matrix given on the slide.
Solution
The characteristic equation is −λ^3 + 2λ^2 + λ − 2 = 0, so λ_1 = −1, λ_2 = 1, λ_3 = 2. Thus
A^m = c_0 I + c_1 A + c_2 A^2, λ^m = c_0 + c_1 λ + c_2 λ^2.  (7)
In turn letting λ = −1, 1, 2, we obtain
(−1)^m = c_0 − c_1 + c_2
1 = c_0 + c_1 + c_2  (8)
2^m = c_0 + 2c_1 + 4c_2.
Solving (8) gives
c_0 = (1/3)[3 + (−1)^m − 2^m], c_1 = (1/2)[1 − (−1)^m], c_2 = (1/6)[2^{m+1} + (−1)^m − 3].
Since A^m = c_0I + c_1A + c_2A^2, we have, e.g., for m = 10: c_0 = −340, c_1 = 0, c_2 = 341, so A^10 = −340I + 341A^2.
Finding the Inverse
If A satisfies A^2 − A − 2I = 0, then 2I = A^2 − A, or I = (1/2)A^2 − (1/2)A. Multiplying both sides by A^{-1} then gives
A^{-1} = (1/2)A − (1/2)I.
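The same Cayley–Hamilton identity yields the inverse numerically; the matrix A below is again the assumed stand-in, not the slide's.

```python
import numpy as np

# Same assumed matrix; A^{-1} = (1/2)A - (1/2)I from the Cayley-Hamilton identity.
A = np.array([[1.0, 1.0],
              [2.0, 0.0]])
A_inv = 0.5 * A - 0.5 * np.eye(2)

print(A_inv @ A)                              # the identity matrix
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```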
8.10 Orthogonal Matrices

DEFINITION 8.14 Symmetric Matrix
An n × n matrix A is symmetric if A = A^T, where A^T is the transpose of A.
THEOREM 8.27 Real Eigenvalues
Let A be a symmetric matrix with real entries. Then the eigenvalues of A are real.

Proof
Let K be an eigenvector of A with eigenvalue λ:
AK = λK.  (1)
Since A is real, taking complex conjugates gives
AK̄ = λ̄K̄.  (2)
Take the transpose of (2), use the fact that A is symmetric, and multiply on the right by K:
K̄^T A K = λ̄ K̄^T K.  (3)
Now from AK = λK, multiplying on the left by K̄^T gives
K̄^T A K = λ K̄^T K.  (4)
Subtracting (3) from (4) gives
0 = (λ − λ̄) K̄^T K.  (5)
Since K̄^T K = |k_1|^2 + |k_2|^2 + … + |k_n|^2 > 0 for K ≠ 0, (5) implies λ = λ̄; that is, λ is real.
Inner Product
The inner product of two vectors is
x · y = x_1y_1 + x_2y_2 + … + x_ny_n.  (6)
Similarly, for column vectors X and Y,
X · Y = X^T Y = x_1y_1 + x_2y_2 + … + x_ny_n.  (7)
THEOREM 8.28 Orthogonal Eigenvectors
Let A be an n × n symmetric matrix. Then eigenvectors corresponding to distinct (different) eigenvalues are orthogonal.

Proof
Let λ_1, λ_2 be two distinct eigenvalues corresponding to eigenvectors K_1 and K_2, so that
AK_1 = λ_1K_1, AK_2 = λ_2K_2.  (8)
Taking the transpose of the first of these, (AK_1)^T = K_1^T A^T = K_1^T A = λ_1 K_1^T.
Multiplying on the right by K_2,
K_1^T A K_2 = λ_1 K_1^T K_2.  (9)
Since AK_2 = λ_2K_2,
K_1^T A K_2 = λ_2 K_1^T K_2.  (10)
Subtracting (9) from (10) gives 0 = λ_2K_1^TK_2 − λ_1K_1^TK_2, or 0 = (λ_2 − λ_1) K_1^T K_2. Since λ_1 ≠ λ_2, it follows that K_1^T K_2 = 0.
Example 1
The matrix given on the slide has eigenvalues λ = 0, 1, −2, with corresponding eigenvectors K_1, K_2, K_3.
Example 1 (2)
We find K_1 · K_2 = K_1 · K_3 = K_2 · K_3 = 0; the eigenvectors are mutually orthogonal, as Theorem 8.28 guarantees.
DEFINITION 8.15 Orthogonal Matrix
An n × n nonsingular matrix A is orthogonal if A^{-1} = A^T.
Equivalently, A is orthogonal if A^T A = I.
Example 2
(a) The identity matrix I is orthogonal, since I^T I = II = I.
(b) For the matrix A given on the slide, a direct computation shows A^T A = I, so A is orthogonal.
THEOREM 8.29 Criterion for an Orthogonal Matrix
An n × n matrix A is orthogonal if and only if its columns X_1, X_2, …, X_n form an orthonormal set.

Partial Proof
Write A = (X_1, X_2, …, X_n). If A is orthogonal, then A^T A = I, and the (i, j) entry of A^T A is X_i^T X_j.
It follows that
X_i^T X_j = 0, i ≠ j, i, j = 1, 2, …, n
X_i^T X_i = 1, i = 1, 2, …, n.
Thus all the X_i form an orthonormal set.
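A quick numerical illustration of Theorem 8.29 and Definition 8.15; since the slide's matrix was not recoverable, a rotation matrix serves as a standard stand-in example of an orthogonal matrix.

```python
import numpy as np

# A rotation matrix is a standard example of an orthogonal matrix.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A.T @ A, np.eye(2)))      # A^T A = I: columns are orthonormal
print(np.allclose(np.linalg.inv(A), A.T))   # equivalently, A^{-1} = A^T
```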
Consider the matrix A in Example 2. Its columns are mutually orthogonal, and they are unit vectors, so the columns form an orthonormal set, as Theorem 8.29 requires.
Example 3
In Example 1 the eigenvectors are mutually orthogonal. Since they are not unit vectors, we normalize each one.
Example 3 (2)
Thus, dividing each eigenvector by its length, an orthonormal set is obtained.
Example 3 (3)
Using these vectors as columns, we have the orthogonal matrix P. Please verify that P^T = P^{-1}.
Example 4
For the symmetric matrix given on the slide we find the eigenvalues λ = −9, −9, 9. As in Sec 8.8, we solve (A − λI)K = 0 for each eigenvalue.
Example 4 (2)
From the last matrix we see that the repeated eigenvalue λ = −9 yields two eigenvectors K_1 and K_2. Now, for λ = 9 we obtain a third eigenvector K_3.
Example 4 (3)
We find K_3 · K_1 = K_3 · K_2 = 0, but K_1 · K_2 = −4 ≠ 0. Applying the Gram–Schmidt process, take
V_1 = K_1, V_2 = K_2 − [(K_2 · V_1)/(V_1 · V_1)] V_1.
Now we have an orthogonal set {V_1, V_2, K_3}, and we can also make it an orthonormal set by dividing each vector by its length.
Example 4 (4)
Then the matrix P whose columns are these orthonormal vectors is orthogonal.
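A minimal sketch of the Gram–Schmidt step used in Example 4; the vectors K_1 and K_2 here are illustrative stand-ins, since the slide's vectors were not recoverable.

```python
import numpy as np

# Illustrative stand-in eigenvectors sharing an eigenvalue.
K1 = np.array([1.0, 0.0, 2.0])
K2 = np.array([0.0, 1.0, 1.0])

V1 = K1
V2 = K2 - (K2 @ V1) / (V1 @ V1) * V1        # subtract the component along V1

print(V1 @ V2)                               # 0.0: the set {V1, V2} is orthogonal
U1 = V1 / np.linalg.norm(V1)                 # normalize to get an orthonormal set
U2 = V2 / np.linalg.norm(V2)
```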
8.11 Approximation of Eigenvalues

DEFINITION 8.16 Dominant Eigenvalue
Let λ_1, λ_2, …, λ_n denote the eigenvalues of an n × n matrix A. The eigenvalue λ_k is said to be the dominant eigenvalue of A if |λ_k| > |λ_i| for all i ≠ k. An eigenvector corresponding to λ_k is called a dominant eigenvector of A.
Example 1
(a) The first matrix on the slide has two eigenvalues of equal absolute value; since no |λ_k| exceeds all the others, it follows that there is no dominant eigenvalue.
(b) The eigenvalues of the second matrix likewise tie in largest absolute value; again, the matrix has no dominant eigenvalue.
Power Method
Look at the sequence
X_1 = AX_0, X_2 = AX_1 = A^2X_0, …, X_m = AX_{m−1},  (1)
where X_0 is a nonzero n × 1 vector that is an initial guess or approximation, and A has a dominant eigenvalue. Therefore,
X_m = A^m X_0.  (2)
Let us make some further assumptions: |λ_1| > |λ_2| ≥ … ≥ |λ_n|, and the corresponding eigenvectors K_1, K_2, …, K_n are linearly independent and can serve as a basis for R^n. Thus
X_0 = c_1K_1 + c_2K_2 + … + c_nK_n,  (3)
where we also assume that c_1 ≠ 0. Since AK_i = λ_iK_i, then AX_0 = c_1AK_1 + c_2AK_2 + … + c_nAK_n becomes
AX_0 = c_1λ_1K_1 + c_2λ_2K_2 + … + c_nλ_nK_n.  (4)
Multiplying (4) by A repeatedly,
A^mX_0 = c_1λ_1^mK_1 + c_2λ_2^mK_2 + … + c_nλ_n^mK_n  (5)
= λ_1^m [c_1K_1 + c_2(λ_2/λ_1)^mK_2 + … + c_n(λ_n/λ_1)^mK_n].  (6)
Since |λ_1| > |λ_i|, i = 2, 3, …, n, as m → ∞ we have (λ_i/λ_1)^m → 0, so
A^mX_0 ≈ λ_1^m c_1K_1.  (7)
However, a constant multiple of an eigenvector is also an eigenvector, so X_m = A^mX_0 is an approximation to a dominant eigenvector. Since AK = λK implies AK · K = λK · K, the dominant eigenvalue can be approximated by
λ = (AK · K)/(K · K),  (8)
which is called the Rayleigh quotient.
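A minimal power-method sketch using the Rayleigh quotient (8); the matrix and initial guess are illustrative assumptions, not the slide's data.

```python
import numpy as np

# Illustrative symmetric matrix and initial guess (assumed, not the slide's).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
X = np.array([1.0, 1.0])

for _ in range(7):            # X_m = A^m X_0, as in (1)-(2)
    X = A @ X

lam = (A @ X) @ X / (X @ X)   # Rayleigh quotient (8)
print(lam)                    # ~ 4.618, the dominant eigenvalue of A
```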
Example 2
For the initial guess X_0 given on the slide, we compute the iterates X_i = AX_{i−1}.
Example 2 (2)
The slide's table lists the iterates X_i for i = 3, 4, 5, 6, 7. It appears from it that the vectors are approaching scalar multiples of a dominant eigenvector.
Example 2 (3)
The remainder of this section is treated only briefly, since it is of less importance.
Scaling
In practice the iterates are scaled at each step, for example by dividing each X_m by its entry of largest absolute value, so that the entries stay bounded.
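A sketch of the scaled iteration just described, again with assumed data.

```python
import numpy as np

def scaled_power_method(A, x0, iterations=7):
    """Power iteration, dividing each iterate by its largest-magnitude entry."""
    x = x0.astype(float)
    for _ in range(iterations):
        x = A @ x
        x = x / np.abs(x).max()   # scaling step: entries stay in [-1, 1]
    return x

A = np.array([[4.0, 1.0],          # assumed data, as before
              [1.0, 3.0]])
print(scaled_power_method(A, np.array([1.0, 1.0])))  # ~ dominant eigenvector
```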
Example 3
Repeat the iterations of Example 2 using scaled-down vectors.
Solution
Starting from X_0, each product AX_{m−1} is divided by its entry of largest absolute value before the next iteration.
Example 3 (2)
We continue in this manner to construct a table of the scaled iterates X_i, i = 3, 4, 5, 6, 7. In contrast to the table in Example 2, it is apparent from this table that the vectors are approaching a fixed dominant eigenvector.
Method of Deflation
The procedure we shall consider next is a modification of the power method and is called the method of deflation. We will limit the discussion to the case where A is a symmetric matrix. Suppose λ_1 and K_1 are the dominant eigenvalue and a corresponding normalized eigenvector (K_1^T K_1 = 1) of a symmetric matrix A. Furthermore, suppose the eigenvalues of A are such that |λ_1| > |λ_2| > |λ_3| ≥ … ≥ |λ_n|. It can be proved that the matrix
B = A − λ_1 K_1 K_1^T
has eigenvalues 0, λ_2, λ_3, …, λ_n, so the power method applied to B approximates λ_2 and a corresponding eigenvector.
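A sketch of one deflation step under the assumptions above (A symmetric, K_1 normalized); the matrix is an illustrative stand-in, and np.linalg.eigh is used only to supply a clean dominant eigenpair.

```python
import numpy as np

# Illustrative symmetric matrix; eigh is used only to supply a clean
# normalized dominant eigenpair (lambda_1, K1) for the deflation step.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
w, V = np.linalg.eigh(A)           # eigenvalues in ascending order
lam1, K1 = w[-1], V[:, -1]         # dominant eigenvalue and unit eigenvector

B = A - lam1 * np.outer(K1, K1)    # B = A - lambda_1 K1 K1^T
print(np.linalg.eigvalsh(B))       # ~ [0, lambda_2]: lambda_1 has been removed
```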
8.12 Diagonalization

Diagonalizable Matrices
If there exists a matrix P such that P^{-1}AP = D is diagonal, then A is said to be diagonalizable.

THEOREM 8.30 Sufficient Condition for Diagonalizability
If an n × n matrix A has n linearly independent eigenvectors K_1, K_2, …, K_n, then A is diagonalizable.
Proof
Let P = (K_1, K_2, K_3) be the matrix whose columns are the eigenvectors of a 3 × 3 matrix A. Since the columns are linearly independent, P is nonsingular and P^{-1} exists, and
AP = (AK_1, AK_2, AK_3) = (λ_1K_1, λ_2K_2, λ_3K_3) = PD.
Thus, P^{-1}AP = D.
THEOREM 8.31 Criterion for Diagonalizability
An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.

THEOREM 8.32 Sufficient Condition for Diagonalizability
If an n × n matrix A has n distinct eigenvalues, it is diagonalizable.
Example 1
Diagonalize the matrix given on the slide.
Solution
The eigenvalues are λ = 1, 4. Using the same process as before, we find an eigenvector for each eigenvalue, take them as the columns of P, and then P^{-1}AP = D.
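A numerical sketch of diagonalization; since the slide's matrix was lost, the example below assumes a matrix chosen to have the same eigenvalues 1 and 4.

```python
import numpy as np

# Assumed matrix with eigenvalues 1 and 4, mirroring Example 1's values;
# the slide's actual matrix was not recoverable.
A = np.array([[2.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))               # diagonal matrix of the eigenvalues
```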
Example 2
Consider the matrix given on the slide. We compute its eigenvalues and corresponding eigenvectors.
Example 2 (2)
Now we form the matrix P whose columns are these eigenvectors and compute P^{-1}.
Example 2 (3)
Thus, P^{-1}AP = D.
Example 3
Consider the matrix given on the slide. We have λ = 5, 5. Since we can find only a single eigenvector, this matrix cannot be diagonalized.
Example 4
Consider the matrix given on the slide. We have λ = −1, 1, 1. For λ = −1 we obtain one eigenvector.
For λ = 1, we use the Gauss–Jordan method on the system (A − I)K = 0.
Example 4 (2)
We obtain two linearly independent eigenvectors for λ = 1, so in total we have three linearly independent eigenvectors and A is diagonalizable.
Example 4 (3)
Let P be the matrix with these eigenvectors as columns; then P^{-1}AP = D.
Orthogonally Diagonalizable
If there exists an orthogonal matrix P that diagonalizes A, then A is said to be orthogonally diagonalizable.

THEOREM 8.33 Criterion for Orthogonal Diagonalizability
An n × n matrix A can be orthogonally diagonalized if and only if A is symmetric.
Partial Proof
Assume an n × n matrix A can be orthogonally diagonalized. Then there exists an orthogonal matrix P such that P^{-1}AP = D, so A = PDP^{-1}. Since P is orthogonal, P^{-1} = P^T, so A = PDP^T. Then
A^T = (PDP^T)^T = PD^TP^T = PDP^T = A.
Thus A is symmetric.
Example 5
Consider the matrix from Example 4 of Sec 8.8. The eigenvectors found there, however, are not mutually orthogonal.
Example 5 (2)
Now redo the computation for the repeated eigenvalue. The eigenvector components satisfy k_1 + k_2 + k_3 = 0; choosing k_2 = 1, k_3 = 0 we get K_2, and choosing k_2 = 0, k_3 = 1 we get K_3. But we may instead choose them another way: k_2 = 1, k_3 = 1 and k_2 = 1, k_3 = −1.
Example 5 (3)
We obtain two entirely different but mutually orthogonal eigenvectors. Together with the remaining eigenvector, this yields an orthogonal set.
Example 5 (4)
Dividing each vector by its length, we obtain an orthonormal set.
Example 5 (5)
Then P is the orthogonal matrix whose columns are these orthonormal eigenvectors, and D = P^{-1}AP.
Example 5 (6)
This is verified from the product P^{-1}AP.
Quadratic Forms
An algebraic expression of the form
ax^2 + bxy + cy^2  (4)
is called a quadratic form. If we let X = (x, y)^T and
A = ( a    b/2
      b/2  c  ),
then (4) can be written as
X^T A X.  (5)
Note that A is symmetric.
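A quick check that X^T A X reproduces the quadratic form, using sample coefficients (here chosen to match Example 6 below).

```python
import numpy as np

# Coefficients of the quadratic form 2x^2 + 4xy - y^2 (as in Example 6).
a, b, c = 2.0, 4.0, -1.0
A = np.array([[a, b / 2],
              [b / 2, c]])

x, y = 1.3, -0.7
X = np.array([x, y])
print(X @ A @ X)                        # equals a x^2 + b x y + c y^2
print(a * x**2 + b * x * y + c * y**2)  # same value
```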
Example 6
Identify the conic section whose equation is 2x^2 + 4xy − y^2 = 1.
Solution
From (5) we have
X^T A X = 1,  (6)
where
X = (x, y)^T, A = ( 2   2
                    2  −1 ).
Example 6 (2)
For A we find the eigenvalues λ_1 = −2 and λ_2 = 3, with corresponding eigenvectors K_1 = (1, −2)^T and K_2 = (2, 1)^T, and K_1, K_2 are orthogonal. Moreover, dividing each by its length √5 gives an orthonormal set.
Example 6 (3)
Hence we have the orthogonal matrix
P = (1/√5) ( 1   2
            −2   1 ).
If we let X = PX′ where X′ = (X, Y)^T, then
X^T A X = X′^T (P^T A P) X′ = X′^T D X′.  (7)
Example 6 (4)
Using (7), (6) becomes X′^T D X′ = 1, or
−2X^2 + 3Y^2 = 1,
the equation of a hyperbola. See Fig 8.11.
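The principal-axes reduction of Example 6 can be verified numerically:

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [2.0, -1.0]])

eigvals, P = np.linalg.eigh(A)       # orthonormal eigenvectors of symmetric A
print(eigvals)                        # [-2.  3.]  ->  -2X^2 + 3Y^2 = 1, a hyperbola
print(np.round(P.T @ A @ P, 10))      # the diagonal matrix D of (7)
```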
Fig 8.11
8.13 Cryptography
Introduction
Cryptography is the study of secret writing, that is, of codes.

A Simple Code
Let the letters a, b, c, …, z be represented by the numbers 1, 2, 3, …, 26. A sequence of letters can then be written as a sequence of numbers, which we arrange into an m × n matrix M. Then we select a nonsingular m × m matrix A. The message sent is Y = AM, and the receiver recovers the original as M = A^{-1}Y.
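A sketch of this scheme with a hypothetical 2 × 2 key matrix A (chosen with determinant 1 so that A^{-1} has integer entries); the message "help" is an assumed example.

```python
import numpy as np

# Hypothetical 2x2 key with determinant 1, so A^{-1} has integer entries.
A = np.array([[1.0, 2.0],
              [1.0, 3.0]])

# Assumed message "help": h=8, e=5, l=12, p=16, arranged column by column.
M = np.array([[8.0, 12.0],
              [5.0, 16.0]])

Y = A @ M                         # encoded message sent
M_back = np.linalg.inv(A) @ Y     # receiver decodes with M = A^{-1} Y
print(np.round(M_back))           # recovers M exactly
```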
8.14 An Error-Correcting Code

Parity Encoding
Append an extra bit so that the total number of ones is even.
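A minimal sketch of parity encoding and checking:

```python
def parity_encode(word):
    """Append one bit so the total number of ones is even."""
    return word + [sum(word) % 2]

def parity_check(received):
    """True if the received word has an even number of ones (no error detected)."""
    return sum(received) % 2 == 0

C = parity_encode([1, 0, 0, 0, 1, 1])
print(C)                # [1, 0, 0, 0, 1, 1, 1]
print(parity_check(C))  # True
```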
Example 2
Encode (a) W = (1 0 0 0 1 1) and (b) W = (1 1 1 0 0 1).
Solution
(a) The word has three ones, so the extra bit is 1, making the number of ones 4 (even). The code word is then C = (1 0 0 0 1 1 1).
(b) The word has four ones, so the extra bit is 0, keeping the number of ones 4 (even). The encoded word is C = (1 1 1 0 0 1 0).
Fig 8.12
Example 3
Decode the following received words: (a) R = ( ) and (b) R = ( ).
Solution
(a) The number of ones is 4 (even), so we drop the last bit to obtain the decoded word ( ).
(b) The number of ones is 3 (odd); a parity error has occurred.
Hamming Code
In the Hamming (7,4) code, a word W = (w_1 w_2 w_3 w_4) is encoded as
C = (c_1 c_2 w_1 c_3 w_2 w_3 w_4),
where c_1, c_2, and c_3 denote the parity check bits.
Encoding
The check bits are computed from the word bits (mod 2):
c_1 = w_1 + w_2 + w_4, c_2 = w_1 + w_3 + w_4, c_3 = w_2 + w_3 + w_4.
Example 4
Encode the word W = ( ).
Solution
Compute the check bits c_1, c_2, c_3 from W and insert them in positions 1, 2, and 4 of the code word.
Decoding
For a received word R, the syndrome S = (s_1, s_2, s_3) is found by re-evaluating the parity checks on R. If S = 0, then R is a code word; otherwise S, read as a binary number, gives the position of the erroneous bit.
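A sketch of Hamming (7,4) encoding and syndrome decoding with the check bits in positions 1, 2, and 4, as described above:

```python
def encode(w):
    """Hamming (7,4): place check bits c1, c2, c3 in positions 1, 2, 4."""
    w1, w2, w3, w4 = w
    c1 = (w1 + w2 + w4) % 2
    c2 = (w1 + w3 + w4) % 2
    c3 = (w2 + w3 + w4) % 2
    return [c1, c2, w1, c3, w2, w3, w4]

def syndrome(r):
    """Re-evaluate the parity checks; the result, read in binary, is the
    position of the erroneous bit (0 means r is a code word)."""
    s1 = (r[0] + r[2] + r[4] + r[6]) % 2
    s2 = (r[1] + r[2] + r[5] + r[6]) % 2
    s3 = (r[3] + r[4] + r[5] + r[6]) % 2
    return 4 * s3 + 2 * s2 + s1

C = encode([1, 0, 1, 1])
R = C.copy()
R[5] ^= 1                  # corrupt bit 6 in transit
pos = syndrome(R)
print(pos)                 # 6: the corrupted position
R[pos - 1] ^= 1            # flip it back
print(R == C)              # True
```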
Example 5
Compute the syndrome of (a) R = ( ) and (b) R = ( ).
Solution
(a) Here S = 0, so we conclude that R is a code word. Dropping the check bits in positions 1, 2, and 4, we get the decoded message ( ).
Example 5 (2)
(b) Since S ≠ 0, the received message R is not a code word.
Example 6
The syndrome gives the position of the erroneous bit; changing that zero to a one gives the code word C = ( ). Dropping the first, second, and fourth bits from C, we arrive at the decoded message ( ).
8.15 Method of Least Squares
Example 2
If we have the data (1, 1), (2, 3), (3, 4), (4, 6), (5, 5) and want to fit the function f(x) = ax + b, then
a + b = 1
2a + b = 3
3a + b = 4
4a + b = 6
5a + b = 5.
Example 2 (2)
Writing the system as AX = Y, we let A be the 5 × 2 matrix with rows (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), X = (a, b)^T, and Y = (1, 3, 4, 6, 5)^T.
Example 2 (3)
The system is overdetermined, so we solve the normal equations A^T A X = A^T Y.
Example 2 (4)
We have AX = Y, and the best solution of X is
X = (A^T A)^{-1} A^T Y = (1.1, 0.5)^T.
For this line the sum of the squared errors is
E = (−0.6)^2 + (0.3)^2 + (0.2)^2 + (1.1)^2 + (−1.0)^2 = 2.7.
The fitted function is y = 1.1x + 0.5.
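The computation in Example 2 can be reproduced directly:

```python
import numpy as np

# Reproduce the least-squares line y = ax + b for the data of Example 2.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 3.0, 4.0, 6.0, 5.0])

A = np.column_stack([x, np.ones_like(x)])   # rows (x_i, 1)
X = np.linalg.inv(A.T @ A) @ A.T @ y        # X = (A^T A)^{-1} A^T Y
print(X)                                    # [1.1  0.5]

r = y - A @ X
print(r @ r)                                # 2.7, the sum of squared errors
```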
Fig 8.15
8.16 Discrete Compartmental Models
The General Two-Compartment Model
Fig 8.16
Discrete Compartmental Model
Fig 8.17
Example 1
See Fig 8.18. The initial amounts are 100, 250, and 80 for the three compartments. Each day:
For Compartment 1 (C1): 20% moves to C2 and 0% to C3, so 80% stays in C1.
For C2: 5% moves to C1 and 30% to C3, so 65% stays in C2.
For C3: 25% moves to C1 and 0% to C2, so 75% stays in C3.
Fig 8.18
Example 1 (2)
That is,
New C1 = 0.8C1 + 0.05C2 + 0.25C3
New C2 = 0.2C1 + 0.65C2 + 0C3
New C3 = 0C1 + 0.3C2 + 0.75C3.
We get the transfer matrix
T = ( 0.80  0.05  0.25
      0.20  0.65  0.00
      0.00  0.30  0.75 ).
Example 1 (3)
Then, with X_0 = (100, 250, 80)^T, one day later
X_1 = TX_0 = (112.5, 182.5, 135)^T.
Note: m days later, X_m = T^m X_0.
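A sketch reproducing Example 1's computation with the transfer matrix T reconstructed above:

```python
import numpy as np

T = np.array([[0.80, 0.05, 0.25],
              [0.20, 0.65, 0.00],
              [0.00, 0.30, 0.75]])
X0 = np.array([100.0, 250.0, 80.0])

print(T @ X0)                              # one day later: [112.5 182.5 135.]
print(np.linalg.matrix_power(T, 30) @ X0)  # X_30 = T^30 X0
print((T @ X0).sum())                      # 430.0: the total amount is conserved
```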
Thank You !