8.13 Cryptography
Introduction: secret writing means code.
A simple code: let the letters a, b, c, …, z be represented by the numbers 1, 2, 3, …, 26. A sequence of letters can then be written as a sequence of numbers. Arrange these numbers into an m × n matrix M and select a nonsingular m × m matrix A. The message sent is Y = AM, and the receiver recovers M = A⁻¹Y.
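A minimal sketch of this scheme in Python, assuming an arbitrary nonsingular 2 × 2 key matrix chosen only for illustration (it is not one from the text):

```python
import numpy as np

# Letters a..z -> 1..26, arranged column-wise into an m x n matrix M,
# encoded as Y = A M and recovered as M = A^{-1} Y.
A = np.array([[2, 1],
              [1, 1]])                      # det = 1, so A is nonsingular

def encode(text, A):
    nums = [ord(ch) - ord('a') + 1 for ch in text]
    m = A.shape[0]
    while len(nums) % m:                    # pad with 'z' (26) if needed
        nums.append(26)
    M = np.array(nums).reshape(-1, m).T     # one column per block of m letters
    return A @ M                            # Y = A M

def decode(Y, A):
    M = np.linalg.inv(A) @ Y                # M = A^{-1} Y
    nums = np.rint(M).astype(int).T.ravel() # round off floating-point error
    return ''.join(chr(n - 1 + ord('a')) for n in nums)

Y = encode("sendhelp", A)
print(decode(Y, A))                         # -> sendhelp
```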
8.14 An Error Correcting Code
Parity encoding: add an extra bit to the word so that the total number of ones is even.
Example 2 Encode (a) W = (1 0 0 0 1 1) and (b) W = (1 1 1 0 0 1).
Solution (a) The extra bit must be 1 to make the number of ones 4 (even). The code word is then C = (1 0 0 0 1 1 1). (b) The extra bit must be 0, since the number of ones is already 4 (even). The encoded word is C = (1 1 1 0 0 1 0).
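A one-line sketch of parity encoding in Python (the function name is my own):

```python
def parity_encode(word):
    # Append a parity bit so that the total number of ones is even.
    return word + [sum(word) % 2]

print(parity_encode([1, 0, 0, 0, 1, 1]))   # -> [1, 0, 0, 0, 1, 1, 1]
print(parity_encode([1, 1, 1, 0, 0, 1]))   # -> [1, 1, 1, 0, 0, 1, 0]
```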
Fig 8.12
Example 3 Decode (a) R = (1 1 0 0 1 0 1) and (b) R = (1 0 1 1 0 0 0).
Solution (a) The number of ones is 4 (even), so we drop the last bit to get (1 1 0 0 1 0). (b) The number of ones is 3 (odd), so a parity error has occurred.
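The matching parity check, again as a small sketch with a name of my own choosing:

```python
def parity_decode(received):
    # If the number of ones is even, drop the parity bit; otherwise report an error.
    if sum(received) % 2 != 0:
        raise ValueError("parity error")
    return received[:-1]

print(parity_decode([1, 1, 0, 0, 1, 0, 1]))   # -> [1, 1, 0, 0, 1, 0]
# parity_decode([1, 0, 1, 1, 0, 0, 0]) raises ValueError (three ones, odd)
```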
Hamming (7,4) Code: a four-bit message W = (w1 w2 w3 w4) is encoded as the seven-bit code word C = (c1, c2, w1, c3, w2, w3, w4), where c1, c2, and c3 denote the parity check bits.
Encoding: the check bits are chosen so that (mod 2)
c1 = w1 + w2 + w4
c2 = w1 + w3 + w4
c3 = w2 + w3 + w4
Example 4 Encode the word W = (1 0 1 1).
Solution With w1 = 1, w2 = 0, w3 = 1, w4 = 1, the check bits are c1 = (1 + 0 + 1) mod 2 = 0, c2 = (1 + 1 + 1) mod 2 = 1, c3 = (0 + 1 + 1) mod 2 = 0, so the code word is C = (0 1 1 0 0 1 1).
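A sketch of the encoder, assuming the check-bit equations and bit layout reconstructed above (the standard (7,4) Hamming arrangement):

```python
def hamming_encode(w):
    # Code word layout: (c1, c2, w1, c3, w2, w3, w4), all arithmetic mod 2.
    w1, w2, w3, w4 = w
    c1 = (w1 + w2 + w4) % 2
    c2 = (w1 + w3 + w4) % 2
    c3 = (w2 + w3 + w4) % 2
    return [c1, c2, w1, c3, w2, w3, w4]

print(hamming_encode([1, 0, 1, 1]))   # -> [0, 1, 1, 0, 0, 1, 1]
```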
Decoding: for a received word R, compute the syndrome S = HRᵀ (mod 2), where the parity check matrix is
H =
  0 0 0 1 1 1 1
  0 1 1 0 0 1 1
  1 0 1 0 1 0 1
If S = 0, then R is a code word; otherwise S, read as a binary number, gives the position of the erroneous bit.
Example 5 Compute the syndrome of (a) R = (1 1 0 1 0 0 1) and (b) R = (1 0 0 1 0 1 0).
Solution (a) S = HRᵀ = (0 0 0)ᵀ, so we conclude that R is a code word. Dropping the check bits from (1 1 0 1 0 0 1), we get the decoded message (0 0 0 1).
Example 5 (2) (b) S = HRᵀ = (0 1 1)ᵀ. Since S ≠ 0, the received message R is not a code word.
Example 6 The syndrome in Example 5(b), read as the binary number 011 = 3, indicates that the third bit of R is in error. Changing this zero to a one gives the code word C = (1 0 1 1 0 1 0). Dropping the first, second, and fourth bits from C, we arrive at the decoded message (1 0 1 0).
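A sketch of syndrome decoding under the same assumptions (standard (7,4) parity check matrix H, check bits in positions 1, 2, and 4):

```python
import numpy as np

# Columns of H are 1..7 in binary, so a nonzero syndrome, read as a
# binary number, is the position of the erroneous bit.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def hamming_decode(r):
    r = np.array(r)
    s = H @ r % 2                         # syndrome S = H R^T (mod 2)
    pos = 4 * s[0] + 2 * s[1] + s[2]      # 0 means "no error detected"
    if pos:
        r[pos - 1] ^= 1                   # flip the bad bit
    return [int(b) for b in (r[2], r[4], r[5], r[6])]   # drop check bits 1, 2, 4

print(hamming_decode([1, 1, 0, 1, 0, 0, 1]))   # -> [0, 0, 0, 1]  (Example 5a)
print(hamming_decode([1, 0, 0, 1, 0, 1, 0]))   # -> [1, 0, 1, 0]  (Examples 5b/6)
```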
8.15 Method of Least Squares
Example 2 Given the data (1, 1), (2, 3), (3, 4), (4, 6), (5, 5), we want to fit the function f(x) = ax + b. Then
a + b = 1
2a + b = 3
3a + b = 4
4a + b = 6
5a + b = 5
Example 2 (2) Writing the system in matrix form AX = Y, we have
A =
  1 1
  2 1
  3 1
  4 1
  5 1
X = (a, b)ᵀ,  Y = (1, 3, 4, 6, 5)ᵀ
Example 2 (3) Then AᵀA = [[55, 15], [15, 5]] and AᵀY = (68, 19)ᵀ, so the normal equations (AᵀA)X = AᵀY read 55a + 15b = 68 and 15a + 5b = 19.
Example 2 (4) We have AX = Y, and the best (least-squares) solution is X = (AᵀA)⁻¹AᵀY = (1.1, 0.5)ᵀ. For this line the sum of the squared errors is E = 2.7. The fitted function is y = 1.1x + 0.5.
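A quick numerical check of this fit with NumPy:

```python
import numpy as np

# Solve the normal equations X = (A^T A)^{-1} A^T Y for the data
# (1,1), (2,3), (3,4), (4,6), (5,5) and the model f(x) = ax + b.
x = np.array([1, 2, 3, 4, 5])
y = np.array([1, 3, 4, 6, 5])
A = np.column_stack([x, np.ones_like(x)])    # columns: x and 1

X = np.linalg.inv(A.T @ A) @ A.T @ y
print(X)                                     # -> [1.1 0.5]
print(np.sum((A @ X - y) ** 2))              # sum of squared errors, about 2.7
```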
Fig 8.15
8.16 Discrete Compartmental Models The General Two-Compartment Model
Fig 8.16
Discrete Compartmental Model
Fig 8.17
Example 1 See Fig 8.18. The initial amounts in the three compartments are 100, 250, and 80.
For Compartment 1 (C1): 20% moves to C2 and 0% to C3, so 80% stays in C1.
For C2: 5% moves to C1 and 30% to C3, so 65% stays in C2.
For C3: 25% moves to C1 and 0% to C2, so 75% stays in C3.
Fig 8.18
Example 1 (2) That is,
new C1 = 0.8 C1 + 0.05 C2 + 0.25 C3
new C2 = 0.2 C1 + 0.65 C2 + 0 C3
new C3 = 0 C1 + 0.3 C2 + 0.75 C3
so the transfer matrix is
T =
  0.80 0.05 0.25
  0.20 0.65 0.00
  0.00 0.30 0.75
Example 1 (3) Then one day later,
Y = TX0 = (112.5, 182.5, 135)ᵀ.
Note: m days later, Y = TᵐX0.
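A short NumPy sketch of Example 1 and of the m-day formula (m = 5 below is just an illustrative choice):

```python
import numpy as np

# Transfer matrix and initial state from Example 1; each column of T sums
# to 1, so the total amount (430) is conserved from day to day.
T = np.array([[0.80, 0.05, 0.25],
              [0.20, 0.65, 0.00],
              [0.00, 0.30, 0.75]])
X0 = np.array([100, 250, 80])

print(T @ X0)                                 # one day later: [112.5 182.5 135. ]
print(np.linalg.matrix_power(T, 5) @ X0)      # five days later: Y = T^5 X0
```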
Example 2
Example 2 (2)
Example 2 (3)
For the symmetric matrix we have the eigenvalues λ = −9, −9, 9. Recall that if A is an n × n symmetric matrix, the eigenvectors corresponding to distinct (different) eigenvalues are orthogonal.
Note that K3 · K1 = K3 · K2 = 0, but K1 · K2 = −4 ≠ 0. Using the Gram–Schmidt method: V1 = K1 and V2 = K2 − ((K2 · V1)/(V1 · V1)) V1. Now we do have an orthogonal set and can normalize it.
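A minimal Gram–Schmidt sketch in NumPy; the vectors K1 and K2 below are illustrative placeholders, not the ones from the slide (those were not reproduced in the text):

```python
import numpy as np

def gram_schmidt(vectors):
    # Subtract the projections onto the unit vectors found so far, then normalize.
    basis = []
    for k in vectors:
        v = k - sum((k @ b) * b for b in basis)
        basis.append(v / np.linalg.norm(v))
    return basis

K1 = np.array([1.0, 2.0, 0.0])
K2 = np.array([1.0, 0.0, 1.0])
Q1, Q2 = gram_schmidt([K1, K2])
print(Q1 @ Q2)                 # ~0: the normalized vectors are orthogonal
```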