Exercise in the previous class (Apr. 26): For s = 110010111110001000110100101011101100 (|s| = 36), compute χ²-values of s for block lengths 1, 2, 3.


1 exercise in the previous class (Apr. 26)
For s = 110010111110001000110100101011101100 (|s| = 36), compute χ²-values of s for block lengths 1, 2, 3 and 4.
n = 1: 0 and 1 are each expected to appear 18 times. s contains |0| = 17 and |1| = 19, so χ² = 1²/18 + 1²/18 = 1/9.
n = 2: four patterns, each expected 4.5 times. s contains |00| = 5, |01| = 1, |10| = 6, |11| = 6, so χ² = 0.5²/4.5 + 3.5²/4.5 + 1.5²/4.5 + 1.5²/4.5 ≈ 3.78.
n = 3: χ² ≈ 2.67; n = 4: χ² ≈ 14.11.
Also: implement the linear congruential method as a computer program. See the Excel file on http://apal.naist.jp/~kaji/lecture/
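The χ²-values above can be sketched in Python (function and variable names are my own; the printed values match the slide). A toy linear congruential generator is included for the second exercise; its parameters a, c, m are illustrative only, not the ones from the lecture's Excel file.

```python
from collections import Counter

def chi_squared(s: str, n: int) -> float:
    """chi^2 of s split into non-overlapping blocks of length n."""
    blocks = [s[i:i + n] for i in range(0, len(s) - n + 1, n)]
    expected = len(blocks) / 2 ** n  # each of the 2^n patterns
    counts = Counter(blocks)
    # patterns that never occur still contribute expected^2 / expected
    return sum((counts.get(format(v, f"0{n}b"), 0) - expected) ** 2 / expected
               for v in range(2 ** n))

def lcg(seed, a=5, c=3, m=16):
    """Linear congruential generator x <- (a*x + c) mod m (toy parameters)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

s = "110010111110001000110100101011101100"
for n in (1, 2, 3, 4):
    print(n, round(chi_squared(s, n), 2))  # 0.11, 3.78, 2.67, 14.11
```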

2 chapter 3: coding for noisy communication

3 about this chapter
Coding techniques for finding and correcting errors.
Coding in Chapter 2: source coding (情報源符号化)
- works next to the information source
- gives a compact representation
Coding in this chapter: channel coding (通信路符号化)
- works next to the communication channel
- gives a representation which is robust against errors

4 two codings
(diagram) The sender side applies source coding (encode), optional protection (encrypt), and then channel coding (encode) before the channel; the receiver side reverses the chain: channel coding (decode), protection (decrypt), source coding (decode).

5 today's class
- motivation and overview: rephrase of the first-day introduction
- the models of communication channels: binary symmetric channel (BSC)
- elementary components for linear codes: (even) parity check code, horizontal and vertical parity check code

6 communication is erroneous
Real communication gives no guarantee that "sent information = received information":
- radio (wireless) communication: noise alters the signal waveform
- optical disk media (CD, DVD, ...): dirt and scratches obstruct correct reading
The difference between sent and received information is called an error. Errors may be reduced, but cannot be eliminated.

7 error correction in daily life
Three types of errors in our daily life:
- correctable: "take a train for Ikomo" → "... Ikoma"
- detectable: "He is from Naga city" → "Nara" or "Naha"?
- undetectable: "the zip code is 6300102" → ???
What makes the difference? Names of places (Ikoma, Nara, Naha, ...) are sparse (疎) in the set of strings, while zip codes (6300101, 6300102, 6300103, ...) are densely (密) packed.

8 trick of error correction and detection
For error correction, we need to create sparseness artificially (人工的に).
Phonetic codes: Alpha, Bravo, Charlie, ... (in Japanese, "a" as in "asahi", "i" as in "iroha", ...)
Plain characters (A, B, C, D, E, F, G, H, I, ...) are densely packed; phonetic codes (Alpha, Bravo, ...) are sparse.
To create the sparseness, we add redundant sequences, enlarge the space, and keep the representations apart from each other.

9 in a binary world
Assume that you want to send two binary bits.
Without redundancy, the possible messages 00, 01, 10, 11 fill the whole space: even if you receive "01", you cannot say whether "01" is the original data or the result of modification by errors.
With redundancy: 00 → 00000, 01 → 01011, 10 → 10101, 11 → 11110.
- encode: to add the redundancy
- codewords: the results of encoding (00000, 01011, 10101, 11110)
- code: the set of codewords

10 the principle of error correction
The sender and the receiver agree on the code in advance.
- sender: chooses one codeword c and sends it
- receiver: tries to estimate the sent codeword c' from the received data r, which can contain errors
Assumption: errors do not occur too frequently.
Under this assumption, error correction ≈ the search for the codeword c' nearest to r (e.g. 00010 is closer to 00000 than to 01011).
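The nearest-codeword search can be sketched as follows (a minimal illustration using the five-bit code from the previous slide; the function names are my own, and ties between equally distant codewords are broken arbitrarily here):

```python
# The code of slide 9: four codewords of length 5.
CODE = ["00000", "01011", "10101", "11110"]

def hamming(u: str, v: str) -> int:
    """Number of positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(r: str) -> str:
    """Estimate the sent codeword c' as the codeword nearest to r."""
    return min(CODE, key=lambda c: hamming(c, r))

print(decode("00010"))  # 00000 (the single error is corrected)
print(decode("01010"))  # 01011
```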

11 is it really good?
Assume that each symbol is delivered correctly with probability p = 0.9, using the code 00 → 00000, 01 → 01011, 10 → 10101, 11 → 11110.
Without coding: correct with probability 0.9² = 0.81, incorrect with probability 1 – 0.81 = 0.19.
With coding: correct = 0 or 1 errors among the five bits, so the probability is 0.9⁵ + ₅C₁ · 0.9⁴ · 0.1 ≈ 0.9185, and incorrect ≈ 1 – 0.9185 = 0.0815.
Good for this case, but not always... The construction of good codes is the main subject of this chapter.
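The two probabilities on this slide can be checked mechanically (the sum 0.9⁵ + 5 · 0.9⁴ · 0.1 evaluates to 0.91854):

```python
from math import comb

p = 0.9
without_coding = p ** 2                               # both bits must arrive intact
with_coding = p ** 5 + comb(5, 1) * p ** 4 * (1 - p)  # 0 or 1 error among 5 bits

print(round(without_coding, 4))  # 0.81
print(round(with_coding, 5))     # 0.91854
```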

12 the model of channels
A communication channel is a probabilistic model with one input and one output.
- the inputs and outputs are symbols (discrete channel)
- one output symbol is generated for each input symbol: no loss, no delay
- the output symbol is chosen probabilistically

13 example of channels: 1
binary symmetric channel (BSC, 二元対称通信路)
- inputs = outputs = {0, 1}
- input = 0 → output = 0 with prob. 1 – p, 1 with prob. p
- input = 1 → output = 0 with prob. p, 1 with prob. 1 – p
p is called the bit error probability of the channel.
The BSC is memoryless (記憶がない): errors occur independently; and stable (定常): the probability p does not change over time.

14 example of channels: 2
binary erasure channel (BEC, 二元消失通信路)
- inputs = {0, 1}, outputs = {0, 1, X}
- input = 0 → output = 0 with prob. 1 – p, X with prob. p
- input = 1 → output = 1 with prob. 1 – p, X with prob. p
X is a "placeholder" for the erased symbol.
(Another variation combines erasures and errors: input = 0 → output = 0 with prob. 1 – p – q, X with prob. p, 1 with prob. q, and symmetrically for input = 1.)
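A quick simulation illustrates both channel models (not from the slides; names and parameters are my own). Over many bits, the fraction of flipped bits in the BSC approaches the bit error probability p, and likewise for erasures in the BEC:

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with prob. p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

def bec(bits, p, rng):
    """Binary erasure channel: replace each bit by "X" with prob. p."""
    return ["X" if rng.random() < p else b for b in bits]

rng = random.Random(0)
sent = [rng.randint(0, 1) for _ in range(100_000)]
flip_rate = sum(a != b for a, b in zip(sent, bsc(sent, 0.1, rng))) / len(sent)
erase_rate = bec(sent, 0.1, rng).count("X") / len(sent)
print(flip_rate, erase_rate)  # both close to 0.1
```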

15 example of channels: 3
Channels with memory: there is correlation between occurrences of errors, e.g. a channel with "burst errors":
  011010101010010101011010110101010
  011010101011011010101010110101010
Unstable channels: the probabilistic behavior changes according to time (long-range radio communication, etc.)

16 channel coding: preliminary
We will consider channel coding such that a binary sequence (vector) of length k is encoded into a binary sequence (vector, codeword) of length n, where k < n and the code C is a subset of V_n.
- a sequence is sometimes written as a tuple: b1b2...bm = (b1, b2, ..., bm)
- computation is binary and component-wise: 001 + 101 = 100

17 good code?
Example: the vectors 00, 01, 10, 11 of V_2 are encoded into the code C = {000, 011, 101, 110} ⊂ V_3.
Which codes are good? → the class of linear codes (線形符号)

18 linear codes
- easy encoding
- (relatively) easy decoding (error detection/correction)
- mathematics helps in constructing good codes
- the performance evaluation is not too difficult
Most codes used today are linear codes. We study some examples of linear codes (today), and learn the general definition of linear codes (next time).

19 (even) parity code
The encoding of an (even) parity code (偶パリティ符号): given a vector (a1, a2, ..., ak) ∈ V_k, compute p = a1 + a2 + ... + ak, and let (a1, a2, ..., ak, p) be the codeword of (a1, a2, ..., ak).
When k = 3, the code C is: 000 → 0000, 001 → 0011, 010 → 0101, 011 → 0110, 100 → 1001, 101 → 1010, 110 → 1100, 111 → 1111 (e.g. for 011, p = 0 + 1 + 1 = 0).
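The encoding rule can be sketched directly (the function name is my own); listing all inputs for k = 3 reproduces the table on this slide:

```python
def parity_encode(bits):
    """Append the even-parity symbol p = a1 + ... + ak (mod 2)."""
    return bits + [sum(bits) % 2]

# the k = 3 code from the slide
table = []
for a in range(8):
    info = [(a >> 2) & 1, (a >> 1) & 1, a & 1]
    table.append("".join(map(str, parity_encode(info))))
print(table)
# ['0000', '0011', '0101', '0110', '1001', '1010', '1100', '1111']
```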

20 basic property of the parity code
- code length: n = k + 1
- a codeword consists of the original data itself (information symbols/bits) plus one added redundant symbol (parity symbol/bit) → a systematic code (組織符号)
- a vector v of length n is a codeword ⇔ v contains an even number of 1s

21 parity codes and errors
Even parity codes cannot correct errors, but they can detect any odd number of errors.
#errors = #differences between the sent and received vectors; e.g. sent vector (codeword) 0000 and received vector 0101 means we have two (2) errors.
- #errors = even → the received vector contains an even number of 1s (it looks like a codeword, so the errors go undetected)
- #errors = odd → the received vector contains an odd number of 1s (detected)
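Detection is just the evenness test on the received vector (a sketch; the names are my own): an odd number of flips makes the count of 1s odd, while an even number of flips makes the received vector look like a codeword again:

```python
def parity_check_ok(received):
    """A vector passes the even-parity check iff it has an even number of 1s."""
    return sum(received) % 2 == 0

sent = [0, 0, 1, 1]        # a codeword
one_error = [0, 1, 1, 1]   # one bit flipped
two_errors = [0, 1, 0, 1]  # two bits flipped

print(parity_check_ok(sent))        # True  (accepted)
print(parity_check_ok(one_error))   # False (error detected)
print(parity_check_ok(two_errors))  # True  (errors slip through)
```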

22 horizontal and vertical parity check code
The horizontal and vertical parity check code (2D parity code):
- place the information symbols in a rectangular form (長方形)
- add parity symbols in the horizontal and vertical directions
- reorder all symbols into a vector (codeword)
For k = 9, to encode (a1, a2, ..., a9), arrange the symbols in a 3×3 array and compute the row parities p1 = a1 + a2 + a3, p2 = a4 + a5 + a6, p3 = a7 + a8 + a9, the column parities q1 = a1 + a4 + a7, q2 = a2 + a5 + a8, q3 = a3 + a6 + a9, and the overall parity r = a1 + a2 + ... + a9.
codeword: (a1, a2, ..., a9, p1, p2, p3, q1, q2, q3, r)

23 example of encoding
Encoding 011100101: the rows 011, 100, 101 give p1 = 0, p2 = 1, p3 = 0; the columns give q1 = 0, q2 = 1, q3 = 0; and r = 1. The codeword is 0111001010100101 (information symbols followed by parity symbols).
2D codes are systematic codes. If k = ab, then the code length is n = ab + a + b + 1.
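The k = 9 encoding can be sketched as follows (my own function name); it reproduces the codeword computed on this slide:

```python
def encode_2d(a):
    """2D parity encoding for k = 9: a is a list of 9 bits in row-major order."""
    rows = [a[0:3], a[3:6], a[6:9]]
    p = [sum(row) % 2 for row in rows]                       # p1, p2, p3
    q = [sum(row[j] for row in rows) % 2 for j in range(3)]  # q1, q2, q3
    r = sum(a) % 2                                           # parity of everything
    return a + p + q + [r]

info = [int(b) for b in "011100101"]
print("".join(map(str, encode_2d(info))))  # 0111001010100101
```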

24 2D codes and errors
A 2D code can correct any one-bit error in a codeword:
- place the symbols of the received vector back into the rectangle
- count the number of 1s in each row and column
- if there is no error, all rows and columns contain an even number of 1s
- if there is a one-bit error, exactly one row and one column have an odd number of 1s, and their intersecting point is the position affected by the error
Example: if the information part 010110101 is received as 011110101, the first row and the third column have odd parity, so we correct the third bit.
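The correction procedure above can be sketched as follows (a simplified version that, for brevity, repairs only errors in the nine information positions; an error in a parity position shows up as a single odd row or column with no partner and is left alone here):

```python
def correct_2d(word):
    """word: 16 bits (a1..a9, p1..p3, q1..q3, r); fix a 1-bit information error."""
    a, p, q = word[:9], word[9:12], word[12:15]
    odd_rows = [i for i in range(3) if (sum(a[3*i:3*i+3]) + p[i]) % 2 == 1]
    odd_cols = [j for j in range(3) if (a[j] + a[j+3] + a[j+6] + q[j]) % 2 == 1]
    fixed = list(word)
    if len(odd_rows) == 1 and len(odd_cols) == 1:
        fixed[3 * odd_rows[0] + odd_cols[0]] ^= 1  # flip the intersecting point
    return fixed

codeword = [int(b) for b in "0111001010100101"]
received = list(codeword)
received[2] ^= 1  # a one-bit error in a3
print(correct_2d(received) == codeword)  # True
```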

25 two-bit errors
What happens if two bits are affected by errors?
If the real errors lie in different rows and different columns, two rows and two columns look strange, but two different error patterns produce exactly the same strange rows and columns, so we cannot decide which has happened.
We know something went wrong, but cannot spot the errors.

26 two-bit errors, another case
If the two real errors lie in the same row, no row looks strange but two columns do.
Again we know something went wrong, but cannot spot the errors.

27 additional remark
Do we need the parity of parity, the symbol r in the codeword (a1, ..., a9, p1, p2, p3, q1, q2, q3, r)?
Even if we don't have r, we can still correct any one-bit error, but some two-bit errors cause a problem.

28 additional remark (cont'd)
We expect 2D codes to detect all two-bit errors. If we don't use the parity of parity, however, some two-bit errors are not detected: the received vector can lie at distance one from a different codeword, so nearest-codeword decoding silently decodes it to that wrong codeword.

29 summary of today's class
- motivation and overview: rephrase of the first-day introduction
- the models of communication channels: binary symmetric channel (BSC)
- elementary components for linear codes: (even) parity check code, horizontal and vertical parity check code

30 exercise
In the example of page 11, determine the range of the probability p under which the encoding results in better performance.

