DIGITAL COMMUNICATION: Error Correction
A.J. Han Vinck
Position of Error Control Coding

[Figure: uncoded system: k input bits -> signal generator -> channel -> detector -> k output bits. Coded system: k input bits -> ECC encoder -> n bits -> signal generator -> channel -> detector/decoder -> k output bits; combining coding and signal generation gives coded modulation.]
Encoding

Replace a message of k information bits by a unique n-bit word, called a code word.
The collection of 2^k code words is called a CODE.
Error control code with rate k/n

message -> encoder (code word of length n) -> channel -> decoder -> estimate

There are 2^k code words of length n; the code book contains all of them.
A pictorial view

Of the 2^n possible vectors of length n, only 2^k are code words.
Decoder

Compare the received word with all possible code words; decode to the code word with the minimum number of differences ("most likely").
Example

Compare the received word against every code word and count the differences; the best guess is the code word with only 1 difference.
We have some problems

Mapping from information to code words:
- generation of code words (mutually far apart)
- storing the code book (2^k code words of length n)

Decoding:
- compare a received word with all possible code words
Definitions

Hamming distance between x and y: d_H := d(x, y) is the number of positions where x_i ≠ y_i.
Minimum distance of a code C: d_min = min { d(x, y) | x ∈ C, y ∈ C, x ≠ y }.
Hamming weight of a vector x: w(x) := d(x, 0) is the number of positions where x_i ≠ 0.
Example

Hamming distance: d(1001, 0111) = 3
Minimum distance of {101, 011, 110}: 2
Hamming weight: w( ) = 4

Hamming was a famous scientist at Bell Labs and the inventor of the Hamming code.
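The definitions can be checked mechanically; a short Python sketch over the first two numbers on this slide (the weight example uses a vector of our own choosing, since the slide's vector is not legible):

```python
# Hamming distance, minimum distance and weight, checked on the
# slide's own examples.

def d(x, y):
    return sum(a != b for a, b in zip(x, y))

def w(x):
    return d(x, "0" * len(x))          # weight = distance to the all-zero word

print(d("1001", "0111"))               # 3
code = ["101", "011", "110"]
print(min(d(x, y) for x in code for y in code if x != y))   # 2
print(w("0111"))                       # 3 (illustrative vector)
```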
Performance

A code with minimum distance d_min is capable of correcting t errors if d_min ≥ 2t + 1.

Proof: if t errors occur, the received word is at distance ≤ t from the transmitted code word, while (since d_min ≥ 2t + 1) every incorrect code word has at least t + 1 differences with the received word.
Picture

[Figure: code words A and B at 2t + 1 differences from each other; a sphere of t differences around A and a sphere of t differences around B do not overlap.]
LINEAR CODES

Binary codes are called linear iff the component-wise modulo-2 sum of two code words is again a code word. Consequently, the all-zero word is a code word.
LINEAR CODE GENERATOR

The code words are linear combinations of the rows of a binary generator matrix G with dimensions k x n; G must have rank k!

Example: for k = 3, n = 6, the information vector (1,0,1) adds rows 1 and 3 of G:
(1,0,1) G = (0, 0, 1, 0, 1, 1)
Systematic codes

Let in general the matrix G be written as G = [ I_k P ], e.g. with k = 3, n = 6.

The code generated is:
- linear, systematic
- has minimum distance 3
- the efficiency of the code is 3/6
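Encoding with a systematic generator is just a GF(2) matrix product; a Python sketch, where the parity part P is an assumption (the slide's matrix is not legible), chosen so that d_min = 3:

```python
# c = x G over GF(2): add (mod 2) the rows of G selected by the info bits.

def encode(x, G):
    c = [0] * len(G[0])
    for bit, row in zip(x, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

# G = [I_3 | P]; this P is assumed, picked so that every nonzero
# code word has weight >= 3 (minimum distance 3).
G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

print(encode([1, 0, 1], G))   # [1, 0, 1, 1, 0, 1] - info bits appear unchanged
```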
Example (optimum)

Single parity check code: d_min = 2, k = n - 1,
G = [ I_{n-1} P ] with P the all-ones column.
All code words have even weight!
Example (optimum)

Repetition code: d_min = n, k = 1, G = [ 1 1 ... 1 ]
Equivalent codes

Any linear code generator in non-systematic form can be brought into "systematic form" G_sys = [ I_k P ] by elementary row operations and elementary column operations. Note: the elementary operations have an inverse.

Homework: give an example for k = 4 and n = 7.
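The reduction can be sketched as Gaussian elimination over GF(2), using row additions plus a column swap whenever a pivot is missing; the small non-systematic G below is an assumed example:

```python
def to_systematic(G):
    """Bring a rank-k generator matrix into [I_k | P] form using
    elementary row operations and, if needed, column swaps."""
    G = [row[:] for row in G]
    k, n = len(G), len(G[0])
    for i in range(k):
        if G[i][i] == 0:
            # try a row swap first, then a column swap
            for r in range(i + 1, k):
                if G[r][i]:
                    G[i], G[r] = G[r], G[i]
                    break
            else:
                c = next(c for c in range(i + 1, n) if G[i][c])
                for row in G:
                    row[i], row[c] = row[c], row[i]
        for r in range(k):               # clear column i in all other rows
            if r != i and G[r][i]:
                G[r] = [a ^ b for a, b in zip(G[r], G[i])]
    return G

G = [[1, 1, 1, 0],
     [0, 1, 1, 1]]
print(to_systematic(G))   # [[1, 0, 0, 1], [0, 1, 1, 1]]
```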
Bounds on minimum distance (Hamming, Singleton)

Linear codes have a systematic equivalent G, hence:
- minimum Hamming weight d_min ≤ n - k + 1 (Singleton bound)
- (# code words) x (# correctable error patterns) ≤ 2^n (Hamming bound)

Homework: show that Hamming codes satisfy the Hamming bound with equality!
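A sketch of the homework claim for the (7,4) Hamming code: a single-error-correcting code can correct the zero pattern and the n single-bit error patterns, and for these parameters the bound is met with equality:

```python
from math import comb

n, k, t = 7, 4, 1                                   # (7,4) Hamming code, corrects t = 1 error
patterns = sum(comb(n, i) for i in range(t + 1))    # 1 + 7 = 8 correctable patterns
print(2**k * patterns, 2**n)                        # 128 128 -> equality: the code is perfect
```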
Bounds on minimum distance (Gilbert)

Start: select a code word from the 2^n possible words.
1. Remove all words at distance < d_min from the selected code word.
2. Select one of the remaining words as the next code word.
3. Go to 1 unless no possibilities are left.

RESULT: M ≥ 2^n / Σ_{i=0}^{d_min - 1} (n choose i)

Homework: show that log M / n ≥ 1 - h(2p) for d_min - 1 = 2t ≈ 2pn; p < ¼.
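The greedy construction above can be run directly; a Python sketch (the parameters n = 5, d_min = 3 are an arbitrary choice for illustration):

```python
from itertools import product

def gilbert_code(n, dmin):
    """Greedily keep every word at distance >= dmin from all chosen code words."""
    code = []
    for cand in product((0, 1), repeat=n):
        if all(sum(a != b for a, b in zip(cand, c)) >= dmin for c in code):
            code.append(cand)
    return code

code = gilbert_code(5, 3)
print(len(code))   # 4 code words; the bound guarantees at least 32/16 = 2
```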
Plot

[Figure: rate R = log_2 M / n versus p = t/n, from 0 to ¼; the Hamming bound 1 - h(p), the Gilbert bound 1 - h(2p), and the Singleton bound.]
Property

The set of distances from all code words to the all-zero code word is the same as to any other code word.

Proof: d(x, y) = d(x ⊕ x, y ⊕ x) = d(0, z), where by linearity z = y ⊕ x is also a code word.
Thus!

Determining the minimum distance of a linear code is equivalent to determining the minimum Hamming weight of its nonzero code words. The complexity of this operation is proportional to the number of code words, 2^k.
Example

Consider the code words:
- 00000
- 01101
- 10011
- 11110

Homework: determine the minimum distance.
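One way to check such a homework by machine: the code above is linear, so the minimum distance equals the minimum weight over the nonzero code words.

```python
code = ["00000", "01101", "10011", "11110"]

# minimum distance = minimum weight over the nonzero code words (linearity)
dmin = min(c.count("1") for c in code if "1" in c)
print(dmin)   # 3
```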
Linear code generator (polynomial form)

I(X) represents the k-bit info vector (i_0, i_1, ..., i_{k-1});
g(X) is a binary polynomial of degree n - k.

THEN: the code vector C of length n can be described by C(X) = I(X) g(X), all operations modulo 2.
Example: k = 4, n = 7 and g(X) = 1 + X + X^3

For the information vector (1,0,1,0):
C(X) = (1 + X^2)(1 + X + X^3) = 1 + X + X^2 + X^5, i.e. (1,1,1,0,0,1,0).

The encoding procedure in (k x n) matrix form, with the rows of G the shifts of g(X):

G = 1 1 0 1 0 0 0
    0 1 1 0 1 0 0
    0 0 1 1 0 1 0
    0 0 0 1 1 0 1

c = I * G
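The product C(X) = I(X) g(X) modulo 2 is a convolution of the coefficient vectors (low-order coefficient first); a Python sketch that reproduces the slide's example:

```python
def poly_mul_gf2(a, b):
    """Multiply two binary polynomials given as coefficient lists (mod 2)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[i + j] ^= bj
    return c

g = [1, 1, 0, 1]        # g(X) = 1 + X + X^3
info = [1, 0, 1, 0]     # I(X) = 1 + X^2
print(poly_mul_gf2(info, g))   # [1, 1, 1, 0, 0, 1, 0], matching the slide
```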
Implementation with a shift register

The encoder for g(X) = 1 + X + X^3 can be implemented with a shift register: the information bits i_{k-1}, ..., i_2, i_1, i_0 are shifted in, and the register taps correspond to the nonzero coefficients of g(X).

Homework: give a description of the shift control to obtain the result.
Some remarks

Generators for different k and n:
- are constructed using mathematics
- are listed in many textbooks

What remains is the decoding!
Hamming codes

Minimum distance 3.

Construction: G = [ I_k P ], where the rows of P are all m-tuples of Hamming weight > 1, with m = n - k.

Check that the minimum distance is 3! Give the efficiency of the code.
Example: k = 4, n = 7

G = 1 0 0 0 0 1 1
    0 1 0 0 1 0 1
    0 0 1 0 1 1 0
    0 0 0 1 1 1 1
Syndrome decoding

Let G = [ I_k P ]; then construct H^T as P stacked on top of I_{n-k}.

For all code words c = xG: c H^T = x G H^T = 0.

Hence, for a received noisy vector (c ⊕ n):
(c ⊕ n) H^T = c H^T ⊕ n H^T = n H^T =: S
Example

With G = [ I_k P ] and H^T as above: for an information vector x, the code word c = xG satisfies c H^T = 0, while a received word c ⊕ n gives [c ⊕ n] H^T = n H^T = S.

Obvious fast decoder: precalculate at the receiver the syndromes of all correctable error patterns.
In system form

1. Calculate the syndrome [c ⊕ n] H^T = S.
2. Look up S in the table of precalculated syndromes to obtain the error estimate n*.
3. Output (c ⊕ n) ⊕ n*: when n = n*, the errors cancel, since n ⊕ n* = 0.

Homework: choose parameters that can be implemented.
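The whole loop, sketched for the systematic (7,4) Hamming code (the test word and the error position are assumptions for illustration):

```python
# Syndrome decoding for the systematic (7,4) Hamming code, G = [I_4 | P].
P = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
HT = P + [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # H^T: P stacked on I_3, 7 x 3

def syndrome(r):
    # S = r H^T over GF(2)
    return tuple(sum(r[i] & HT[i][j] for i in range(7)) % 2 for j in range(3))

# precalculate the syndrome of every correctable (single-bit) error pattern
table = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[syndrome(e)] = e

def decode(r):
    s = syndrome(r)
    if s == (0, 0, 0):
        return r                         # no detectable error
    e = table[s]                         # estimated error pattern n*
    return [a ^ b for a, b in zip(r, e)]

c = [1, 0, 1, 1, 0, 1, 0]   # a code word: info 1011 plus its parity bits
r = c[:]
r[2] ^= 1                    # one channel error
print(decode(r) == c)        # True
```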
Reed-Solomon Codes (CD, DVD)

Structure: symbols of m bits; k information symbols, n - k check symbols.

Properties:
- minimum distance = n - k + 1 (symbols)
- length 2^m - 1 (symbols)
General remarks

The general problem is the decoding. RS codes can be decoded using:
- Euclid's algorithm
- the Berlekamp-Massey algorithm
Why error correction?

Systems with errors can be made almost error free; CD and DVD would not work without RS codes.
Why error correction?

In ARQ systems, system collapse can be postponed!

[Figure: throughput in % (100% and k/n % marked) versus channel error probability from 0 to 1.]
For Additive White Gaussian Noise channels

Error probability p ≈ e^{-Es/No}:
- Es is the energy per transmitted symbol
- No is the one-sided noise power spectral density

For an uncoded system: p ≈ e^{-Eb/No}.
For a coded system with minimum distance d: nEs = kEb and thus p_c ≈ e^{-d (k/n) Eb/No}.

CONCLUSION: make the coding gain d k/n > 1.
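A quick numeric check of the conclusion, using the (7,4) Hamming code (d = 3) as an assumed example:

```python
# Coding gain d * k / n: the coded error exponent is this factor times
# the uncoded exponent Eb/No.
d, k, n = 3, 4, 7        # (7,4) Hamming code, d_min = 3
gain = d * k / n
print(gain > 1)          # True: about 1.71, so the coded exponent improves
```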