How to cope with last hop impairments? Part 1: Error Detection and Correction
Victor S. Frost
Dan F. Servey Distinguished Professor
Electrical Engineering and Computer Science, University of Kansas
2335 Irving Hill Dr., Lawrence, Kansas
All material copyright 2006 Victor S. Frost, All Rights Reserved
How to cope with last hop impairments?
Techniques for coping with noise:
– Forward error detection/correction coding
– Automatic Repeat reQuest (ARQ)
– Co-existence with, or modifications to, end-to-end protocols
Techniques for coping with multipath fading: fading mitigation techniques, e.g.,
– Equalizers
– Diversity
– RAKE receivers
– OFDM
Techniques for coping with noise
Forward error detection/correction coding:
– Detection of bit errors is needed to enable reliable communications
– Requires the addition of bits (redundancy) to packets
If the time between packet errors is long, then detect errors and request retransmission (see ARQ).
If the time between packet errors is short, then it often "pays" to add enough bits to correct the errored bits, e.g., on:
– Wireless links
– Fiber links
Real-time applications do not have time for retransmissions.
Error Detection
The encoder takes an input data frame of k bits and produces an output data frame, or code word c_j, of n bits, with n > k.
r = n - k = redundant bits added to detect errors.
Error Detection
Example: repetition with n = 2k. Repeat the frame: k redundant bits, so the code word has 2k bits.
Error Detection
Example: Simple Parity Check
Even parity: the number of '1's per block is even.
Odd parity: the number of '1's per block is odd.
Redundancy ratio = (k + 1)/k.
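To make the parity mechanics concrete, here is a minimal Python sketch; the 7-bit block is an arbitrary example value:

```python
# Even parity on one block: append a bit so the count of 1s is even.
def add_even_parity(bits):
    parity = sum(bits) % 2            # 0 if the count of 1s is already even
    return bits + [parity]

def check_even_parity(codeword):
    """True if the number of 1s is even, i.e., no error detected."""
    return sum(codeword) % 2 == 0

block = [1, 0, 1, 1, 0, 1, 0]         # k = 7 data bits (example value)
cw = add_even_parity(block)           # n = k + 1 bits are transmitted
assert check_even_parity(cw)          # no errors: parity checks out
cw[3] ^= 1                            # one bit error: detected
assert not check_even_parity(cw)
cw[5] ^= 1                            # a second error: parity misses it
assert check_even_parity(cw)
```

The last two lines show the weakness quantified on the next slides: any even number of errors goes undetected.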
Error Detection
The performance criterion for error detection codes is the probability of an undetected error. For a single parity check on an n-bit block with bit error probability p, an error goes undetected when an even number (two or more) of bits are in error:
P_ud = sum over even j >= 2 of C(n, j) p^j (1 - p)^(n - j) ≈ C(n, 2) p^2 for small p.
Error Detection
Example: Find the time between undetected errors for:
n = 1000 bits
p = 10^-6 (1 error in a million bits)
Rate = 100 Mb/s
For a simple parity check:
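A worked solution, using the small-p approximation P_ud ≈ C(n, 2)p^2 from the previous slide:

```latex
% Undetected-error probability for the parity-checked 1000-bit frame:
\[
  P_{ud} \approx \binom{1000}{2}\,(10^{-6})^2
        = 499{,}500 \times 10^{-12}
        \approx 5 \times 10^{-7}
\]
% Frames per second at 100 Mb/s, and the mean time between undetected errors:
\[
  \frac{100\times 10^{6}\ \text{b/s}}{1000\ \text{b/frame}} = 10^{5}\ \text{frames/s},
  \qquad
  T \approx \frac{1}{10^{5} \times 5\times 10^{-7}} \approx 20\ \text{s}
\]
```

So a simple parity check lets an undetected error through roughly every 20 seconds at this rate, which motivates the stronger codes that follow.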
Hamming distance
Hamming distance = the number of bits in which two code words differ:
– XOR the code words and count the 1s
– Example: two code words with Hamming distance d = 3
For d = 3 it takes 3 bit errors to map one code word into another; in general it takes d bit errors to map one code word into another.
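In code, the XOR-and-count computation looks like this (the two code words are assumed example values with distance 3):

```python
# Hamming distance via XOR: differing bit positions show up as 1s.
def hamming_distance(a, b):
    """Number of bit positions in which integers a and b differ."""
    return bin(a ^ b).count("1")

c1, c2 = 0b10110, 0b01100             # example pair; c1 ^ c2 = 0b11010
assert hamming_distance(c1, c2) == 3
```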
A Geometric Interpretation of Error-Correcting Codes
Hamming distance
There are 2^n possible code words; only 2^k are valid.
Among all the 2^k valid code words, some pair c_i and c_j has the minimum Hamming distance d; the complete code is characterized by this minimum distance d.
To detect m errors you need a code with a Hamming distance of d = m + 1; clearly m errors cannot map a valid code word into another valid code word.
To correct m errors you need a code with a Hamming distance of d = 2m + 1.
– Decoding algorithm (see the sketch below):
Let r be the received code word.
Calculate the Hamming distance between r and each valid code word.
Declare the valid code word with the minimum Hamming distance, e.g., c_i, to be the one sent.
– So m errors will result in a received code word that is "closer" to c_i than to all other valid code words.
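A minimal sketch of this minimum-distance decoding rule; the two-word codebook is an assumed toy example with d = 3, so it corrects m = 1 error:

```python
# Nearest-codeword (minimum Hamming distance) decoding.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received, codebook):
    """Return the valid code word closest to `received`."""
    return min(codebook, key=lambda cw: hamming(received, cw))

codebook = ["000", "111"]                 # toy code, minimum distance d = 3
assert decode("010", codebook) == "000"   # one error: corrected
assert decode("110", codebook) == "111"   # one error the other way
```

Exhaustive search over all 2^k valid code words is exactly the "checking everything" that the structured codes and decoders later in this lecture avoid.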
[Figure: the 2^k data words map to valid code words, a small subset of all 2^n possible received code words]
Block Codes
Representation and manipulation of message blocks.
Block Codes: Example
c_1 = d_1
c_2 = d_2
c_3 = d_3
c_4 = d_1 + d_3
c_5 = d_2 + d_3
c_6 = d_1 + d_2   (addition mod 2)
Let d = (0, 0, 1); then c = (0, 0, 1, 1, 1, 0).
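The same encoding written as a matrix operation c = dG over GF(2); G below is assembled directly from the parity equations on this slide:

```python
import numpy as np

# (6,3) systematic code: columns 4-6 of G implement c4 = d1 + d3,
# c5 = d2 + d3, c6 = d1 + d2 (all mod 2).
G = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 1, 0]])

def encode(d):
    return (np.array(d) @ G) % 2

print(encode([0, 0, 1]))              # -> [0 0 1 1 1 0], as on the slide
```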
Block Codes
G defines a linear systematic (n, k) code: the code word of n bits consists of the k message bits followed by r = n - k check bits, i.e., G = [I_k | P] with c = dG.
Block Codes
Problem: given a received message, determine whether an error has occurred.
Model the received word as r = c + e, where e is the error vector; e.g., e = (0, 0, 1, 0, 1, 0) represents errors in bits 3 and 5.
Block Codes
Define a parity check matrix H = [P^T | I_(n-k)], constructed so that cH^T = 0 for every valid code word c.
Block Codes
Syndrome: s = rH^T = (c + e)H^T = eH^T.
If s = 0 the received word is a valid code word; a nonzero syndrome indicates an error.
Block Codes: Example (syndrome calculation)
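A syndrome-check sketch for the same (6,3) code, assuming the systematic construction H = [P^T | I3] built from the parity part of the G used above:

```python
import numpy as np

# Parity part P of the (6,3) generator matrix G = [I3 | P].
P = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
H = np.hstack([P.T, np.eye(3, dtype=int)])   # H = [P^T | I3]

def syndrome(r):
    return (np.array(r) @ H.T) % 2           # s = r H^T

c = [0, 0, 1, 1, 1, 0]         # valid code word from the earlier example
print(syndrome(c))             # -> [0 0 0]: no error detected
r = list(c); r[2] ^= 1         # flip bit 3
print(syndrome(r))             # nonzero: the error is detected
```

For a single-bit error the syndrome equals the corresponding column of H, which is what makes syndrome-based correction possible.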
Binary Cyclic Codes
A subclass of linear block codes is the binary cyclic codes: if c_k is a code word in a binary cyclic code, then any cyclic shift of it is also a code word.
Binary Cyclic Codes
Advantages:
– Ease of syndrome calculation
– Simple and efficient encoding/decoding structure
Cyclic codes restrict the form of the G matrix; the cyclic nature of these codes indicates that there is an underlying algebraic pattern.
Binary Cyclic Codes
The code is defined by a generator polynomial g(x) whose coefficients are binary: g_i = 0, 1.
Polynomial Coding
The code has a binary generator polynomial of degree n - k:
g(x) = x^(n-k) + g_(n-k-1) x^(n-k-1) + … + g_2 x^2 + g_1 x + 1
The k information bits define a polynomial of degree k - 1:
i(x) = i_(k-1) x^(k-1) + i_(k-2) x^(k-2) + … + i_1 x + i_0
Define the code word polynomial of degree n - 1 (n bits = k information bits + n - k check bits):
b(x) = x^(n-k) i(x) + r(x)
Modified from: Leon-Garcia & Widjaja, Communication Networks.
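A division sketch of this encoding: r(x) is the remainder of x^(n-k) i(x) divided by g(x), so that b(x) is a multiple of g(x). The generator g(x) = x^3 + x + 1 and the 4 information bits are assumed example values:

```python
# Polynomials as integers: bit i is the coefficient of x^i.
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode(info, g, n_minus_k):
    shifted = info << n_minus_k             # x^(n-k) * i(x)
    return shifted | poly_mod(shifted, g)   # append r(x) as check bits

b = encode(0b1001, 0b1011, 3)               # i(x) = x^3 + 1, g(x) = x^3 + x + 1
print(bin(b))                               # -> 0b1001110
assert poly_mod(b, 0b1011) == 0             # valid code words divide evenly
```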
Binary Cyclic Codes: Construction of G from g(x)
1) Use g(x) to form the k-th row of G.
2) Use the k-th row to form the (k-1) row by a shift left, i.e., xg(x).
3) If the (k-1) row is not in standard form, then add the k-th row to the shifted row, i.e., the (k-1) row becomes xg(x) + g(x).
4) Continue to form each row from the row below it.
5) If the (k-j) row is not in standard form, then add g(x) to the shifted row.
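A sketch of this construction for an assumed (7,4) code with g(x) = x^3 + x + 1; rows are n-bit integers, and "standard form" means row j has a lone 1 in its own message position:

```python
n, k = 7, 4
g = 0b0001011                        # g(x) = x^3 + x + 1 forms the k-th row

def message_bits(row):
    return row >> (n - k)            # the leftmost k (message) positions

rows = [g]                           # build upward from the k-th row
for j in range(1, k):
    row = rows[0] << 1               # shift left: multiply the row above by x
    if message_bits(row) != 1 << j:  # stray 1 in the message field?
        row ^= g                     # add g(x) to restore standard form
    rows.insert(0, row)

for r in rows:                       # prints the rows of G, top to bottom
    print(format(r, "07b"))
```

The printed rows (1000101, 0100111, 0010110, 0001011) form G = [I4 | P] for a systematic (7,4) cyclic code.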
Binary Cyclic Codes: Standard generator polynomials
Widely used standard generator polynomials include:
– CRC-12: g(x) = x^12 + x^11 + x^3 + x^2 + x + 1
– CRC-16: g(x) = x^16 + x^15 + x^2 + 1
– CRC-CCITT: g(x) = x^16 + x^12 + x^5 + 1
– CRC-32 (IEEE 802): g(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
Convolutional Codes
(The convolutional coding material is based on the complextoreal.com tutorial; see References.)
Example: u_i = input bits, v_i = output bits; 1 input bit produces 3 output bits, giving a rate 1/3 code.
The output bits are calculated from the current input bit and the register contents, as specified by the generator polynomials:
g_1(x) = x^2 + x + 1, taps (1, 1, 1)
g_2(x) = x + 1, taps (0, 1, 1)
g_3(x) = x^2 + 1, taps (1, 0, 1)
When the next input bit u_2 arrives, the contents of the register shift right by 1 and a new set of 3 outputs is calculated.
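A minimal sketch of this rate 1/3 encoder; the tap convention (current bit first, then the two previous bits) is an assumption of the sketch:

```python
# Rate 1/3 convolutional encoder: 3 output bits per input bit,
# taps g1 = (1,1,1), g2 = (0,1,1), g3 = (1,0,1) over a 3-bit window.
G = [(1, 1, 1), (0, 1, 1), (1, 0, 1)]

def conv_encode(bits, taps=G, m=3):
    state = [0] * (m - 1)            # shift register, most recent bit first
    out = []
    for u in bits:
        window = [u] + state         # u_t, u_(t-1), u_(t-2)
        out.extend(sum(b & t for b, t in zip(window, g)) % 2 for g in taps)
        state = window[:-1]          # shift: the oldest bit falls out
    return out

print(conv_encode([1, 0, 1, 1]))     # 4 input bits -> 12 output bits
```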
Convolutional Codes
The properties of the code are governed by:
– (n, k, m)
– The generator polynomials
Convolutional Codes
Specification of convolutional codes: (n, k, m)
– k = number of input bits
– n = number of output bits
– For k = 1: m = maximum number of input bits used to calculate an output bit (k > 1 is treated later)
– The example above is a (3, 1, 3) code
– Note that a coded bit is a function of several past input bits; the length of this memory is represented by the constraint length L = k(m - 1)
– In the example: L = 1(3 - 1) = 2
– The number of states the coder can be in is then 2^L
Convolutional Codes
For k > 1 the coder can be represented as m stages of k-bit registers (inputs u_1 … u_k) feeding n mod-2 combiners (outputs v_1 … v_n).
[Figure: general k-input, n-output convolutional coder with m register stages]
Convolutional Codes
Example: note that the output bits depend on the current k inputs plus the k(m - 1) previous input bits, so L = k(m - 1).
For k > 1: m = L/k + 1.
For this example L = 6 and k = 3, so m = 3.
Convolutional Codes
Systematic convolutional codes: the k input bits appear unchanged among the n output bits.
Convolutional Codes
Example:
– n = 2, k = 1, m = 4: a (2, 1, 4) code
– L = k(m - 1) = 3
– Number of states = 2^3 = 8 (000 … 111)
Convolutional Codes
Find the output sequence for an input of 1 followed by 0, assuming the encoder starts in state 000.
The output sequence combines:
– the bits produced by the first 1 as it enters the register
– the bits produced by shifting in the following 0
Convolutional Codes
Find the output sequence for a 0 input, assuming the encoder starts in state 000: the response to a 0 from the all-zero state is all zeros.
The response to a 1, given an initial state of 000, is called the impulse response.
The output for any input can now be found by convolving the input with the impulse response (hence the name convolutional codes).
Convolutional Codes
Shift and add: the output for any input sequence is the mod-2 sum of time-shifted copies of the impulse response.
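A sketch of the shift-and-add view, reusing the same rate 1/3 taps as the encoder sketch above; it checks that XOR-ing shifted copies of the impulse response reproduces the encoder output:

```python
G = [(1, 1, 1), (0, 1, 1), (1, 0, 1)]   # same taps as the encoder sketch

def conv_encode(bits, taps=G, m=3):
    state, out = [0] * (m - 1), []
    for u in bits:
        window = [u] + state
        out.extend(sum(b & t for b, t in zip(window, g)) % 2 for g in taps)
        state = window[:-1]
    return out

def shift_and_add(bits, n=3):
    impulse = conv_encode([1] + [0] * (len(bits) - 1))  # response to a lone 1
    total = [0] * (len(bits) * n)
    for t, u in enumerate(bits):
        if u:                                # add a copy delayed by t symbols
            for j in range((len(bits) - t) * n):
                total[t * n + j] ^= impulse[j]
    return total

msg = [1, 0, 1, 1]
assert shift_and_add(msg) == conv_encode(msg)   # identical output either way
```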
Convolutional Codes
Graphical view of the encoding process: the trellis.
[Figure: trellis with states on the vertical axis; each branch is labeled input bit / output bits]
After L input bits (L = 3 here) the trellis is fully populated, i.e., all possible transitions are included; the trellis repeats after that.
Convolutional Codes
Finding the output for a given input: trace the corresponding path through the trellis and read off the output bits on each branch.
[Figure: highlighted path through the trellis and the resulting output sequence]
Convolutional Codes
Decoding approaches:
– Compare the received sequence to all permissible sequences and pick the "closest" (Hamming distance): hard decision
– Pick the sequence with the highest correlation: soft decision
Problem: given a received message of length s bits, decode without "checking" all 2^s code words.
– Sequential decoding (Fano)
– Maximum likelihood (Viterbi)
Convolutional Codes
Sequential decoding:
– Moves forward and backward in the trellis
– Keeps track of results and can backtrack
Continue the example with the (2, 1, 4) code:
– The last three input bits are used to flush the final bit out of the register; these are called flush bits or tail bits
– Assume the received bits contain errors
Convolutional Codes
Sequential decoding:
– Based on channel statistics, set an "error threshold" = 3
– The decoder starts in state 000 and looks at the first 2 received bits, 01
– Note that starting in state 000 the only possible outputs are 00 and 11
– The decoder knows there is an error, but not the position of the error
Convolutional Codes
It randomly assumes that a 0 was transmitted and selects the path corresponding to output 00 in the trellis.
Set error count = 1.
Convolutional Codes
Now at point 2: the incoming bits are 11, so assume a 1 was transmitted.
Convolutional Codes
Now at point 3: the incoming bits are 01, so there is another error; increment the error count (error count = 2) and randomly assume a 0 was transmitted.
Convolutional Codes
Now at point 4: the incoming bits are 11, so there is another error; increment the error count (error count = 3). Too many errors, so backtrack.
Convolutional Codes
Back at point 3: taking the other branch, the incoming bits 01 still produce an error, so backtrack again to point 2.
Convolutional Codes
Back at point 2: point 2 was correct (taking the other path would cause an error), so backtrack to point 1 and assume a 1 was sent.
Now, from point 1, assuming a 1 was sent, the forward path through the trellis does not encounter any further errors.
Convolutional Codes
The correct path through the trellis.
[Figure: decoded path through the trellis]
Convolutional Codes
Viterbi decoding (a decoder sketch follows the step-by-step example below):
– Assumes infrequent, random errors, i.e., Prob(consecutive bit errors) < Prob(single bit error)
– For a given received sequence length: computes a metric (e.g., Hamming distance for hard decision) for each path and makes decisions based on that metric
– All paths through the trellis are followed until two paths converge on the same state
– The path with the higher metric survives (and is called the survivor)
– Branch metrics are added along each path, and the path with the largest total metric is the winner
Convolutional Codes
Continue the example with the (2, 1, 4) code:
– The last three input bits are used to flush the final bit out of the register; these are called flush bits or tail bits
– Assume the received bits contain an error and arrive as 2-bit tuples
– The decoder starts in state 000
Convolutional Codes
Step 1: Two paths are possible; compute the metric for each path and go on.
Convolutional Codes
Step 2: Now there are 4 paths.
Metric = M(X, Y) = number of bit positions in which X and Y are the same; the received prefix is compared against each path's output, e.g., M(0111, 1111), M(0111, 0000), M(0111, 0011), M(0111, 1100).
Convolutional Codes
Step 3: Now all 8 states are covered.
Convolutional Codes
Step 4: Now there are multiple paths into each state; the metric is calculated for each path and the largest survives.
Convolutional Codes
Step 4 (continued): After discarding the losing paths.
Convolutional Codes
Step 5: Continue; note some ties. Keep both paths when the metrics are equal.
Convolutional Codes
Step 6: Continue.
Convolutional Codes
Step 7: All of the input has been used and there is a winner (M = 13 beats M = 8); the states visited along the winning path give the decoded sequence.
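The walk-through above can be condensed into a short hard-decision Viterbi sketch. The (2,1,4) generator polynomials are not reproduced in the text, so this assumes a smaller rate 1/2 code with taps (1,1,1) and (1,0,1) and 4 states; the metric counts agreeing bits and the higher metric survives (ties are broken arbitrarily here rather than keeping both paths):

```python
TAPS = [(1, 1, 1), (1, 0, 1)]             # assumed rate 1/2 code, L = 2
N_STATES = 4                               # 2^L trellis states

def step(state, u):
    """Output bits and next state; `state` holds the last 2 input bits."""
    window = [u, (state >> 1) & 1, state & 1]
    out = [sum(b & t for b, t in zip(window, g)) % 2 for g in TAPS]
    return out, ((u << 1) | (state >> 1)) & (N_STATES - 1)

def viterbi(received, n=2):
    NEG = float("-inf")
    metric = [0] + [NEG] * (N_STATES - 1)  # decoder starts in state 00
    paths = [[] for _ in range(N_STATES)]
    for t in range(0, len(received), n):
        r = received[t:t + n]
        new_metric, new_paths = [NEG] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == NEG:
                continue                   # state not reachable yet
            for u in (0, 1):
                out, ns = step(s, u)
                m = metric[s] + sum(a == b for a, b in zip(out, r))
                if m > new_metric[ns]:     # higher metric is the survivor
                    new_metric[ns], new_paths[ns] = m, paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[max(range(N_STATES), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0]                   # trailing 0s flush the register
tx, s = [], 0
for u in msg:
    out, s = step(s, u)
    tx.extend(out)
rx = list(tx); rx[3] ^= 1                  # inject one channel bit error
assert viterbi(rx) == msg                  # the single error is corrected
```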
Convolutional Codes
Punctured codes:
– Take a rate 1/n code and do not send some of the output bits
– Example: a rate 2/3 code obtained from a rate 1/2 code by not sending the circled output bits
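A puncturing sketch: delete coded bits according to a repeating pattern. Keeping 3 of every 4 output bits of a rate 1/2 mother code yields rate 2/3; the particular pattern below is an assumed example:

```python
PATTERN = [1, 1, 1, 0]            # per 4 coded bits: 1 = send, 0 = puncture

def puncture(coded):
    return [b for i, b in enumerate(coded) if PATTERN[i % len(PATTERN)]]

def depuncture(received, total_len):
    """Re-insert erasures (None) at punctured slots before decoding."""
    it = iter(received)
    return [next(it) if PATTERN[i % len(PATTERN)] else None
            for i in range(total_len)]

coded = [1, 1, 0, 1, 0, 0, 1, 0]   # 8 coded bits from 4 input bits (rate 1/2)
tx = puncture(coded)               # 6 bits on the air: rate 2/3
assert len(tx) == 6
assert depuncture(tx, len(coded))[3] is None   # punctured slot marked erased
```

The receiver treats the punctured positions as erasures, so the same mother-code decoder can be reused at the higher rate.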
Convolutional Codes
For an interactive example see: /cov/cov_html.html#mainstart
Errors
Random: codes usually assume that bit errors are statistically independent.
Burst: however, fading and other effects induce burst errors, i.e., strings of consecutive errors.
Interleaving makes burst errors look like random errors.
Interleaving
Write into memory by rows; read out of memory by columns.
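A block-interleaver sketch of this write-by-rows, read-by-columns idea (the 3x4 geometry is an arbitrary example):

```python
# Write the stream into a rows-by-cols matrix by rows, read it out by columns.
def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    return interleave(bits, cols, rows)   # the inverse transposes back

data = list(range(12))                    # stand-ins for 12 coded bits
tx = interleave(data, rows=3, cols=4)     # -> [0, 4, 8, 1, 5, 9, ...]
assert deinterleave(tx, rows=3, cols=4) == data
```

A burst of up to `rows` consecutive channel errors in tx lands in `rows` different rows after de-interleaving, i.e., it is spread across several code words as isolated, random-looking errors.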
Interleaving
Increases delay: the entire block of data must be read in and buffered before it can be sent.
Used in CD players:
– Specification: correct 90 ms of errors
– At 44 ksamples/s and 16 bits/sample: approximately 4000 consecutive bit errors
References
– A. Leon-Garcia and I. Widjaja, Communication Networks, McGraw-Hill.
– Convolutional coding tutorial: http://complextoreal.com/chapters/convo.pdf
– S. Lin and D. J. Costello, Error Control Coding: Fundamentals and Applications, Prentice-Hall, 1983.