Wireless Mesh Networks


1 Wireless Mesh Networks
Anatolij Zubow: Coding and Error Control

2 Contents
Error Detection
Block Error Correction Codes
Convolutional Codes
Automatic Repeat Request

3 Coping with Data Transmission Errors
Error detection codes
Detect the presence of an error
Automatic repeat request (ARQ) protocols
A block of data with an error is discarded
The transmitter retransmits that block of data
Error correction codes, or forward error correction (FEC) codes
Designed to detect and correct errors

4 Error Detection Probabilities
We assume that data are transmitted as a sequence of bits (a frame)
Definitions:
Pb: probability of a single bit error (BER)
P1: probability that a frame arrives with no bit errors
P2: while using error detection, the probability that a frame arrives with one or more undetected errors
P3: while using error detection, the probability that a frame arrives with one or more detected bit errors but no undetected bit errors

5 Error Detection Probabilities
With no error detection:
P1 = (1 - Pb)^F, P2 = 1 - P1, P3 = 0
Assuming that the probability that any bit is in error (Pb) is constant and independent for each bit
F = number of bits per frame
P2 is the residual error rate
ISDN example: for frame lengths of this order, even a modest BER leaves a noticeable residual error rate
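The frame probabilities above can be evaluated directly. A quick sketch; the values Pb = 1e-6 and F = 1000 bits are illustrative assumptions, not the slide's exact ISDN numbers:

```python
# Frame error probabilities with no error detection (illustrative values).
Pb = 1e-6   # single-bit error probability (BER), assumed value
F = 1000    # bits per frame, assumed value

P1 = (1 - Pb) ** F   # frame arrives with no bit errors
P2 = 1 - P1          # residual error rate: frame has one or more errors

print(f"P1 = {P1:.6f}, P2 = {P2:.2e}")
```

Even with a BER of one in a million, roughly one frame in a thousand arrives in error at this frame length.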

6 Error Detection Process
Transmitter:
For a given frame, an error-detecting code (check bits) is calculated from the data bits
The check bits are appended to the data bits
Receiver:
Separates the incoming frame into data bits and check bits
Calculates check bits from the received data bits
Compares the calculated check bits against the received check bits
A detected error occurs on a mismatch

7 Error Detection Process

8 Parity Check
Simplest error detection scheme
A parity bit is appended to a block of data
Even parity: the added bit ensures an even number of 1s
Odd parity: the added bit ensures an odd number of 1s
Example, the 7-bit character 1110001 (four 1s):
Even parity: 11100010
Odd parity: 11100011
Problems?
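Computing the parity bit takes only a line or two; this sketch (the function name is mine) appends the bit to a bit string:

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append a parity bit so the total number of 1s is even (or odd)."""
    ones = bits.count("1")
    parity = ones % 2 if even else (ones + 1) % 2
    return bits + str(parity)

print(add_parity("1110001"))              # even parity -> 11100010
print(add_parity("1110001", even=False))  # odd parity  -> 11100011
```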

9 Parity Codes
Single Bit Parity: detects single-bit errors
Two-Dimensional Bit Parity: detects and corrects single-bit errors
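Two-dimensional parity arranges the data in a grid and adds one parity bit per row and per column; a single flipped bit then shows up as exactly one failing row and one failing column, which locates it. A sketch, with my own function names and even parity assumed:

```python
def two_d_parity(rows):
    """rows: list of equal-length lists of 0/1 bits. Returns (row_par, col_par)."""
    row_par = [sum(r) % 2 for r in rows]
    col_par = [sum(c) % 2 for c in zip(*rows)]
    return row_par, col_par

def locate_error(rows, row_par, col_par):
    """Return (row, col) of a single-bit error, or None if parity matches."""
    cur_rows, cur_cols = two_d_parity(rows)
    bad_r = [i for i, (a, b) in enumerate(zip(cur_rows, row_par)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cur_cols, col_par)) if a != b]
    if bad_r and bad_c:
        return bad_r[0], bad_c[0]
    return None
```

Flipping one data bit after computing the parities makes `locate_error` return exactly that bit's coordinates, so the receiver can correct it.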

10 Cyclic Redundancy Check (CRC)
Most common and most powerful error-detecting code
Can detect errors on large chunks of data
Low overhead, and more robust than a parity bit
Requires the use of a "code polynomial", e.g. x^2 + 1
Transmitter:
For a k-bit block, the transmitter generates an (n-k)-bit frame check sequence (FCS)
The resulting frame of n bits is exactly divisible by a predetermined number (the code polynomial)
Receiver:
Divides the incoming frame by the predetermined number (code polynomial)
If there is no remainder, assumes no error

11 Cyclic Redundancy Check
Procedure:
1. Let r be the degree of the code polynomial. Append r zero bits to the end of the transmitted bit string. Call the entire bit string S(x).
2. Divide S(x) by the code polynomial using modulo-2 division.
3. Subtract the remainder from S(x) using modulo-2 subtraction. The result is the checksummed message.

12 Generating a CRC Example
Message: 1011 = x^3 + x + 1
Code polynomial: x^2 + 1 (101)
Step 1: Compute S(x)
r = 2, so append two zeros: S(x) = 101100

13 Generating a CRC Example (cont’d)
Step 2: Modulo-2 divide:
101100 ÷ 101 = 1001, remainder 01

14 Generating a CRC Example (cont’d)
Step 3: Modulo-2 subtract the remainder from S(x) (modulo-2 subtraction is XOR):
101100 - 01 = 101101, the checksummed message

15 Decoding a CRC Procedure
1. Let n be the length of the checksummed message in bits.
2. Divide the checksummed message by the code polynomial using modulo-2 division. If the remainder is zero, no error is detected.

16 Decoding a CRC Example
Checksummed message: 101101 (n = 6)
101101 ÷ 101 = 1001, remainder 0: no error detected
Original message (if there are no errors): 1011
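The generate-and-check procedure above is easy to express with XOR-based long division; this sketch (function names are mine) reproduces the worked example with message 1011 and polynomial 101:

```python
def mod2_remainder(bits: str, poly: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    bits = list(bits)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i] == "1":                       # divisor "goes into" this position
            for j, p in enumerate(poly):
                bits[i + j] = str(int(bits[i + j]) ^ int(p))
    return "".join(bits[-(len(poly) - 1):])      # last r bits are the remainder

def crc_encode(msg: str, poly: str) -> str:
    # Step 1: append r zeros; Step 2: divide; Step 3: XOR in the remainder,
    # which for a zero-padded message just means appending the remainder.
    rem = mod2_remainder(msg + "0" * (len(poly) - 1), poly)
    return msg + rem

def crc_check(frame: str, poly: str) -> bool:
    """True if no error is detected (remainder is zero)."""
    return int(mod2_remainder(frame, poly), 2) == 0

print(crc_encode("1011", "101"))     # -> 101101
print(crc_check("101101", "101"))    # no error
print(crc_check("101001", "101"))    # error detected
```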

17 Decoding a CRC Another Example
When a bit error occurs, there is a high probability that the result is no longer an even multiple of the code polynomial, so errors can usually be detected.
Received message: 101001
101001 ÷ 101 = 1000, remainder 1: error detected

18 Choosing a CRC polynomial
The longer the polynomial, the smaller the probability of an undetected error
Common standard polynomials:
(1) CRC-12: x^12 + x^11 + x^3 + x^2 + x + 1
(2) CRC-16: x^16 + x^15 + x^2 + 1
(3) CRC-CCITT: x^16 + x^12 + x^5 + 1
(4) CRC-32: x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
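For byte-oriented data, the CRC-CCITT polynomial above (x^16 + x^12 + x^5 + 1, i.e. 0x1021) is usually computed bit-serially or with a lookup table. A minimal bit-serial sketch of one common variant (initial value 0xFFFF, no bit reflection, often called CRC-16/CCITT-FALSE); the parameter choices are an assumption, as the slide only names the polynomial:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-serial CRC-16 with polynomial 0x1021, MSB first."""
    for byte in data:
        crc ^= byte << 8                 # feed the next byte into the high bits
        for _ in range(8):
            # Shift left; if a 1 falls off the top, XOR in the polynomial.
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))    # standard check input
```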

19 Wireless Transmission Errors
Error detection requires retransmission (see ARQ)
Detection alone is inadequate for wireless applications:
The error rate on a wireless link can be high, resulting in a large number of retransmissions
Propagation delay can be long (e.g. a satellite link) compared to the transmission time of a single frame
Solution: enable the receiver to correct errors in an incoming transmission on the basis of the bits in that transmission (error correction codes)

20 Block Error Correction Codes
Transmitter:
A forward error correction (FEC) encoder maps each k-bit block into an n-bit codeword
The codeword is transmitted, as an analog signal for wireless transmission
Channel:
The signal is subject to noise, which may produce errors in the signal
Receiver:
The incoming signal is demodulated
The block is passed through an FEC decoder; 4 possible outcomes (see next)

21 Forward Error Correction Process

22 FEC Decoder Outcomes No errors present
No errors present: the codeword produced by the decoder matches the original codeword
Decoder detects and corrects bit errors
Decoder detects but cannot correct bit errors; reports an uncorrectable error
Decoder detects no bit errors, though errors are present (a rare event)

23 Block Code Principles
Hamming distance d(v1, v2): for two n-bit binary sequences, the number of differing bits
E.g. v1 = 011011, v2 = 110001, d(v1, v2) = 3
(n, k) block code: there are 2^k valid codewords out of a total of 2^n possible codewords
Redundancy: ratio of redundant bits to data bits, (n-k)/k
Code rate: ratio of data bits to total bits, k/n; a measure of the additional bandwidth required
Coding gain: the reduction in the required Eb/N0 to achieve a specified BER with an error-correcting coded system
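The Hamming distance in the definition above is a one-liner to compute; the function name is mine:

```python
def hamming_distance(v1: str, v2: str) -> int:
    """Number of bit positions in which two equal-length sequences differ."""
    assert len(v1) == len(v2)
    return sum(a != b for a, b in zip(v1, v2))

print(hamming_distance("011011", "110001"))  # the slide's example -> 3
```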

24 Block Code Design considerations
For given n and k, we would like the largest possible value of dmin
The codeword should be easy to encode and decode
Extra bits (n-k) should be small, for minimum overhead
Extra bits (n-k) should be large, to reduce the error rate
(The last two goals conflict, so code design is a trade-off)

25 Hamming Code
Designed to correct single-bit errors with less overhead than 2D parity; also detects double-bit errors
Family of (n, k) block error-correcting codes with parameters:
Block length: n = 2^m - 1
Number of data bits: k = 2^m - m - 1
Number of check bits: n - k = m, with m >= 3
Minimum distance: dmin = 3, where dmin = min[d(wi, wj)], i != j
Single-error-correcting (SEC) code
SEC double-error-detecting (SEC-DED) code

26 Hamming Code Process
Encoding: k data bits + (n-k) check bits
Decoding: compares the received (n-k) check bits with the recalculated (n-k) check bits using XOR
The resulting (n-k)-bit word is called the syndrome
The syndrome value ranges between 0 and 2^(n-k) - 1
Each bit of the syndrome indicates a match (0) or conflict (1) in that bit position

27 Calculating a Hamming Code
Procedure:
Place the message bits in the non-power-of-two positions of the codeword
Build a table listing the binary representation of each of the message bit positions
Calculate the check bits
This gives the arrangement of the parity check bits p1 ... pk and the data bits d1 ... dN-k in a codeword (c1 ... cN)

28 Hamming Code Example
Message to be sent: 1011
Position: 1 2 3 4 5 6 7
Codeword: ? ? 1 ? 0 1 1
The positions 2^n (1, 2, 4) hold the check bits; the message bits fill positions 3, 5, 6, 7

29 Hamming Code Example
Message to be sent: 1011
Position: 1 2 3 4 5 6 7 (positions 2^n hold the check bits)
Codeword: ? ? 1 ? 0 1 1
Binary representation of the message bit positions:
3 = 011, 5 = 101, 6 = 110, 7 = 111

30 Hamming Code Example
Message to be sent: 1011 (codeword so far: ? ? 1 ? 0 1 1)
Starting with the 2^0 position:
Look at the positions with a 1 in their 2^0 (rightmost) bit: 3 = 011, 5 = 101, 7 = 111
Count the number of 1s in the corresponding message bits: bits 3, 5, 7 = 1, 0, 1, i.e. two 1s
If even, place a 1 in the 2^0 check bit, i.e. use odd parity; otherwise place a 0
The count is even, so check bit 1 = 1; codeword: 1 ? 1 ? 0 1 1

31 Hamming Code Example
Repeat with the 2^1 position:
Look at the positions with a 1 in their 2^1 bit: 3 = 011, 6 = 110, 7 = 111
Count the number of 1s in the corresponding message bits: bits 3, 6, 7 = 1, 1, 1, i.e. three 1s
The count is odd, so place a 0 in the 2^1 check bit; codeword: 1 0 1 ? 0 1 1

32 Hamming Code Example
Repeat with the 2^2 position:
Look at the positions with a 1 in their 2^2 bit: 5 = 101, 6 = 110, 7 = 111
Count the number of 1s in the corresponding message bits: bits 5, 6, 7 = 0, 1, 1, i.e. two 1s
The count is even, so place a 1 in the 2^2 check bit; codeword: 1 0 1 1 0 1 1

33 Hamming Code Example
Original message = 1011, sent message = 1011011
Now, how do we check for a single-bit error in the sent message using the Hamming code?

34 Using Hamming Codes to Correct Single-Bit Errors
Received message: 1011001
Position: 1 2 3 4 5 6 7 (check bits at the 2^n positions)
Binary representation of the bit positions: 3 = 011, 5 = 101, 6 = 110, 7 = 111

35 Using Hamming Codes to Correct Single-Bit Errors
Received message: 1011001
Starting with the 2^0 position:
Look at the positions with a 1 in their 2^0 bit (1, 3, 5, 7)
Count the number of 1s in both the corresponding message bits and the 2^0 check bit, and compute the parity
If even parity, there is an error in one of the four bits that were checked
Bits 1, 3, 5, 7 = 1, 1, 0, 1: odd parity, so no error in bits 1, 3, 5, 7

36 Using Hamming Codes to Correct Single-Bit Errors
Received message: 1011001
Repeat with the 2^1 position:
Look at the positions with a 1 in their 2^1 bit (2, 3, 6, 7)
Count the number of 1s in both the corresponding message bits and the 2^1 check bit, and compute the parity
Bits 2, 3, 6, 7 = 0, 1, 0, 1: even parity, so ERROR in bit 2, 3, 6 or 7!

37 Using Hamming Codes to Correct Single-Bit Errors
Received message: 1011001
Repeat with the 2^2 position:
Look at the positions with a 1 in their 2^2 bit (4, 5, 6, 7)
Count the number of 1s in both the corresponding message bits and the 2^2 check bit, and compute the parity
Bits 4, 5, 6, 7 = 1, 0, 0, 1: even parity, so ERROR in bit 4, 5, 6 or 7!

38 Finding the error’s location
Position: 1 2 3 4 5 6 7
Received: 1 0 1 1 0 0 1

39 Finding the error’s location
Position: 1 2 3 4 5 6 7
Received: 1 0 1 1 0 0 1
No error in bits 1, 3, 5, 7

40 Finding the error’s location
The error must be in bit 6, because bits 3, 5, 7 are correct and all the remaining information agrees on bit 6:
ERROR in bit 2, 3, 6 or 7
ERROR in bit 4, 5, 6 or 7
Bit 6 is the erroneous bit: change it to 1

41 Finding the error’s location An Easier Alternative to the Last Slide
List the binary representations of the checked positions and mark each parity check:
3 = 011, 5 = 101, 6 = 110, 7 = 111
Check results per column: E E NE = 110 = 6
E = error in column, NE = no error in column
The syndrome 110 points directly at bit 6

42 Hamming Codes
Hamming codes can be used to locate and correct a single-bit error
If more than one bit is in error, then a Hamming code cannot correct it
Hamming codes, like parity bits, are only useful on short messages
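The whole worked example condenses into a few lines. This sketch (function names are mine) follows the slides' odd-parity convention with check bits at positions 1, 2, 4, so it reproduces the 1011 to 1011011 encoding and corrects the bit-6 error:

```python
def hamming74_encode(data):
    """data: 4 message bits for positions 3, 5, 6, 7. Returns the 7-bit codeword."""
    code = [0] * 8                            # index 1..7; index 0 unused
    for pos, bit in zip((3, 5, 6, 7), data):
        code[pos] = bit
    for p in (1, 2, 4):                       # check-bit positions 2^n
        ones = sum(code[i] for i in range(1, 8) if i & p and i != p)
        code[p] = 1 if ones % 2 == 0 else 0   # odd parity, as on the slides
    return code[1:]

def hamming74_correct(received):
    """Correct at most one bit error; returns (codeword, syndrome)."""
    code = [0] + list(received)
    syndrome = 0
    for p in (1, 2, 4):
        ones = sum(code[i] for i in range(1, 8) if i & p)
        if ones % 2 == 0:                     # even parity: this check fails
            syndrome += p
    if syndrome:
        code[syndrome] ^= 1                   # flip the bit the syndrome points at
    return code[1:], syndrome
```

Encoding [1, 0, 1, 1] yields 1011011; feeding in 1011001 (bit 6 flipped) yields syndrome 6 and the corrected codeword, matching the slides.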

43 Cyclic Codes
Most commonly used error-correcting block codes
Can be encoded and decoded using linear feedback shift registers (LFSRs)
For cyclic codes, a valid codeword (c0, c1, ..., cn-1), shifted right one position, is also a valid codeword (cn-1, c0, ..., cn-2)
Takes a fixed-length input (k bits) and produces a fixed-length check code (n-k bits)
In contrast, the CRC error-detecting code accepts input of arbitrary length for a fixed-length check code

44 BCH Codes
Most powerful cyclic block codes
For any positive pair of integers m and t, there is an (n, k) BCH code with parameters:
Block length: n = 2^m - 1
Number of check bits: n - k <= mt
Minimum distance: dmin >= 2t + 1
Corrects combinations of t or fewer errors
Flexibility in the choice of parameters (block length, code rate)

45 Reed-Solomon Codes
Subclass of non-binary BCH codes
Data are processed in chunks of m bits, called symbols
An (n, k) RS code has parameters:
Symbol length: m bits per symbol
Block length: n = 2^m - 1 symbols = m(2^m - 1) bits
Data length: k symbols
Size of check code: n - k = 2t symbols = m(2t) bits
Minimum distance: dmin = 2t + 1 symbols
RS codes are well suited to burst error correction

46 Block Interleaving
Data are written to and read from memory in different orders
Data bits and the corresponding check bits are interspersed with bits from other blocks
At the receiver, the data are de-interleaved to recover the original order
A burst error is thereby spread out over a number of blocks, making error correction possible
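Block interleaving as described amounts to a rows-by-columns buffer: write row by row, read column by column. A sketch with my own function names:

```python
def interleave(bits, rows, cols):
    """Write row by row, read (transmit) column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: recover the original row-wise order."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

With a 3x4 buffer, a burst of 3 consecutive channel errors lands in 3 different rows (i.e. 3 different code blocks) after de-interleaving, so a per-block FEC only has to correct one error per block.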

47 Block Interleaving

48 Two major FEC technologies
Reed-Solomon code (block code)
Convolutional code (see next slides)
Serially concatenated code: source information, Reed-Solomon (outer) code, interleaving, convolutional (inner) code, then to the modulator
(figure: an (n, k) block code maps k-bit information blocks to n-bit codewords; a convolutional code with constraint length K encodes a running stream)

49 Convolutional codes
Convolutional codes offer an approach to error control coding substantially different from that of block codes.
A convolutional encoder:
Encodes the entire data stream into a single codeword
Does not need to segment the data stream into blocks of fixed size (convolutional codes are often forced into a block structure by periodic truncation)
Is a machine with memory
This fundamental difference in approach imparts a different nature to the design and evaluation of the code:
Block codes are based on algebraic/combinatorial techniques
Convolutional codes are based on construction techniques

50 Convolutional codes-cont’d
A convolutional code is specified by three parameters (n, k, K) or (k/n, K):
Rc = k/n is the code rate, determining the number of data bits per coded bit
In practice, usually k = 1 is chosen, and we assume that from now on
K is the constraint length of the encoder; the encoder has K-1 memory elements
(There are different definitions of constraint length in the literature.)

51 A Rate ½ Convolutional encoder
Convolutional encoder (rate 1/2, K = 3)
3 shift registers, where the first one takes the incoming data bit and the rest form the memory of the encoder
Each input data bit produces two output coded bits, a branch word: a first coded bit and a second coded bit

52 A Rate ½ Convolutional encoder
(figure: the encoder register contents and branch-word outputs for an example message sequence, step by step)

53 A Rate ½ Convolutional encoder
(figure: continuation of the step-by-step encoding example)

54 Effective code rate
Initialize the memory before encoding the first bit (all-zero)
Clear out the memory after encoding the last bit (all-zero)
Hence, a tail of K-1 zero bits is appended to the data bits
Effective code rate, with L data bits and k = 1 assumed: Reff = L / (n(L + K - 1)), slightly less than the nominal rate 1/n
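A rate-1/2, K = 3 encoder with the tail-bit flush can be sketched in a few lines. The generator taps (111, 101), i.e. (7, 5) in octal, match the branch words shown on the state-diagram and trellis slides, but treat them and the function name as my assumptions:

```python
def conv_encode(data, K=3, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder; appends K-1 zero tail bits."""
    state = 0
    out = []
    for b in list(data) + [0] * (K - 1):          # tail flushes the memory
        state = ((state << 1) | b) & ((1 << K) - 1)
        # Each generator taps a subset of the register; output its parity.
        out += [bin(state & gi).count("1") % 2 for gi in g]
    return out

coded = conv_encode([1, 0, 1, 1])
print(coded)   # 12 coded bits for 4 data bits: Reff = 4/12 = 1/3 < 1/2
```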

55 Encoder representation – cont’d
Impulse response representation: the response of the encoder to a single "one" bit that goes through it
(figure: register contents and branch words as the single 1 shifts through; for this encoder the impulse response is 11 10 11)

56 State diagram
A finite-state machine only encounters a finite number of states
State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine
In a convolutional encoder, the state is represented by the content of the memory; hence, there are 2^(K-1) states
A state diagram is a way to represent the encoder: it contains all the states and all possible transitions between them
Only two transitions initiate from each state; only two transitions end up in each state

57 State diagram – cont’d 00 1 11 01 10 0/00 1/11 0/11 1/00 0/10 1/01
Transitions, written input/output (branch word), with the state bits listed most-recent first:
Current state 00: 0/00 to 00, 1/11 to 10
Current state 01: 0/11 to 00, 1/00 to 10
Current state 10: 0/10 to 01, 1/01 to 11
Current state 11: 0/01 to 01, 1/10 to 11
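The transition table can be generated mechanically from the generator taps. A sketch, with my own function names, generators (111, 101) assumed as before, and state bits listed most-recent first:

```python
def parity(x):
    return bin(x).count("1") % 2

def state_table(K=3, g=(0b111, 0b101)):
    """Enumerate (state, input, next_state, output) transitions."""
    m = K - 1                                     # number of memory bits
    rows = []
    for s in range(1 << m):
        for b in (0, 1):
            reg = (b << m) | s                    # newest bit on the left
            out = "".join(str(parity(reg & gi)) for gi in g)
            nxt = (b << (m - 1)) | (s >> 1)       # shift the new bit in
            rows.append((format(s, f"0{m}b"), b, format(nxt, f"0{m}b"), out))
    return rows

for row in state_table():
    print(row)
```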

58 Trellis – cont’d
The trellis diagram is an extension of the state diagram that shows the passage of time
(figure: a section of the trellis for the rate-1/2 code, states on the vertical axis, time on the horizontal axis, each branch labelled input/output)

59 Trellis – cont’d
(figure: a full trellis diagram for the example code, showing the input bits, the output branch words, and the tail bits)

60 Trellis – cont’d
(figure: the same trellis with the path for the example input sequence highlighted, tail bits included)

61 Block diagram of the DCS
Information source, rate-1/n convolutional encoder, modulator, channel, demodulator, rate-1/n convolutional decoder, information sink

62 Optimum decoding
If the input message sequences are equally likely, the optimum decoder, which minimizes the probability of error, is the maximum likelihood (ML) decoder.
The ML decoder selects, among all possible codewords U^(m), the one that maximizes the likelihood function p(Z | U^(m)), where Z is the received sequence.
For L input bits there are 2^L codewords to search!
ML decoding rule: choose U^(m') such that p(Z | U^(m')) = max over all m of p(Z | U^(m))

63 ML decoding for memory-less channels
Due to the independent channel statistics for memoryless channels, the likelihood function factors as
p(Z | U^(m)) = product over i of p(z_i | u_i^(m))
and, equivalently, the log-likelihood function becomes
log p(Z | U^(m)) = sum over i of log p(z_i | u_i^(m))
The sum up to time index i is called the partial path metric; the path metric decomposes into branch metrics, which decompose into bit metrics
ML decoding rule: choose the path with the maximum metric among all the paths in the trellis; this path is the "closest" path to the transmitted sequence

64 Binary symmetric channels (BSC)
On a BSC, the demodulator output equals the modulator input (0 or 1) with probability 1-p and is flipped with crossover probability p
If d is the Hamming distance between Z and U, and L is the size of the coded sequence, then
p(Z | U) = p^d (1-p)^(L-d)
ML decoding rule: choose the path with the minimum Hamming distance from the received sequence

65 Soft and Hard Decisions
In hard decision:
The demodulator makes a firm (hard) decision whether a one or a zero was transmitted and provides no other information to the decoder, such as how reliable the decision is
Hence, its output is only zero or one (quantized to two levels); these are called "hard bits"
Decoding based on hard bits is called "hard-decision decoding"

66 Soft and hard decision-cont’d
In soft decision:
The demodulator provides the decoder with some side information together with the decision; the side information gives the decoder a measure of confidence in the decision
The demodulator outputs, called soft bits, are quantized to more than two levels
Decoding based on soft bits is called "soft-decision decoding"
On AWGN channels about 2 dB, and on fading channels about 6 dB, of gain is obtained by using soft-decision over hard-decision decoding

67 The Viterbi algorithm
The Viterbi algorithm performs maximum likelihood decoding
It finds the path through the trellis with the best metric (maximum correlation or minimum distance), processing the demodulator outputs in an iterative manner
At each step in the trellis, it compares the metrics of all paths entering each state and keeps only the path with the best metric, called the survivor, together with its metric
It proceeds through the trellis by eliminating the least likely paths
It reduces the decoding complexity from an exhaustive search over 2^L codewords to 2^(K-1) survivor updates per step, i.e. linear in L!

68 The Viterbi algorithm - cont’d
Do the following set-up:
For a data block of L bits, form the trellis; the trellis has L+K-1 sections (levels) and starts at time t1 and ends at time t(L+K)
Label all the branches in the trellis with their corresponding branch metric
For each state S in the trellis at time ti, define a state metric Gamma(S, ti)
Then, do the following:

69 The Viterbi algorithm - cont’d
1. Set Gamma(0, t1) = 0 and i = 2
2. At time ti, compute the partial path metrics for all the paths entering each state
3. Set Gamma(S, ti) equal to the best partial path metric entering each state at time ti; keep the survivor path and delete the dead paths from the trellis
4. If i < L+K, increase i by 1 and return to step 2
5. Start at state zero at time t(L+K) and follow the surviving branches backwards through the trellis; the path thus defined is unique and corresponds to the ML codeword
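The steps above amount to a small dynamic program. A hard-decision sketch (function names are mine; generators (111, 101) assumed, matching the trellis slides) for the rate-1/2, K = 3 code, which drops the K-1 tail bits at the end:

```python
def viterbi_decode(received, K=3, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoding of a rate-1/2 coded bit list."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)        # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        best = [(INF, None)] * n_states
        for s in range(n_states):
            if metric[s] == INF:                 # state not yet reachable
                continue
            for b in (0, 1):
                reg = (s << 1) | b               # shift the new bit in
                out = [bin(reg & gi).count("1") % 2 for gi in g]
                dist = sum(o != x for o, x in zip(out, r))   # branch metric
                ns = reg & (n_states - 1)        # next state
                if metric[s] + dist < best[ns][0]:           # keep survivor
                    best[ns] = (metric[s] + dist, paths[s] + [b])
        metric = [m for m, _ in best]
        paths = [p for _, p in best]
    survivor = paths[min(range(n_states), key=lambda s: metric[s])]
    return survivor[:-(K - 1)]                   # drop the tail bits
```

Decoding the 12 coded bits for data 1011 (tail included) recovers 1011 even with one channel bit flipped, since these generators have free distance 5.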

70 Example of Hard decision Viterbi decoding
(figure: the trellis for the example, with each branch labelled input/output before decoding starts)

71 Example of Hard decision Viterbi decoding-cont’d
Label all the branches with the branch metric (the Hamming distance between the branch word and the received pair of bits)
(figure: the trellis with per-branch Hamming distances)

72 Example of Hard decision Viterbi decoding-cont’d

73 Example of Hard decision Viterbi decoding-cont’d

74 Example of Hard decision Viterbi decoding-cont’d

75 Example of Hard decision Viterbi decoding-cont’d

76 Example of Hard decision Viterbi decoding-cont’d

77 Example of Hard decision Viterbi decoding-cont’d
Trace back along the surviving branches from the final state to recover the ML path, and hence the decoded bits
(figure: the winning path highlighted in the trellis)

78 Received signal has many level
In the previous example, we assumed the received sequence was a sequence of hard bits (0s and 1s)
Usually, the received signal is analog, i.e. takes many levels (e.g. quantized to levels 0 through 7)

79 Hard Decision
A hard-decision line (threshold) maps each analog signal level to a 0 or a 1
Reliability information is lost by the hard decision: a sample far from the threshold is a highly reliable 1, while a sample just above it is a low-reliability 1, and we have to distinguish them!

80 Soft Decision
Use a soft decision metric: each quantization level (e.g. 0 through 7) is mapped to a pair of branch metrics, one measuring confidence in '0' and one in '1'
A level at one extreme is a reliable '0'; a level at the other extreme is a reliable '1'
A mid-scale level contributes nearly equal metrics for '0' and '1': it has no effect on Viterbi decoding and behaves as if the bit were erased or punctured!

81 Soft Decision Viterbi
Calculate the branch metric for each branch from the soft table: for each received signal level, look up the metric for '0' and for '1' and accumulate the appropriate one for each bit of the branch word
(figure: the trellis with branch words and their soft branch metrics for the received level sequence)

82 Soft Decision Viterbi(2)
Calculate the path metrics in order to find the minimum-metric path
(figure: accumulated path metrics through the trellis up to t = 6)

83 Soft Decision Viterbi(3)
Select the minimum path metric and read off the original information along the winning path
(figure: the minimum-metric path through the trellis)

84 Punctured Convolutional Code
The example encoder has code rate R = 1/2
Puncturing a convolutional code means removing one or more of the coded output bits; the code rate can then be modified (increased)

85 Merit of Punctured Code
A larger code rate means less overhead
Using the puncturing technique:
When the communication channel condition is good, weak error correction but a high code rate can be chosen
When the communication channel condition is bad, strong error correction but a low code rate can be chosen
This capability requires only a small circuit change: one coder/decoder can be used for several code rates adaptively

86 R=2/3 example
Starting from the rate-1/2 encoder output, some coded bits are deleted: for every 2 input data bits, only 3 of the 4 coded bits are transmitted, giving R = 2/3
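Puncturing is just dropping bits according to a repeating pattern. In this sketch (function name mine) the pattern (1, 1, 1, 0) is one common choice that keeps 3 of every 4 rate-1/2 output bits, yielding R = 2/3; treat the specific pattern as an assumption:

```python
from itertools import cycle

def puncture(coded, pattern=(1, 1, 1, 0)):
    """Drop coded bits wherever the repeating pattern has a 0."""
    return [bit for bit, keep in zip(coded, cycle(pattern)) if keep]

# 2 data bits -> 4 rate-1/2 coded bits -> 3 transmitted bits: R = 2/3
print(puncture([1, 1, 0, 1, 1, 0, 0, 0]))  # 8 bits in, 6 bits out
```

The receiver re-inserts neutral ("erased") values at the punctured positions before Viterbi decoding, which is why a punctured bit behaves like the mid-scale soft-decision level described earlier.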

87 Automatic Repeat Request
Mechanism used in data link control and transport protocols
Relies on the use of an error detection code (such as a CRC)
Two ingredients: flow control and error control

88 Flow Control
Assures that the transmitting entity does not overwhelm a receiving entity with data
Protocols with a flow control mechanism allow multiple PDUs to be in transit at the same time
PDUs arrive in the same order they are sent
Sliding-window flow control:
The transmitter maintains a list (window) of sequence numbers it is allowed to send
The receiver maintains a list of sequence numbers it is allowed to receive

89 Flow Control
Reasons for breaking up a block of data before transmitting:
Limited buffer size at the receiver
Retransmission of a PDU due to error requires a smaller amount of data to be retransmitted
On a shared medium, larger PDUs occupy the medium for an extended period, causing delays at other sending stations

90 Sliding-Windows Protocol

91 Error Control Mechanisms to detect and correct transmission errors
Types of errors:
Lost PDU: a PDU fails to arrive
Damaged PDU: a PDU arrives with errors

92 Error Control Requirements
Error detection: the receiver detects errors and discards PDUs
Positive acknowledgement: the destination returns acknowledgments of received, error-free PDUs
Retransmission after timeout: the source retransmits a PDU that has not been acknowledged
Negative acknowledgement and retransmission: the destination returns a negative acknowledgment for PDUs in error

93 Go-back-N ARQ Acknowledgments Contingencies
Acknowledgments:
RR = receive ready (no errors occurred)
REJ = reject (error detected)
Contingencies:
Damaged PDU
Damaged RR
Damaged REJ

94 Resources
William Stallings, Wireless Communications and Networks, Prentice Hall, 2005.
Rutgers University, CS 352, Spring 2006.
Uppsala University, Signals & Systems, 2005.

