
1 Channel Coding: Part I Presentation II Irvanda Kurniadi V. (20127734) 2013.4.5 Digital Communication 1

2 Outline
Structured Sequences
–Channel Models
–Code Rate and Redundancy
–Parity-Check Codes
–Why Use Error-Correction Coding?
Linear Block Codes
–Vector Spaces
–Vector Subspaces
–Linear Block Code Example
–Generator Matrix
–Systematic Linear Block Codes
–Parity-Check Matrix
–Syndrome Testing
–Error Correction
–Decoder Implementation

3 Structured Sequences
Channel Models
–Discrete Memoryless Channel
–Binary Symmetric Channel
–Gaussian Channel
Code Rate and Redundancy
–Code-Element Nomenclature
Parity-Check Codes
–Single-Parity-Check Code
–Rectangular Code
Why Use Error-Correction Coding?
–Error Performance vs. Bandwidth
–Power vs. Bandwidth
–Coding Gain
–Data Rate vs. Bandwidth
–Capacity vs. Bandwidth
–Code Performance at Low Values of Eb/N0

4 Channel Model
Discrete Memoryless Channel (DMC)
–A DMC is characterized by a discrete input alphabet, a discrete output alphabet, and a set of conditional probabilities P(j|i), where i represents a modulator M-ary input symbol, j represents a demodulator Q-ary output symbol, and P(j|i) is the probability of receiving j given that i was transmitted.

5 Channel Model
Binary Symmetric Channel (hard-decision decoding)
–The BSC is a special case of a DMC: the input and output alphabets consist of the binary elements (0 and 1), and the conditional probabilities are symmetric.
–The channel symbol error probability is then found, using the methods of Section 4.7.1 and Equation (4.79), to be p = Q(√(2Ec/N0)), where Ec/N0 is the channel symbol energy per noise power spectral density and Q(x) is the complementary error (co-error) function.
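As a quick numerical sketch (not part of the original slides), the BSC crossover probability can be evaluated directly from the Q-function; the 5 dB value of Ec/N0 used below is an assumed example.

```python
import math

def q_function(x):
    """Gaussian tail (co-error) function Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bsc_crossover(ec_over_n0_db):
    """Channel symbol error probability p = Q(sqrt(2*Ec/N0)) for the BSC model."""
    ec_over_n0 = 10 ** (ec_over_n0_db / 10)   # dB -> linear
    return q_function(math.sqrt(2 * ec_over_n0))

# Example with an assumed channel symbol SNR of 5 dB.
print(f"p = {bsc_crossover(5.0):.3e}")
```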

6 Channel Model
Gaussian Channel (soft-decision decoding)
–The Gaussian channel is also a special case of a DMC: it has a discrete input alphabet and a continuous output alphabet over the range (-∞, ∞), and the channel adds Gaussian noise to the symbols.
–Since the noise is a Gaussian random variable with zero mean and variance σ^2, the resulting probability density function (pdf) of the received random variable z, conditioned on the symbol μ_k, can be written as p(z|μ_k) = [1/(σ√(2π))] exp{-(1/2)[(z − μ_k)/σ]^2}.
–Block codes are usually implemented with hard-decision decoders.

7 Code Rate and Redundancy
Source data are segmented into blocks of k data bits (message bits); each block can represent any one of 2^k distinct messages.
The encoder transforms each k-bit data block into a larger block of n bits, called code bits or channel symbols; the (n − k) added bits are called redundant bits, parity bits, or check bits.
The ratio of redundant bits to data bits, (n − k)/k, is called the redundancy of the code, and the ratio of data bits to total bits, k/n, is called the code rate.
Code-Element Nomenclature
–“Code bit” and “channel bit” are descriptive for binary codes only.
–“Code symbol” and “channel symbol” are more general and often preferred.
–The terms “parity bit” and “parity symbol” are used only when the redundant components added to the original data represent parity checks.
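A minimal sketch of these two ratios, using the (6, 3) code that appears later in these slides:

```python
def code_parameters(n, k):
    """Return (redundancy, code_rate) for an (n, k) block code."""
    redundancy = (n - k) / k   # redundant bits per data bit
    code_rate = k / n          # data bits per transmitted code bit
    return redundancy, code_rate

# (6, 3) code: 3 message bits are mapped to 6 code bits.
red, rate = code_parameters(6, 3)
print(f"redundancy = {red}, code rate = {rate}")   # 1.0 and 0.5
```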

8 Parity-Check Code
Single-Parity-Check Code
–A single-parity-check code is constructed by adding a single parity bit to a block of data bits.
–The probability of j errors occurring in a block of n symbols is P(j, n) = C(n, j) p^j (1 − p)^(n−j), where p is the probability that a channel symbol is received in error and C(n, j) is the binomial coefficient.
–Since a single parity bit detects all odd numbers of errors, the probability of an undetected error Pnd in a block of n bits is the probability of an even, nonzero number of errors: Pnd = Σ (over even j, 2 ≤ j ≤ n) C(n, j) p^j (1 − p)^(n−j).

9 Example: even-parity single-parity-check code; the code word is the parity bit followed by the message.
Message 000, parity 0, code word 0000
Message 100, parity 1, code word 1100
Message 010, parity 1, code word 1010
Message 110, parity 0, code word 0110
Message 001, parity 1, code word 1001
Message 101, parity 0, code word 0101
Message 011, parity 0, code word 0011
Message 111, parity 1, code word 1111
Compute the probability of an undetected message error if the probability of a channel symbol error is p = 10^-3.
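A short sketch of the computation (only even, nonzero numbers of errors go undetected by a single parity bit); n = 4 and p = 10^-3 are taken from the example above:

```python
from math import comb

def p_undetected(n, p):
    """Probability that an even, nonzero number of the n code bits are in error."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(2, n + 1, 2))

print(f"Pnd = {p_undetected(4, 1e-3):.3e}")   # about 6e-6
```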

10 Parity-Check Code
Rectangular Code (product code)
–A rectangular code can be thought of as a parallel code structure.
–The message bits are arranged in an array of M rows and N columns; a horizontal parity check is appended to each row and a vertical parity check to each column, resulting in an augmented array of dimension (M + 1) × (N + 1), so the rate of the rectangular code is MN/[(M + 1)(N + 1)].
–The probability of message error (block error) for a code that can correct all patterns of t or fewer errors is approximately P_M ≈ Σ (j = t + 1 to n) C(n, j) p^j (1 − p)^(n−j).
–Example: with M = N = 5 the augmented array is 6 × 6, giving a (36, 25) code.
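A small sketch of both formulas for the (36, 25) rectangular code; the channel symbol error probability p below is an assumed value, and t = 1 is assumed here because the rectangular code corrects a single error:

```python
from math import comb

def rectangular_rate(M, N):
    """Rate of a rectangular (product) code built on an M x N message array."""
    return (M * N) / ((M + 1) * (N + 1))

def block_error_prob(n, t, p):
    """P(block error) for a code correcting all patterns of t or fewer errors."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))

# (36, 25) rectangular code: 5 x 5 message array, single-error correcting (t = 1).
p = 1e-3                                               # assumed channel symbol error probability
print(f"rate = {rectangular_rate(5, 5):.3f}")          # 25/36, about 0.694
print(f"P(block error) ~= {block_error_prob(36, 1, p):.3e}")
```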

11 Why Use Error-Correction Coding?
Error Performance vs. Bandwidth
Power vs. Bandwidth
Coding Gain
Data Rate vs. Bandwidth
Capacity vs. Bandwidth
Code Performance at Low Values of Eb/N0
–Coding gain: G (dB) = (Eb/N0)_uncoded (dB) − (Eb/N0)_coded (dB), the reduction in required Eb/N0 for a given error probability.
–Eb/N0 = Pr/(N0 R), where R is the data rate, Pr is the received power, and N0 is the noise power in a 1-Hz bandwidth.
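A minimal sketch of the two relations above; the 9.6 dB and 7.1 dB figures are assumed example values, not numbers from the slides:

```python
def coding_gain_db(ebn0_uncoded_db, ebn0_coded_db):
    """G (dB) = (Eb/N0)_uncoded (dB) - (Eb/N0)_coded (dB) at equal error probability."""
    return ebn0_uncoded_db - ebn0_coded_db

def ebn0_from_link(pr_watts, n0_watts_per_hz, data_rate_bps):
    """Eb/N0 = Pr / (N0 * R): received power over noise density times data rate."""
    return pr_watts / (n0_watts_per_hz * data_rate_bps)

# Assumed numbers: 9.6 dB required uncoded vs. 7.1 dB required with coding.
print(f"coding gain = {coding_gain_db(9.6, 7.1):.1f} dB")
```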

12 Linear Block Codes
Vector Spaces
Vector Subspaces
A (6,3) Linear Block Code Example
Generator Matrix
Systematic Linear Block Codes
Parity-Check Matrix
Syndrome Testing
Error Correction
–The Syndrome of a Coset
–Error Correction Decoding
–Locating the Error Pattern
–Error Correction Example
Decoder Implementation

13 Vector Space
The set of all binary n-tuples, Vn, is called a vector space over the binary field of two elements (0 and 1). The binary field has two operations, addition and multiplication, and the results remain in the same set of two elements:
Addition: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0
Multiplication: 0 · 0 = 0, 0 · 1 = 0, 1 · 0 = 0, 1 · 1 = 1
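In software these two binary-field operations map directly onto bitwise XOR and AND; a brief sketch:

```python
def gf2_add(a, b):
    """Addition in the binary field: modulo-2 sum, i.e. exclusive OR."""
    return a ^ b

def gf2_mul(a, b):
    """Multiplication in the binary field: logical AND."""
    return a & b

for a in (0, 1):
    for b in (0, 1):
        print(f"{a}+{b}={gf2_add(a, b)}   {a}*{b}={gf2_mul(a, b)}")
```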

14 Vector Subspaces
A subset S of the vector space Vn is called a subspace if the following two conditions are met:
–The all-zeros vector is in S.
–The sum of any two vectors in S is also in S (known as the closure property).
Example: the vector space V4 is populated by the following 2^4 = 16 4-tuples:
0000 0001 0010 0011 0100 0101 0110 0111
1000 1001 1010 1011 1100 1101 1110 1111
A subset of V4 that forms a subspace is {0000, 0101, 1010, 1111}.
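A short sketch that checks both subspace conditions for the subset above, representing each 4-tuple as an integer and using XOR as the vector addition:

```python
def is_subspace(subset):
    """Check the all-zeros condition and closure under modulo-2 (XOR) addition."""
    vectors = {int(v, 2) for v in subset}
    if 0 not in vectors:                  # the all-zeros vector must be in S
        return False
    return all((a ^ b) in vectors         # the sum of any two vectors stays in S
               for a in vectors for b in vectors)

print(is_subspace({"0000", "0101", "1010", "1111"}))   # True
print(is_subspace({"0000", "0101", "1010"}))           # False: 0101 + 1010 = 1111 is missing
```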

15 A (6, 3) Linear Block Code Example
For a (6, 3) code there are 2^k = 2^3 = 8 message vectors, and therefore 8 codewords; there are 2^n = 2^6 = 64 6-tuples in the V6 vector space.
Message vector → Codeword
000 → 000000
100 → 110100
010 → 011010
110 → 101110
001 → 101001
101 → 011101
011 → 110011
111 → 000111

16 Generator Matrix
If k is large, a table look-up implementation of the encoder becomes prohibitive; imagine a (127, 92) code, which would require 2^92 codewords to be stored.
Each codeword U in the set of 2^k codewords can be described as a linear combination of k linearly independent n-tuples V1, V2, …, Vk:
U = m1 V1 + m2 V2 + … + mk Vk   …(1)
These n-tuples form the rows of the k × n generator matrix
G = [V1; V2; …; Vk], where Vi = (vi1, vi2, …, vin)   …(2)
The message m consists of the k message digits
m = m1, m2, …, mk   …(3)
and the generation of the codeword U is written in matrix notation as the product of m and G:
U = mG   …(4)

17 Example
Let G be the generator matrix whose rows are V1 = 1 1 0 1 0 0, V2 = 0 1 1 0 1 0, and V3 = 1 0 1 0 0 1 (from the (6, 3) code table). Given the message m = 1 1 0, the codeword U is generated as
U = mG = 1·V1 + 1·V2 + 0·V3 = 1 1 0 1 0 0 + 0 1 1 0 1 0 + 0 0 0 0 0 0 = 1 0 1 1 1 0
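A sketch of the same computation with NumPy; the row V3 = 1 0 1 0 0 1 is read off the (6, 3) code table (the codeword for message 0 0 1):

```python
import numpy as np

# Generator matrix of the (6, 3) example code: rows V1, V2, V3.
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(m, G):
    """Codeword U = mG with arithmetic over the binary field (mod 2)."""
    return np.mod(np.array(m) @ G, 2)

print(encode([1, 1, 0], G))   # -> [1 0 1 1 1 0]
```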

18 Systematic Linear Block Code
A systematic (n, k) linear block code is a mapping from a k-dimensional message vector to an n-dimensional codeword in which part of the generated sequence coincides with the k message digits and the remaining (n − k) digits are parity digits.
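The generator matrix of the (6, 3) example is already in systematic form G = [P | I3], so each codeword consists of the (n − k) parity digits followed by the k message digits; a brief sketch that assumes this ordering:

```python
import numpy as np

P = np.array([[1, 1, 0],      # parity portion of the (6, 3) generator matrix
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(3, dtype=int)])   # systematic form G = [P | I_k]

m = np.array([1, 1, 0])
U = np.mod(m @ G, 2)
parity, message = U[:3], U[3:]             # codeword = (parity digits, message digits)
print(U, "-> parity:", parity, "message:", message)
```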

19 Parity-Check Matrix
The parity-check matrix (H matrix) enables us to decode the received vectors.
For a systematic code with generator matrix G = [P | Ik], the components of the H matrix are written as H = [In−k | P^T].
U is a codeword generated by matrix G if, and only if, UH^T = 0.
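A sketch that builds H = [In−k | P^T] for the (6, 3) example and checks that UH^T = 0 for every codeword; the P below is the parity portion of that code's generator matrix:

```python
import numpy as np
from itertools import product

P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
G = np.hstack([P, np.eye(3, dtype=int)])           # G = [P | I_k]
H = np.hstack([np.eye(3, dtype=int), P.T])         # H = [I_{n-k} | P^T]

# Every codeword U = mG must give the zero vector when multiplied by H^T.
for m in product([0, 1], repeat=3):
    U = np.mod(np.array(m) @ G, 2)
    assert not np.mod(U @ H.T, 2).any()
print("UH^T = 0 for all 8 codewords")
```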

20 Syndrome Testing
Let r = r1, r2, …, rn be the received vector and e = e1, e2, …, en the error vector, so that r = U + e.
The syndrome is the result of a parity check performed on r to determine whether r is a valid member of the codeword set: S = rH^T.
Combining the two equations, S = (U + e)H^T = UH^T + eH^T = eH^T, since UH^T = 0 for every codeword; the syndrome therefore depends only on the error pattern.
Note the following two required properties of the parity-check matrix:
–No column of H can be all zeros, or else an error in the corresponding codeword position would not affect the syndrome and would be undetectable.
–All columns of H must be unique; if two columns of H were identical, errors in these two corresponding codeword positions would be indistinguishable.

21 Error Correction
The 2^n n-tuples that represent the possible received vectors can be arranged in an array called the standard array: the first row contains all the codewords, starting with the all-zeros codeword, and the first column contains all the correctable error patterns.
Each row, called a coset, consists of an error pattern in the first column, called the coset leader, followed by that error pattern added to each of the other codewords. The standard array for the (6, 3) example code is shown on slide 23.
Each coset consists of 2^k n-tuples; therefore there are 2^n/2^k = 2^(n−k) cosets.
If codeword Ui (i = 1, …, 2^k) is transmitted over a noisy channel, the result is the corrupted vector Ui + ej. If the error pattern ej caused by the channel is a coset leader, the received vector will be decoded correctly into the transmitted codeword Ui.

22 Error Correction
Syndrome of a coset
–The syndrome of every n-tuple in a coset equals the syndrome of its coset leader: S = (Ui + ej)H^T = UiH^T + ejH^T = ejH^T, since Ui is a code vector and UiH^T = 0.
Error Correction Decoding
–The procedure proceeds as follows:
1. Calculate the syndrome of r using S = rH^T.
2. Locate the coset leader (error pattern) ej whose syndrome equals rH^T.
3. This error pattern is assumed to be the corruption caused by the channel.
4. The corrected received vector, or codeword, is identified as Û = r + ej; we retrieve the valid codeword by subtracting out the identified error (in modulo-2 arithmetic, subtraction is identical to addition).

23 Error Correction
Locating the error pattern: standard array for the (6, 3) code (coset leaders in the first column)
000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110
Syndrome look-up table
Error pattern → Syndrome
000000 → 000
000001 → 101
000010 → 011
000100 → 110
001000 → 001
010000 → 010
100000 → 100
010001 → 111
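The look-up table can be generated directly from H, because the syndrome of a single-bit error pattern is simply the corresponding column of H; a sketch for the (6, 3) code (the listing order differs from the slide, the pairs are the same):

```python
import numpy as np

P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
H = np.hstack([np.eye(3, dtype=int), P.T])        # H = [I_{n-k} | P^T]

def syndrome(v, H):
    """Syndrome S = vH^T over the binary field."""
    return tuple(int(x) for x in np.mod(np.array(v) @ H.T, 2))

# Coset leaders: the all-zeros pattern, all single-bit error patterns, and the
# one double-bit pattern (0 1 0 0 0 1) used in the slide's table.
leaders = [[0] * 6]
leaders += [[1 if j == i else 0 for j in range(6)] for i in range(6)]
leaders += [[0, 1, 0, 0, 0, 1]]

for e in leaders:
    s = syndrome(e, H)
    print("".join(map(str, e)), "->", "".join(map(str, s)))
```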

24 Example: Error Correction
Assume that the codeword U = 1 0 1 1 1 0 from the (6, 3) linear block code example is transmitted and the vector r = 0 0 1 1 1 0 is received. Show how the decoder can correct the error by using the syndrome look-up table.
Solution: The syndrome of r is S = rH^T = 1 0 0, which the look-up table maps to the error pattern ê = 1 0 0 0 0 0. The corrected vector is then estimated as
Û = r + ê = 0 0 1 1 1 0 + 1 0 0 0 0 0 = 1 0 1 1 1 0
which is indeed the transmitted codeword.
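A sketch of the complete correction step for this example: compute the syndrome of r, locate the matching column of H (the single-bit error position), and add the error pattern back to r:

```python
import numpy as np

P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
H = np.hstack([np.eye(3, dtype=int), P.T])        # parity-check matrix of the (6, 3) code

r = np.array([0, 0, 1, 1, 1, 0])                  # received vector
S = np.mod(r @ H.T, 2)                            # syndrome -> [1 0 0]

# For a single-bit error the syndrome equals one column of H; find which one.
pos = next(i for i in range(H.shape[1]) if np.array_equal(H[:, i], S))
e = np.zeros(6, dtype=int)
e[pos] = 1                                        # estimated error pattern 1 0 0 0 0 0

U_hat = np.mod(r + e, 2)                          # corrected codeword 1 0 1 1 1 0
print("syndrome:", S, "error pattern:", e, "corrected codeword:", U_hat)
```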

25 Decoder Implementation

26 Question
Consider a (24, 12) linear block code capable of double-error correction. Assume that a noncoherently detected binary orthogonal frequency-shift keying (BFSK) modulation format is used and that the received Eb/N0 = 14 dB.
a. Does the code provide any improvement in the probability of message error? If it does, how much? If it does not, explain why not.
b. Repeat part (a) with Eb/N0 = 10 dB.

27 THANK YOU

