
1 CHANNEL CODING TECHNIQUES By K. Swaraja, Assoc. Prof., MREC

2 Introduction Errors are introduced into the data when it passes through the channel: the channel noise interferes with the signal, and the signal power is also reduced. Coding is a procedure for mapping a given set of messages (m1, m2, m3, ...) into a new set of encoded messages (c1, c2, ..., cn) in such a way that the transformation is one-to-one for each message; this is known as source coding. It is also possible to devise codes for special purposes, such as secrecy or minimum probability of error, without regard to the efficiency of transmission; this is known as channel coding.

3 Advantages of coding Improves transmission efficiency.
Reduces the probability of error and helps in the correction of errors. The transmission of data over the channel depends upon two parameters: transmitted power and channel bandwidth. The power spectral density of the channel noise determines the signal-to-noise power ratio. Coding techniques also reduce the required signal-to-noise power ratio for a fixed probability of error.

4 Need for coding To bring the message quality to an acceptable level, coding reduces the SNR required for a fixed bit error rate. A lower required SNR reduces the transmitted power and the hardware cost, since a smaller antenna size is sufficient. The channel encoder adds extra bits (redundancy) to the message bits, and the encoded signal is transmitted over the noisy channel. The channel decoder identifies the redundant bits and uses them to detect and correct any errors in the message. Thus the number of errors introduced by channel noise is minimized by the encoder and decoder. Due to the redundant bits the overall data rate increases, so the channel has to accommodate this increased data rate, and the system becomes slightly more complex because of the coding techniques.

5 ERROR CONTROL CODING The redundant bits in the message are called check bits. Errors can be detected and corrected with the help of these bits. It is not possible to detect and correct all errors in the message; errors only up to a certain limit can be detected and corrected. The check bits also reduce the effective data rate through the channel.

6 Methods of controlling errors
There are two main methods used for error control coding: 1) forward error correction and 2) error detection with retransmission.

7 Key terms in error control coding
Code word: the encoded block of n bits is called a codeword. It contains the message bits and the redundant (check) bits.
Block length: the number of bits n after coding is called the block length of the code.
Code rate (r): the ratio of the number of message bits k to the number of encoder output bits n, r = k/n, with 0 < r < 1.
Channel data rate: the bit rate at the output of the encoder. If the bit rate at the input of the encoder is Rs, then the channel data rate is Ro = (n/k) Rs.
Hamming distance: the Hamming distance between two code vectors is the number of elements in which they differ. The number of transmission errors in the received code vector should be less than the minimum distance dmin.
Code efficiency: the ratio of the number of message bits in a block to the number of transmitted bits for that block, efficiency = k/n.
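As a quick numeric illustration of these terms, here is a minimal Python sketch assuming a hypothetical (7, 4) code and an assumed source bit rate; the specific numbers are for illustration only.

```python
# Minimal sketch of the key terms for an assumed (7, 4) block code.
n, k = 7, 4                 # block length and number of message bits (assumed example)
q = n - k                   # number of check bits
r = k / n                   # code rate, 0 < r < 1
Rs = 4000                   # assumed source bit rate at the encoder input (bits/s)
Ro = (n / k) * Rs           # channel data rate at the encoder output

def hamming_distance(a, b):
    """Number of positions in which two equal-length code vectors differ."""
    return sum(x != y for x, y in zip(a, b))

c1 = [1, 0, 1, 1, 0, 0, 1]
c2 = [1, 1, 1, 0, 0, 0, 1]
print(f"code rate r = {r:.3f}, channel data rate Ro = {Ro} bits/s")
print("Hamming distance d(c1, c2) =", hamming_distance(c1, c2))
```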

8 Types of codes Block codes: 1) cyclic codes 2) Hamming codes
Convolutional codes. These codes can also be classified as 1) linear codes 2) non-linear codes 3) systematic codes 4) non-systematic codes.

9 Steps for determination of all code words for systematic linear block code
1) The code vector may be written as X = (m1, m2, ..., mk ; c1, c2, ..., cq), where q = n - k.
2) q is the number of redundant check bits added by the encoder. The above code vector may also be written as X = (M : C), where M is the k-bit message vector and C is the q-bit check vector.
3) The function of the linear block code is to generate the check bits. The code vector can be represented as X = MG, where X is the code vector of size 1 x n (n bits), M is the message vector of size 1 x k (k bits), and G is the generator matrix of size k x n: [X]1xn = [M]1xk [G]kxn.

10 4) The generator matrix depends upon the linear block code used. Normally it is represented as G = [Ik : P]kxn, where Ik is the k x k identity matrix and P is a k x q submatrix.
5) The check vector can then be obtained as C1xq = M1xk Pkxq (modulo-2 arithmetic).
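A minimal Python sketch of steps 3) to 5), assuming an illustrative k x q submatrix P for a (7, 4) code; the entries of P are an assumption chosen for the example, not a prescribed standard.

```python
# Systematic linear block encoding X = M G with G = [Ik : P], all arithmetic modulo 2.
k, q = 4, 3
n = k + q

# Assumed example P submatrix (k x q); any choice defines some linear block code.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]

# Build G = [Ik : P] of size k x n.
G = [[1 if i == j else 0 for j in range(k)] + P[i] for i in range(k)]

def encode(M):
    """Code vector X = M G (mod 2); equivalently X = (M : C) with C = M P."""
    return [sum(M[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

M = [1, 0, 1, 1]            # example 4-bit message
X = encode(M)
print("message M =", M)
print("codeword X =", X)    # first k bits are M, last q bits are the check vector C
```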

11 Concept of parity check matrix (H) for linear block code
For each block code there is a q x n parity check matrix H, defined as Hqxn = [PT : Iq]qxn, where P is the k x q submatrix and Iq is the q x q identity matrix. If the generator matrix G is given, then the parity check matrix H can be obtained, and vice versa.
Hamming codes: a Hamming code is a type of linear block code; it can detect up to two bit errors and correct single-bit errors. Hamming codes should satisfy the following conditions. The minimum distance of a linear block code is equal to the minimum weight of any non-zero code word. Minimum distance dmin = 3. Number of check bits q = n - k >= 3. Block length n = 2^q - 1. Number of message bits k = n - q.
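Continuing the same assumed example, this sketch builds H = [PT : Iq] from the illustrative P submatrix above and checks that a valid codeword X satisfies X HT = 0.

```python
# Parity check matrix H = [P^T : Iq] for the assumed (7, 4) code above, and the check X H^T = 0.
k, q = 4, 3
n = k + q

P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]            # same assumed submatrix as in the encoding sketch

# H is q x n: the first k columns are P transposed, the last q columns are Iq.
H = [[P[i][row] for i in range(k)] + [1 if row == j else 0 for j in range(q)]
     for row in range(q)]

def syndrome(v):
    """S = v H^T (mod 2), a q-bit vector; all-zero for every valid codeword."""
    return [sum(v[j] * H[row][j] for j in range(n)) % 2 for row in range(q)]

X = [1, 0, 1, 1, 1, 0, 0]  # codeword produced by the encoding sketch
print("X H^T =", syndrome(X))   # expected: [0, 0, 0]
```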

12 The code rate is given as r = k/n = (number of message bits)/(length of the code word) = (n - q)/n = 1 - q/n.
An (n, k) linear block code with minimum distance dmin can correct up to t errors per word if and only if dmin >= 2t + 1, i.e. t <= (dmin - 1)/2 for dmin odd and t <= (dmin - 2)/2 for dmin even.
It can detect up to s errors per word if dmin >= s + 1.
For (n, k) block codes, dmin <= n - k + 1.
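A short sketch of these capability bounds; the dmin values used are assumed examples.

```python
# Correction and detection capability of a code with minimum distance dmin.
def capability(dmin):
    t = (dmin - 1) // 2 if dmin % 2 == 1 else (dmin - 2) // 2   # correctable errors per word
    s = dmin - 1                                                # detectable errors per word (dmin >= s + 1)
    return t, s

for dmin in (3, 4, 7):       # assumed example values
    t, s = capability(dmin)
    print(f"dmin = {dmin}: corrects up to t = {t} errors, detects up to s = {s} errors")
```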

13 Method to correct errors (syndrome decoding)
Syndrome decoding is a method to correct errors in linear block coding. Let the transmitted code vector be x and the corresponding received code vector be y. The decoder detects or corrects errors in y by using bit patterns about the code stored in the decoder. For larger block lengths, more and more bits have to be stored in the decoder; this increases the memory requirement and adds to the complexity and cost of the system. To avoid these problems, syndrome decoding is used in linear block codes.

14 Procedure for syndrome decoding
1) For every (n, k) linear block code there exists a parity check matrix H of size q x n, defined as [H]qxn = [PT : Iq]qxn, so that HT = [P / Iq]nxq. Here P is the submatrix of size k x q and Iq is the identity matrix of size q x q.
2) HT has a very important property: X HT = (0 0 ... 0), i.e. [X]1xn [HT]nxq = (0 0 ... 0)1xq. This is true for all valid code vectors.
3) Hence X belongs to the set of valid code vectors at the transmitter end. At the receiver end the received vector is Y; if Y HT = (0 0 ... 0) and X = Y, i.e. there are no errors, then Y is a valid code vector.
4) Whenever Y HT is nonzero, some errors are present in Y. The nonzero output of the product Y HT is called the syndrome and is used to detect the errors in Y. The syndrome is represented by S and may be written as S = Y HT, [S]1xq = [Y]1xn [HT]nxq.
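A small sketch of the syndrome calculation for the same assumed (7, 4) code, comparing a valid codeword with a received vector containing one error.

```python
# Syndrome calculation S = Y H^T (mod 2) for the assumed (7, 4) code used earlier.
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]          # H = [P^T : I3], same assumed code as above
n, q = 7, 3

def syndrome(Y):
    """S = Y H^T, a 1 x q vector; all-zero means Y looks like a valid codeword."""
    return [sum(Y[j] * H[row][j] for j in range(n)) % 2 for row in range(q)]

X = [1, 0, 1, 1, 1, 0, 0]            # transmitted codeword (valid)
Y = [1, 0, 0, 1, 1, 0, 0]            # received vector with an error in the 3rd bit
print("syndrome of X:", syndrome(X))  # [0, 0, 0] -> no detectable error
print("syndrome of Y:", syndrome(Y))  # nonzero -> error detected
```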

15 Detecting and correcting errors with the help of the syndrome and the error vector (E)
A nonzero element of S represents an error in the output. When all the elements of S are zero, two cases are possible: i) there is no error in the output and Y = X; ii) Y is some valid code vector other than X, which means the transmission errors are undetectable. Now consider an n-bit error vector E which represents the positions of the transmission errors in Y, e.g. E = (0 0 1 0 1), where the nonzero entries represent errors in Y. Using modulo-2 addition we can write Y = X ⊕ E, or X = Y ⊕ E. Relation between the syndrome vector and the error vector: S = Y HT = (X ⊕ E) HT = X HT ⊕ E HT = 0 ⊕ E HT = E HT. Thus the syndrome depends only upon the error pattern; it does not depend upon the particular message.

16 Error correction using syndrome vector
Let X = ( ) be the transmitted code vector and Y = ( ) the received vector. Calculate S = Y HT = [1 1 0]. Since S = Y HT = E HT = [1 1 0], we observe that S = 110 matches the 3rd row of HT. The error pattern corresponding to this syndrome is E = ( ); that is, there is an error in the 3rd bit of Y. The corrected vector is obtained as X = Y ⊕ E = ( ) ⊕ ( ) = ( ), which is the same as the transmitted X. Thus a single-bit error can be corrected using syndrome decoding. Double errors can be detected using extended Hamming codes, in which one more extra bit is provided.
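Since the numeric vectors of this slide were lost in the transcript, the sketch below reproduces the whole procedure on the assumed (7, 4) code used earlier: it builds a look-up table from each single-error syndrome to its error pattern and then corrects a received vector.

```python
# Single-error correction by syndrome look-up for the assumed (7, 4) code.
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
n, q = 7, 3

def syndrome(v):
    return tuple(sum(v[j] * H[row][j] for j in range(n)) % 2 for row in range(q))

# Each single-bit error pattern E has syndrome E H^T equal to the corresponding column of H.
table = {}
for i in range(n):
    E = [0] * n
    E[i] = 1
    table[syndrome(E)] = E

def correct(Y):
    """Return X = Y xor E if the syndrome matches a single-bit error pattern, else Y unchanged."""
    S = syndrome(Y)
    if not any(S):
        return Y                              # zero syndrome: accept Y as a valid codeword
    E = table.get(S, [0] * n)                 # unknown syndrome: more errors than we can correct
    return [y ^ e for y, e in zip(Y, E)]

Y = [1, 0, 0, 1, 1, 0, 0]                     # codeword [1,0,1,1,1,0,0] with its 3rd bit flipped
print("corrected X =", correct(Y))            # recovers [1, 0, 1, 1, 1, 0, 0]
```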

17 Cyclic codes A linear code is called a cyclic code if every cyclic shift of a code vector produces another code vector. Cyclic codes are a subclass of linear block codes and can be in systematic or non-systematic form. An advantage of cyclic codes over other types of codes is that they possess a well-defined mathematical structure, which has led to the development of very efficient decoding schemes for them. There are two important reasons to use cyclic codes: 1) Encoding and syndrome calculation can be easily implemented by using simple shift registers with feedback connections. 2) The mathematical structure of these codes is such that it is possible to design codes having useful error-correcting properties.
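As a sketch of the cyclic-shift property, the code below generates all codewords of a (7, 4) cyclic code from the generator polynomial g(x) = 1 + x + x^3 (an assumed, commonly used choice that is not taken from the slide) and verifies that every cyclic shift of every codeword is again a codeword.

```python
from itertools import product

# (7, 4) cyclic code generated by g(x) = 1 + x + x^3 (assumed example generator polynomial).
n, k = 7, 4
g = [1, 1, 0, 1]                      # coefficients of g(x), lowest degree first

def poly_mul_mod2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def encode(m):
    """Non-systematic cyclic encoding: c(x) = m(x) g(x) over GF(2), padded to length n."""
    c = poly_mul_mod2(m, g)
    return tuple(c + [0] * (n - len(c)))

codewords = {encode(list(m)) for m in product([0, 1], repeat=k)}

# Cyclic shift: (c0, c1, ..., c6) -> (c6, c0, ..., c5); check closure for every codeword.
cyclic = all((c[-1:] + c[:-1]) in codewords for c in codewords)
print("number of codewords:", len(codewords))      # 16
print("closed under cyclic shifts:", cyclic)       # True
```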

18 Convolutional codes In block coding the encoder accepts a k-bit message block and generates an n-bit code word; thus code words are produced on a block-by-block basis. Clearly, provision must be made in the encoder to buffer an entire message block before generating the associated code word. There are applications, however, where the message bits come in serially rather than in large blocks, in which case the use of a buffer may be undesirable. In such situations the use of convolutional coding may be the preferred method. A convolutional encoder operates on the incoming message sequence continuously in a serial manner. The encoder of a binary convolutional code with rate 1/n, measured in bits per symbol, may be viewed as a finite state machine that consists of an m-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders. An L-bit message sequence produces a coded output sequence of length n(L + m) bits. The code rate is r = L/(n(L + m)) bits/symbol. When L >> m, r ≈ 1/n bits/symbol, or r = k/n, where k is the number of input message bits.
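A quick numeric check of the rate formula r = L/(n(L + m)), with assumed values of n, m, and L.

```python
# Effective rate r = L / (n (L + m)) approaches 1/n as the message length L grows.
n, m = 2, 2                       # assumed: rate-1/2 encoder with a 2-stage memory
for L in (5, 100, 10000):
    r = L / (n * (L + m))
    print(f"L = {L:>6}: r = {r:.4f}   (1/n = {1 / n})")
```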

19 Convolutional coding is done by combining a fixed number of input bits; the input bits are stored in a fixed-length shift register and they are combined with the help of modulo-2 adders. This operation is equivalent to binary convolution, hence the name convolutional coding. Thus the output bit stream for successive input bits will be X = x1, x2, ... For the encoder considered here, the number of message bits taken at a time is k = 1 and the number of encoded bits for one message bit is n = 2. Dimension of the code: it is given by n and k, where k is the number of message bits taken at a time by the encoder and n is the number of encoded output bits for one message bit. Hence the dimension of the code is (n, k). Constraint length (m): the constraint length of a convolutional code is defined as the number of shifts over which a single message bit can influence the encoder output. It is expressed in terms of message bits.
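A minimal sketch of a rate-1/2 (k = 1, n = 2) convolutional encoder with a 2-stage shift register; the two modulo-2 adder connections (tap patterns 111 and 101) are a commonly used assumption, not specified by the slide.

```python
# Rate-1/2 convolutional encoder: 2-stage shift register, two modulo-2 adders, outputs interleaved.
G1 = (1, 1, 1)          # taps for the first adder (current bit, previous, one before) -- assumed
G2 = (1, 0, 1)          # taps for the second adder                                    -- assumed
m = len(G1) - 1         # number of memory stages (shift-register length)

def conv_encode(bits):
    state = [0] * m                     # shift-register contents, most recent first
    out = []
    for b in list(bits) + [0] * m:      # append m zero bits to flush the register
        window = [b] + state
        out.append(sum(t * x for t, x in zip(G1, window)) % 2)   # first adder output
        out.append(sum(t * x for t, x in zip(G2, window)) % 2)   # second adder output
        state = [b] + state[:-1]        # shift the register by one position
    return out

msg = [1, 0, 1, 1]                      # example message sequence (L = 4)
code = conv_encode(msg)
print("encoded sequence:", code)        # length n (L + m) = 2 * (4 + 2) = 12 bits
```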

