
1 MD. TARIQ HASAN, SoC Design Lab, Department of Information and Communication Engineering, College of Electronics and Information Engineering, Chosun University, GwangJu, Korea. mdthasan@gmail.com. Advisor: Professor GoangSeog Choi (Ph.D.)

2 Submitted to: Jae-Young Pyun (Ph. D.) Department of Information and Communication Engineering College of Electronics and Information Engineering Chosun University GwangJu, Korea

3 Contents: Introduction; Convolutional Encoder and its Characteristics; Convolutional Encoder Representation; Encoding; ML Decoding; Hard and Soft Decision Decoding; Log-likelihood Function; Viterbi Algorithm; Catastrophic Error and Performance; Best Known Convolutional Codes; Sequential Decoding and Feedback Decoding; References; Questions.

4 Introduction. Fig. 1: General communication block diagram showing the channel encoder and decoder (the convolutional code blocks). The transmit chain runs: information source, format, source encode, encrypt, channel encode, multiplex, pulse modulate, bandpass modulate, frequency spread, multiple access, channel; the receive chain mirrors it: multiple access, frequency despread, demodulate and sample, detect, demultiplex, channel decode, decrypt, source decode, format, information sink.

5 Introduction (cont.). Fig. 2: Encoder/decoder and modulator/demodulator blocks of a simplex communication link: information source, rate 1/n convolutional encoder, modulator, AWGN channel, demodulator, rate 1/n convolutional decoder, information sink.

6 Introduction (cont.). A convolutional code (n, k, K) is an error-detecting and error-correcting code with low implementation complexity. The word convolution means 'twist'. K is known as the constraint length and represents the shift-register size. Since it is not a block code and can work on the fly, n does not necessarily indicate the block or codeword length [1]. Fig. 3: (a) Convolutional encoder and decoder (b) twist.

7 Convolutional Encoder. Fig. 4: Convolutional encoder with constraint length K and rate k/n: the input sequence m = m1, m2, ..., mi, ... is shifted into a kK-stage shift register k bits at a time, and n modulo-2 adders form the output branch word, where u_ji is the jth binary code symbol of branch word U_i.

8 Characteristics. A convolutional encoder does not need to segment the data stream into blocks of fixed size. It encodes the complete data stream into a single codeword. It is a finite-state machine with memory. Its effective code rate is less than k/n, because flush (tail) bits are appended to clear the register; a small worked example follows.
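As a worked check of that last point, assuming k = 1 and the K - 1 flush bits shown later on the Encoding slide, the effective rate of an L-bit message is

    r_{\text{eff}} = \frac{Lk}{n\,(L + K - 1)} = \frac{3 \cdot 1}{2\,(3 + 3 - 1)} = \frac{3}{10} < \frac{k}{n} = \frac{1}{2}

using the deck's rate-1/2, K = 3 encoder and the 3-bit message m = 101 of Fig. 8 (3 message bits produce 10 code bits).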

9 Convolutional Encoder Representation. A convolutional encoder can be represented in the following ways: (a) a connection pictorial, (b) generator polynomials, e.g. g(X) = 1 + X + X^2, and (c) a state diagram, which carries no time information. Fig. 5: A convolutional encoder representation (a) connection pictorial (b) polynomials (c) state diagram.
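As a check that the polynomial view reproduces the deck's running example (the rate-1/2 code with generators 111 and 101 from Table 1, i.e. g1(X) = 1 + X + X^2 and g2(X) = 1 + X^2), encode m = 101 by polynomial multiplication over GF(2):

    m(X) = 1 + X^2
    U_1(X) = m(X)\,g_1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4 \;\leftrightarrow\; 1\,1\,0\,1\,1
    U_2(X) = m(X)\,g_2(X) = (1 + X^2)(1 + X^2) = 1 + X^4 \;\leftrightarrow\; 1\,0\,0\,0\,1

Interleaving the two output streams gives U = 11 10 00 10 11, which matches the Encoding slide and the trellis diagram.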

10 Convolutional Encoder Representation (cont.): Tree diagram. Fig. 6: (a) Tree, traversed upward (b) the tree rotated 90 degrees (c) tree diagram.

11 Convolutional Encoder Representation (cont.): Trellis diagram. Fig. 7: Trellis diagram of a convolutional encoder; unlike the state diagram, it carries time information (stages t1 ... t6). Branches are labelled input/output (1/11, 0/00, 0/10, 1/01, ...); the example input 10100 (the last two bits are flush/tail bits) produces the output 11 10 00 10 11.

12 Encoding. For the message m = 101, the encoder produces its output as follows. Fig. 8: Convolutional encoding for the message m = 101: register snapshots (a)-(f) show the input bits (followed by the flush bits) being shifted in, with the two modulo-2 adders producing the successive branch words 11, 10, 00, 10, 11.
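A minimal Python sketch of this encoder (rate 1/2, K = 3, generators 111 and 101 as in Table 1; the function name conv_encode and the list-based register are my own choices, not from the deck):

    def conv_encode(msg_bits, generators=((1, 1, 1), (1, 0, 1))):
        """Rate-1/2, K = 3 convolutional encoder; appends K-1 flush zeros."""
        K = len(generators[0])
        reg = [0] * K                             # shift register, reg[0] = newest bit
        out = []
        for bit in msg_bits + [0] * (K - 1):      # message followed by flush/tail bits
            reg = [bit] + reg[:-1]                # shift the new bit in
            for g in generators:                  # one modulo-2 adder per generator
                out.append(sum(r & t for r, t in zip(reg, g)) % 2)
        return out

    print(conv_encode([1, 0, 1]))   # -> [1, 1, 1, 0, 0, 0, 1, 0, 1, 1], i.e. 11 10 00 10 11 as in Fig. 8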

13 State Diagram. The state diagram can be drawn from this example: the state is the content of the rightmost K - 1 register stages (the two most recent past input bits), giving the states a = 00, b = 10, c = 01, d = 11. Fig. 9: State diagram drawing (register contents X1 X2 X3, present state, next state).
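A small Python sketch that tabulates the (present state, input) -> (next state, output) transitions behind Fig. 9, again assuming the generators 111 and 101 (the helper name transitions is mine):

    def transitions(generators=((1, 1, 1), (1, 0, 1))):
        """Build the (state, input) -> (next state, branch word) table for a K = 3
        encoder; a state is the pair of most recent past input bits, newest first."""
        table = {}
        for s1 in (0, 1):
            for s0 in (0, 1):
                for bit in (0, 1):
                    reg = (bit, s1, s0)           # register after shifting 'bit' in
                    out = tuple(sum(r & g for r, g in zip(reg, gen)) % 2
                                for gen in generators)
                    table[((s1, s0), bit)] = ((bit, s1), out)   # drop the oldest bit
        return table

    for (state, bit), (nxt, out) in sorted(transitions().items()):
        print(f"state {state} --{bit}/{out}--> {nxt}")   # e.g. state (0, 0) --1/(1, 1)--> (1, 0)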

14 Representation of Encoder. Traversing the tree diagram. Fig. 10: Encoding and the corresponding branches in a tree diagram (stages t1 ... t5).

15 Maximum Likelihood (ML) Decoding. If all input message sequences are equally likely, a decoder that achieves the minimum probability of error is one that compares the conditional probabilities, also called the likelihood functions, P(Z|U^(m)), where Z is the received sequence and U^(m) is one of the possible transmitted sequences, and chooses the maximum over all U^(m); that is, the ML decoder chooses the most likely sequence. Fig. 11: Understanding ML decoding by analogy: asked to find Neo among several candidates with probabilities p1 ... p7, the decoder picks the candidate with the largest probability and so identifies him with minimum error.

16 Maximum Likelihood (ML) Decoding (cont.). For a message of L branch words there are 2^L possible sequences (e.g. L = 5 gives 2^L = 32). Therefore, in the maximum likelihood context, the decoder chooses a particular U^(m') as the transmitted sequence if the likelihood P(Z|U^(m')) is greater than the likelihoods of all the other possible transmitted sequences. Such an optimal decoder, which minimizes the error probability (for the case where all transmitted sequences are equally likely), is known as a maximum likelihood decoder. In an AWGN channel, which is memoryless, the noise affects each code symbol independently. For a convolutional code of rate 1/n, the likelihood can therefore be expressed as shown below. Fig. 12: The lowest unit-level comparison.
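In equation form, this is the standard per-branch factorization of the likelihood (as in Sklar [2]):

    P(Z \mid U^{(m)}) \;=\; \prod_{i} P(Z_i \mid U_i^{(m)}) \;=\; \prod_{i} \prod_{j=1}^{n} P\!\left(z_{ji} \mid u_{ji}^{(m)}\right)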

17 Maximum Likelihood (ML) Decoding (cont.). Here Z_i is the ith branch of the received sequence Z, U_i^(m) is the ith branch of a particular codeword sequence U^(m), z_ji is the jth code symbol of Z_i, u_ji^(m) is the jth code symbol of U_i^(m), and each branch comprises n code symbols. Generally, it is computationally more convenient to use the logarithm of the likelihood function, since this replaces the multiplication of terms by summation. The log-likelihood function can therefore be expressed as log P(Z|U^(m)) = Σ_i Σ_j log P(z_ji|u_ji^(m)), with j running from 1 to n. Since for a binary code the number of possible sequences made up of L branch words is 2^L, maximum likelihood decoding of such a received sequence using a tree diagram requires a brute-force, exhaustive comparison of 2^L accumulated log-likelihood metrics, representing all the possible different codeword sequences that could have been transmitted. Hence it is not practical to perform maximum likelihood decoding with a tree structure.

18 Maximum Likelihood (ML) Decoding (cont.). With the trellis representation of the code, it is possible to configure a decoder that discards the paths that could not possibly be candidates for the maximum likelihood sequence. The decoded path is then chosen from a reduced set of surviving paths. Such a decoder is still optimum, in the sense that the decoded path is the same as the one obtained from a brute-force maximum likelihood decoder, but the early rejection of unlikely paths reduces the decoding complexity. Fig. 13: The early rejection of unlikely paths in the trellis diagram reduces the decoding complexity and avoids the brute-force comparison.

19 Hard versus Soft Decisions. The decoder plays an important role in estimating the transmitted bits. By analogy, suppose you (the decoder) need to find out the age of Mr. X. As a hard-decision decoder you may ask only one or two yes/no questions; if you ask "Is your age more than 40?", Mr. X answers yes or no, and from so few questions you cannot estimate his age exactly. If you are allowed to ask many questions, you may determine the age exactly. Fig. 14: Demodulator and decoder in a receiver (information sink, rate 1/n convolutional decoder, demodulator, AWGN channel), and an angry face asking "How old am I?!"

20 Hard versus Soft Decisions (cont.). Hard-decision decoding: the output of the demodulator is quantized to two levels, zero and one, and fed into the decoder; it provides the decoder with no other information, such as how reliable the decision is. Soft-decision decoding: the output of the demodulator is quantized to more than two levels (e.g. the eight levels 000 ... 111 of Fig. 15) and fed into the decoder, which also gives it a measure of confidence. In an AWGN channel, soft-decision decoding improves performance by about 2 dB over hard decision; with infinitely fine quantization the improvement approaches 2.2 dB, at the cost of more memory and higher processing speed (multiple bits per quantized sample must be handled). In a fading channel, an improvement of about 6 dB can be obtained by soft-decision decoding. (Soft-decision decoding is generally not used with block codes because of its complexity.) Fig. 15: Hard and soft decision decoding and quantization levels: the likelihoods p(z|s1) and p(z|s2), with two-level (0/1) and eight-level (000-111) quantization of the demodulator output.
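A minimal sketch of such an 8-level (3-bit) quantizer; the uniform thresholds, the range limit of ±1 and the function name soft_quantize are illustrative assumptions, not taken from the deck:

    def soft_quantize(z, levels=8, limit=1.0):
        """Map an analog demodulator output z to one of `levels` integers.
        0 means 'very confident s1 was sent', levels-1 'very confident s2'."""
        step = 2 * limit / levels
        idx = int((z + limit) // step)        # uniform quantizer over [-limit, +limit]
        return min(max(idx, 0), levels - 1)   # clip samples outside the range

    print([soft_quantize(z) for z in (-1.2, -0.3, 0.05, 0.9)])   # -> [0, 2, 4, 7]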

21 Log-likelihood Function: Binary Symmetric Channel. The modulator transmits U = (U1, U2, ...) and the demodulator delivers Z = (Z1, Z2, ...); U = Z when there is no error. In a binary symmetric channel (BSC) the crossover probabilities are P(1|0) = P(0|1) = p and P(1|1) = P(0|0) = 1 - p. Suppose that U^(m) and Z are each L-bit-long sequences and that they differ in d_m positions [i.e., the Hamming distance between U^(m) and Z is d_m]; then the probability that this U^(m) was transformed into this Z at distance d_m can be written as shown below. Minimizing the Hamming distance corresponds to maximizing the likelihood. The BSC is an example of a hard-decision channel. Fig. 16: U and Z at the Tx and Rx; binary symmetric channel and word length (d_m bits differ, L - d_m bits are the same).
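Written out, this is the standard BSC likelihood and its logarithm:

    P(Z \mid U^{(m)}) = p^{\,d_m}(1-p)^{\,L-d_m}, \qquad
    \log P(Z \mid U^{(m)}) = -\,d_m \log\frac{1-p}{p} + L \log(1-p).

Since log((1 - p)/p) > 0 for p < 1/2 and the second term is the same for every codeword, the log-likelihood is largest for the codeword with the smallest Hamming distance d_m, which is the statement on the slide.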

22 Log-likelihood Function: Gaussian Channel. For a Gaussian channel, each demodulator output symbol z_ji is a value from a continuous alphabet, so at the detector z_ji cannot be labelled as correct or incorrect. Sending the decoder such soft decisions can be viewed as sending it a family of conditional probabilities of the different symbols. Maximizing P(Z|U^(m)) is equivalent to maximizing the inner product, or correlation, between the codeword sequence U^(m) and the analog-valued received sequence Z; the likelihood is maximum when the Euclidean distance between Z and U^(m) is minimum. To process the signal with a digital system, the receiver must quantize it; this quantized Gaussian channel, known as the soft-decision channel, is the channel model assumed for soft-decision decoding.
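The distance/correlation equivalence follows in one line, assuming the candidate codeword symbols are mapped to equal-energy values (e.g. ±1):

    \|Z - U^{(m)}\|^2 \;=\; \|Z\|^2 \;-\; 2\,\langle Z, U^{(m)}\rangle \;+\; \|U^{(m)}\|^2

Since ||Z||^2 is fixed by the received signal and ||U^(m)||^2 is the same for every equal-energy candidate, minimizing the Euclidean distance is the same as maximizing the correlation <Z, U^(m)>.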

23 Viterbi Convolutional Decoding Algorithm. The Viterbi algorithm performs maximum likelihood decoding while reducing the computational load by taking advantage of the trellis structure. The advantage of Viterbi decoding over brute-force decoding is that the complexity of a Viterbi decoder is not a function of the number of symbols in the codeword sequence. The algorithm involves calculating a measure of similarity (distance) between the received signal at time t_i and all the trellis paths entering each state at time t_i. The Viterbi algorithm removes from consideration the trellis paths that could not possibly be candidates for the maximum likelihood choice. When two paths enter the same state, the one having the best metric (minimum distance) is chosen; this path is called the surviving path. This selection of surviving paths is performed for all the states. The decoder continues in this way, advancing deeper into the trellis and making decisions by eliminating the least likely paths. The early rejection of the unlikely paths reduces the decoding complexity.

24 Viterbi Algorithm. Initial setup:
- For a data block of L bits, form the trellis. The trellis has L + K - 1 sections (levels); it starts at time t1 and ends at time t_(L+K).
- Label all the branches in the trellis with their corresponding branch metric.
- For each state S(t_i) in {0, 1, 2, ..., 2^(K-1) - 1} at time t_i, define a parameter Γ(S(t_i), t_i).
Now follow these steps:
1. Set Γ(0, t1) = 0 and i = 2.
2. At time t_i, compute the partial path metrics for all the paths entering each state.
3. Set Γ(S(t_i), t_i) equal to the best partial path metric entering each state at time t_i. Keep the survivor path and delete the dead paths from the trellis.
4. If i < L + K, increase i by 1 and return to step 2.
5. After time t_(L+K), start from the zero state and follow the surviving branches back through the trellis. The path found is unique and corresponds to the ML codeword.
A decoder sketch following these steps is given below.
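A minimal hard-decision Viterbi decoder in Python for the deck's rate-1/2, K = 3 code (generators 111 and 101 from Table 1). It keeps one survivor per state, exactly as in the steps above, but stores each survivor's decoded bits directly instead of tracing back; the function and variable names are mine:

    def branch(state, bit, generators=((1, 1, 1), (1, 0, 1))):
        """(state, input bit) -> (next state, branch word); a state is the
        pair of most recent past input bits, newest first."""
        reg = (bit,) + state
        out = tuple(sum(r & g for r, g in zip(reg, gen)) % 2 for gen in generators)
        return reg[:-1], out

    def viterbi_decode(received, n=2, K=3):
        """Decode a flat list of hard-decision code bits (n per branch); the
        message is assumed flushed with K-1 zeros, which are stripped again."""
        branch_words = [tuple(received[i:i + n]) for i in range(0, len(received), n)]
        start = (0,) * (K - 1)
        paths = {start: (0, [])}          # state -> (accumulated Hamming metric, bits)
        for z in branch_words:
            new_paths = {}
            for state, (metric, bits) in paths.items():
                for bit in (0, 1):
                    nxt, out = branch(state, bit)
                    m = metric + sum(a != b for a, b in zip(out, z))   # add
                    if nxt not in new_paths or m < new_paths[nxt][0]:  # compare-select
                        new_paths[nxt] = (m, bits + [bit])
            paths = new_paths
        best_metric, bits = paths[start]  # a flushed message ends in the all-zero state
        return bits[:-(K - 1)]            # drop the flush bits

    # Codeword for m = 101 is 11 10 00 10 11; flip one bit (11 00 00 10 11):
    print(viterbi_decode([1, 1, 0, 0, 0, 0, 1, 0, 1, 1]))   # -> [1, 0, 1]

Keeping the decoded bits alongside each survivor, rather than doing an explicit trace-back, keeps the sketch short; a production decoder would normally store only survivor pointers and trace back at the end.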

25 Add-Compare-Select Decoding. Fig. 17: Convolutional decoding on the trellis: an add-compare-select example over stages t1 ... t6, with branch metrics on the branches, cumulative path metrics at the states a = 00, b = 10, c = 01, d = 11, a received sequence Z, and the decoded message m.

26 Add-Compare-Select Decoding (cont.). Fig. 18: Convolutional decoding (continued): the same trellis one stage further on, with the updated cumulative path metrics at each state.

27 Add-Compare-Select Decoding (cont.). Fig. 19: Convolutional decoding (continued): a further stage of the add-compare-select recursion, again showing the updated cumulative path metrics at each state.

28 Viterbi Decoder. The storage requirements of the Viterbi decoder grow exponentially with the constraint length K: the amount of path storage required is u = h * 2^(K-1), where h is the length of the information-bit history kept per state. The minimum-distance path that diverges from and remerges with the all-zeros path is known as the minimum free distance, or simply the free distance, denoted d_f. The number of errors that can be corrected is t = ⌊(d_f - 1)/2⌋. Fig. 20: Trellis diagram showing the all-zeros path and a diverging/remerging error path.

29 Catastrophic Error. A catastrophic error is defined as an event whereby a finite number of code symbol errors causes an infinite number of decoded data bit errors. Reason: the generator polynomials g(X) have a common polynomial factor. Example: g1(X) = 1 + X and g2(X) = 1 + X^2 = (1 + X)(1 + X) share the factor 1 + X. Fig. 21: Encoder and state diagram for catastrophic error.
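Following the slide's criterion, a small Python sketch can test a set of generators for a common factor by computing their polynomial GCD over GF(2) (polynomials are represented as bit masks with the constant term in bit 0; the helper names are mine):

    def gf2_mod(a, b):
        """Remainder of GF(2) polynomial a modulo b (polynomials as bit masks)."""
        while b and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        return a

    def gf2_gcd(a, b):
        while b:
            a, b = b, gf2_mod(a, b)
        return a

    def is_catastrophic(*gens):
        """True if the generators share a polynomial factor of degree >= 1."""
        g = gens[0]
        for h in gens[1:]:
            g = gf2_gcd(g, h)
        return g.bit_length() > 1

    # g1 = 1 + X (0b11) and g2 = 1 + X^2 (0b101) share the factor 1 + X:
    print(is_catastrophic(0b11, 0b101))    # True  (the slide's catastrophic example)
    # g1 = 1 + X + X^2 (0b111) and g2 = 1 + X^2 (0b101) do not:
    print(is_catastrophic(0b111, 0b101))   # False (the deck's running example)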

30 Performance and Gain for Convolutional Codes. Treating the state diagram as a signal flow graph, with the dummy variables X_a ... X_e standing for the states and D raised to the Hamming weight of each branch output as the branch gain, the state equations are X_b = D^2 X_a + X_c, X_c = D X_b + D X_d, X_d = D X_b + D X_d, and X_e = D^2 X_c; combining them gives X_c (1 - 2D) = D^3 X_a. If N is included as a factor in every branch transition caused by an input one, the transfer function T(D, N) can be written down in the same way, and the upper bound on the probability of bit error P_B for hard-decision decoding, with channel symbol error probability p, follows from it, as shown below. Fig. 22: State diagram.
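For this rate-1/2, K = 3 code the standard results (Sklar [2]), which I take to be the expressions shown as images on the slide, are:

    T(D) = \frac{X_e}{X_a} = \frac{D^5}{1 - 2D}, \qquad
    T(D, N) = \frac{D^5 N}{1 - 2DN},
    P_B \;\le\; \left.\frac{dT(D, N)}{dN}\right|_{N=1,\; D = 2\sqrt{p(1-p)}}
         \;=\; \left.\frac{D^5}{(1 - 2D)^2}\right|_{D = 2\sqrt{p(1-p)}}

where, for coherent BPSK with hard-decision decoding, p = Q(\sqrt{2E_c/N_0}) and E_c = r E_b is the energy per channel code symbol.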

31 Performance and Gain for Convolutional Codes (cont.). Coding gain: the reduction in the E_b/N_0 required for a given probability of error is called the coding gain. The upper bound on the coding gain in dB is 10 log10(r d_f), where r is the code rate and d_f is the free distance. Performance improves with an increase in the number of output code symbols n (equivalently a decrease in the code rate r), a decrease in the number of input symbols k, an increase in the constraint length K, and good modulation such as coherent PSK. Fig. 23: Probability of bit error of convolutional codes for different values of K.
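As a quick worked number, taking that bound for the deck's rate-1/2, K = 3 code with d_f = 5 (Table 1):

    G \;\le\; 10 \log_{10}(r\, d_f) \;=\; 10 \log_{10}(0.5 \times 5) \;=\; 10 \log_{10} 2.5 \;\approx\; 4.0\ \text{dB}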

32 Best Known Convolutional Codes. Criteria for the best convolutional codes: they should not have catastrophic error propagation, and they should have the maximum free distance. Table 1: Some convolutional codes that perform the best.

Rate   Constraint length   Free distance   Code vectors
1/2    3                   5               g1 = 111, g2 = 101
1/2    4                   6               1111, 1011
1/3    4                   10              1111, 1011, 1101

33 Soft-Decision Viterbi Decoding. In soft-decision decoding, the demodulator no longer delivers firm decisions; it delivers quantized noisy samples (soft decisions) to the decoder. Instead of the Hamming distance, the Euclidean distance is used as the metric, or equivalently the path with the maximum correlation, rather than the minimum distance, is selected. Fig. 24: Correlation of a function.
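A minimal sketch of the metric change, assuming the branch-word bits are mapped to antipodal values (0 -> -1, 1 -> +1); the function name soft_metric is mine, and in the Viterbi sketch above it would replace the Hamming branch metric (with the comparison flipped, since larger correlation is better):

    def soft_metric(z_soft, branch_word):
        """Correlation between soft received samples and a candidate branch word;
        bits are mapped 0 -> -1, 1 -> +1. Larger values mean a better match."""
        return sum(z * (2 * b - 1) for z, b in zip(z_soft, branch_word))

    # Samples 0.9 and -0.1: the second sample only weakly favours a 0, so the two
    # candidate branch words end up with close metrics instead of a hard 1-vs-0 call.
    print(soft_metric([0.9, -0.1], (1, 0)))   # 1.0
    print(soft_metric([0.9, -0.1], (1, 1)))   # 0.8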

34 Sequential Decoding. Prior to the discovery of the Viterbi algorithm, other algorithms had been proposed for decoding convolutional codes; the earliest was the sequential decoding algorithm. The decoder starts at the time t1 node of the tree and generates both paths leaving that node, then follows the path that agrees with the received n code symbols. At the next level in the tree, the decoder again generates both paths leaving that node and follows the path agreeing with the second group of n code symbols. Proceeding in this manner, the decoder quickly penetrates the tree. If the received n code symbols coincide with one of the generated paths, the decoder follows that path. If there is no agreement, the decoder follows the most likely path but keeps a cumulative count of the number of disagreements between the received symbols and the branch words on the path being followed. If two branches appear equally likely, the receiver uses an arbitrary rule, such as following the zero-input path. At each new level in the tree, the decoder generates new branches and compares them with the next set of n received code symbols. The search continues to penetrate the tree along the most likely path while maintaining the cumulative disagreement count. If the disagreement count exceeds a certain number, the decoder decides that it is on an incorrect path, backs out of the path, and tries another.

35 Sequential Decoding (cont.). It works on a trial-and-error basis: generate both paths leaving the current node; follow the path that agrees with the received symbols; if there is no agreement, follow the most likely path and keep a cumulative count of disagreements; if two paths are equally likely, follow the zero-input path; and if the disagreement count exceeds a certain number, decide that the current path is incorrect, back out of it, and try another. Fig. 25: Tree diagram for sequential decoding (stages t1 ... t5).

36 Comparisons and Limitations of Viterbi and Sequential Decoding. In the Viterbi algorithm the error probability decreases exponentially as the constraint length increases, but the number of code states, and hence the decoder complexity, grows exponentially with it. In sequential decoding, by contrast, the number of states searched is essentially independent of the constraint length. Fig. 26: Comparison of Viterbi and sequential decoding.

37 Feedback Decoding. For the first branch, the decoder looks ahead L branches into the tree and computes the 2^L (here eight, so L = 3) cumulative Hamming path metrics, then finds which of them is the minimum. If the minimum lies in the lower half of the tree, a 1 is decoded; if it lies in the upper half, a 0 is decoded. The decoder then moves forward one branch, again generates eight cumulative Hamming path metrics, and carries on as before. In the example of Fig. 27 the upper-half metrics are 3, 3, 6, 4 and the lower-half metrics are 2, 2, 1, 3, so at t1 a 1 is delivered as the decoded bit. Feedback decoding performs almost like the Viterbi algorithm, with error-correcting capability t = (d_f - 1)/2, but the complexity increases as the look-ahead length L increases. Fig. 27: Feedback decoding (tree with input/output branch labels and cumulative metrics over stages t1 ... t5).

38 References.
[1] D. Roddy and J. Coolen, Electronic Communications, 2nd ed., Prentice-Hall, Virginia, 1995.
[2] B. Sklar, Digital Communications: Fundamentals and Applications, 2nd ed., Prentice Hall, New Jersey, 2001.
[3] en.wikipedia.org, accessed April 11, 2013.

39 Question. Q: Draw a convolutional encoder for constraint length 3, find the encoded codeword for any 3-bit data word, and show that convolutional codes are linear, just like linear block codes. Thank you. Question & Answer.

