ECED 4504 Digital Transmission Theory


ECED 4504 Digital Transmission Theory: Decoding of Convolutional Codes

Topics today
- Revision of convolutional encoding: state diagram and generator matrix
- ML decoding of convolutional codes and the free distance
- Viterbi decoding: trellis diagram, surviving path, ending the decoding
- Soft and hard decoding
- Bounding the error rate of convolutional codes: generating function and coding gain, weight spectrum, error events and BER

Convolutional encoding
The figure shows the general structure of a convolutional encoder: with memory depth L = v - 1, each block of k input bits is encoded into n output bits of an (n,k,L) code.
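As a minimal sketch of this structure, the encoder below implements a rate-1/2, L = 2 code. The generator polynomials g1 = 1 + D + D^2 and g2 = 1 + D^2 (octal 7,5) are an assumption for illustration only; the exact connections of the encoder in the slides are not recoverable from this transcript, so the output bits may differ from the examples shown later.

```python
# Sketch of a rate-1/2 convolutional encoder with memory depth L = 2 (K = L + 1 = 3).
# Generators (7,5)_8 are assumed for illustration.

def conv_encode(msg_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Encode a bit sequence, flushing the register with L tail zeros."""
    L = len(g1) - 1
    reg = [0] * L                          # shift register: previous L input bits
    out = []
    for u in list(msg_bits) + [0] * L:     # append tail bits to clear the encoder
        window = [u] + reg                 # current input followed by register contents
        v1 = sum(b * g for b, g in zip(window, g1)) % 2
        v2 = sum(b * g for b, g in zip(window, g2)) % 2
        out.append((v1, v2))               # n = 2 output bits per input bit
        reg = [u] + reg[:-1]               # shift the register
    return out

print(conv_encode([1, 1, 1, 0, 1]))        # 7 output pairs: 5 message bits + 2 tail bits
```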

Example of using the generator matrix. Verify that you can obtain the result shown!

State diagram of a convolutional code
Each new block of k input bits causes a transition to a new state (see two slides back). Hence there are 2^k branches leaving each state. Assuming the encoder starts in the all-zero state, the encoded word for any input sequence can thus be read from the diagram. For instance, below for u = (1 1 1 0 1) the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced. Verify that you obtain the same result!
(Figure: encoder state diagram for an (n,k,L) = (2,1,2) coder, with input and output states labeled.)

Extracting the generating function by splitting and labeling the state diagram
The state diagram can be modified to yield information on the code's distance properties. Rules:
(1) Split S0 into an initial and a final state and remove the self-loop.
(2) Label each branch by the branch gain X^i, where i is the weight of the n encoded bits on that branch.
(3) Each path connecting the initial state to the final state represents a nonzero code word that diverges from and re-merges with S0 only once.
The path gain is the product of the branch gains along a path, and the code weight is the power of X in the path gain. The code weight distribution is obtained by summing the weighted path gains to compute the generating function (input-output equation)
T(X) = Σ_i A_i X^i,
where A_i is the number of encoded words of weight i.

Example of splitting and labeling the state diagram
(Figure: the split and labeled state diagram, with branch gains such as X^2 and X^1 marked on the branches.)
The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has path gain X^2 · X^1 · X^1 · X^1 · X^2 · X^1 · X^2 · X^2 = X^12, and the corresponding code word has weight 12. The generating function is obtained by summing the gains of all such paths; check where each of its terms comes from.

Distance properties of convolutional codes
Code strength is measured by the minimum free distance
d_free = min{ w(X) : X generated by a nonzero message sequence },
where w(X) is the weight of the entire encoded sequence X generated by a message sequence. The minimum free distance equals
- the minimum weight of all the paths in the state diagram that diverge from and re-merge with the all-zero state S0, and
- the lowest power of X in the generating function T(X).
The corresponding coding gain is G_c = R_c d_free / 2 (see the table of code gains later).
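To illustrate the definition, the sketch below searches the encoder state diagram for the minimum-weight path that diverges from and re-merges with the all-zero state. The (7,5)_8 generators are the same assumption as in the earlier encoder sketch.

```python
# Sketch: compute d_free by a shortest-path (Dijkstra) search over the encoder state
# diagram, where the branch "length" is the Hamming weight of the n = 2 output bits.
# Generators (7,5)_8 and constraint length K = 3 are assumed for illustration.
import heapq

G1, G2, K = 0b111, 0b101, 3

def step(state, bit):
    """One encoder transition: returns (next_state, output weight)."""
    reg = (bit << (K - 1)) | state
    v1 = bin(reg & G1).count("1") & 1
    v2 = bin(reg & G2).count("1") & 1
    return reg >> 1, v1 + v2

def free_distance():
    """Minimum weight over all paths that leave state 0 and first return to it."""
    start, w0 = step(0, 1)                 # diverging branch: input bit 1 from state 0
    dist, best = {start: w0}, float("inf")
    pq = [(w0, start)]
    while pq:
        w, s = heapq.heappop(pq)
        if w > dist.get(s, float("inf")) or w >= best:
            continue
        for bit in (0, 1):
            ns, bw = step(s, bit)
            if ns == 0:                    # re-merged with the all-zero state
                best = min(best, w + bw)
            elif w + bw < dist.get(ns, float("inf")):
                dist[ns] = w + bw
                heapq.heappush(pq, (w + bw, ns))
    return best

print(free_distance())                     # the (7,5) code has d_free = 5
```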

Decoding convolutional codes
Maximum likelihood decoding of convolutional codes means finding the path through the code trellis that was most likely transmitted. Maximum likelihood decoding is therefore based on calculating the Hamming distance between the received word and each branch word of a candidate encoded sequence. Assume that the information symbols applied to the AWGN channel are equally likely and independent. Denote by x the transmitted (error-free) code bits and by y the received bits on which the decoder makes its bit decisions. The probability of receiving the sequence y given that x was transmitted is then
p(y | x) = Π_j p(y_j | x_j).
The most likely path through the trellis maximizes this metric. Because every factor is a probability smaller than one, the logarithmic metric
ln p(y | x) = Σ_j ln p(y_j | x_j)
is maximized instead, which alleviates the computations.

Example of exhaustive maximum likelihood detection
Assume a three-bit message is to be transmitted. To clear the encoder, two zero bits are appended after the message. Thus 5 bits are fed into the encoder and 10 bits are produced. Assume the channel error probability is p = 0.1 and that the channel output is 10, 01, 10, 11, 00. What comes out of the decoder, i.e. what was most likely the transmitted sequence?

For a binary symmetric channel with error probability p, a candidate code word at Hamming distance d from the received n-bit sequence has the metric p^d (1 - p)^(n - d), i.e. ln p(y | x) = d ln p + (n - d) ln(1 - p): the first term weights the bits received in error, the second the correctly received bits.
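A brute-force sketch of this exhaustive search is given below: all 2^3 = 8 candidate messages are encoded and scored against the received word. The (7,5)_8 generators are the same assumption as in the earlier encoder sketch, so the selected sequence may differ from the answer obtained with the slide's actual encoder.

```python
# Sketch of exhaustive ML detection: enumerate all 2^3 messages, encode each
# (with two tail zeros), and pick the candidate with the largest metric
# ln p(y|x) = d*ln(p) + (n-d)*ln(1-p). Generators (7,5)_8 are assumed.
import itertools
import math

def conv_encode(msg, g1=(1, 1, 1), g2=(1, 0, 1)):
    L = len(g1) - 1
    reg, out = [0] * L, []
    for u in list(msg) + [0] * L:                      # flush with L tail zeros
        window = [u] + reg
        out += [sum(b * g for b, g in zip(window, g1)) % 2,
                sum(b * g for b, g in zip(window, g2)) % 2]
        reg = [u] + reg[:-1]
    return out

received = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]              # 10, 01, 10, 11, 00
p = 0.1                                                # channel error probability

best = None
for msg in itertools.product([0, 1], repeat=3):
    code = conv_encode(msg)
    d = sum(c != r for c, r in zip(code, received))    # Hamming distance
    metric = d * math.log(p) + (len(code) - d) * math.log(1 - p)
    if best is None or metric > best[0]:
        best = (metric, d, list(msg))

print("largest metric:", round(best[0], 3), "distance:", best[1], "message:", best[2])
```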

The candidate with the largest metric is selected; verify that you get the same result! Note also the Hamming distances!

Soft and hard decoding
Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability of the received sequence. However, in soft decoding the energies of the decision regions must be accounted for, and hence the Euclidean metric d_E, rather than the Hamming metric, is used.
(Figure: the transition for Pr[3|0] is indicated by the arrow.)

Decision regions
Decoding can be realized on the soft-decision or the hard-decision principle. For soft decoding, the reliability (measured by bit energy) of each decision region must be known. Example: decoding a BPSK signal. The matched-filter output is a continuous number, and in AWGN it is Gaussian distributed. For soft decoding the output range is partitioned into several decision regions. The transition probability Pr[3|0] is, for example, the probability that a transmitted '0' falls into region no. 3.
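A small sketch of how such transition probabilities can be evaluated is shown below. The signal levels, noise variance, and the eight-region uniform threshold partition are assumptions chosen only to illustrate the calculation.

```python
# Sketch: transition probabilities Pr[region j | '0' sent] for soft-decision BPSK in AWGN.
# Assumptions: '0' is sent as -sqrt(Eb), the matched-filter output ~ N(-sqrt(Eb), N0/2),
# and the output range is split into 8 regions by 7 uniform thresholds.
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

Eb, N0 = 1.0, 1.0
mean, sigma = -math.sqrt(Eb), math.sqrt(N0 / 2.0)

thresholds = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]     # assumed partition
edges = [-math.inf] + thresholds + [math.inf]

for j in range(len(edges) - 1):
    lo, hi = edges[j], edges[j + 1]
    # P(lo < y <= hi | '0') = Q((lo - mean)/sigma) - Q((hi - mean)/sigma)
    prob = Q((lo - mean) / sigma) - Q((hi - mean) / sigma)
    print(f"Pr[{j} | 0] = {prob:.4f}")
```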

The Viterbi algorithm
The exhaustive maximum-likelihood method must search all the paths in the trellis of an (n,k,L) code, a number that grows exponentially with the message length. With the Viterbi algorithm the search is reduced to comparing the surviving paths, where 2^L is the number of nodes (states) and 2^k is the number of branches entering each node (see the next slide!). The optimum decoding problem is to find the minimum-distance path from the initial state back to the initial state (below, from S0 to S0). The path metric is the sum of the branch metrics along the path, and the correct path yields the optimum (minimum-distance, maximum-likelihood) accumulated metric. The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis.
(Figure: channel output sequence at the RX, and the TX encoder output sequence for the m-th path.)

The survivor path
Assume for simplicity a convolutional code with k = 1, so that up to 2^k = 2 branches can enter each node of the trellis diagram. Assume the optimal path passes through node S. The metric comparison is made by adding the metric of S to the metrics accumulated via S1 and S2. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path). For this reason the non-surviving path can be discarded, so not all path alternatives need to be considered. Note that in principle the whole transmitted sequence must be received before a decision can be made; in practice, however, storing the survivors over an input length of about 5L is quite adequate.
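The sketch below puts these ideas together as a minimal hard-decision Viterbi decoder for a rate-1/2, L = 2 code. The (7,5)_8 generators and the received word used in the demonstration are assumptions for illustration, not the slide's exact example; with one channel error injected, the decoder should recover the message 1 1 1 0 1.

```python
# Minimal hard-decision Viterbi decoder sketch for a rate-1/2, L = 2 code.
# Generators (7,5)_8 and the received word rx below are assumed for illustration.

G1, G2, K = 0b111, 0b101, 3
N_STATES = 1 << (K - 1)

def branch(state, bit):
    """Return (next_state, (v1, v2)) for one encoder transition."""
    reg = (bit << (K - 1)) | state
    v1 = bin(reg & G1).count("1") & 1
    v2 = bin(reg & G2).count("1") & 1
    return reg >> 1, (v1, v2)

def viterbi(received_pairs, n_msg_bits):
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)     # start in the all-zero state
    paths = [[] for _ in range(N_STATES)]       # survivor input bits per state
    for r in received_pairs:
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                ns, out = branch(s, bit)
                m = metric[s] + (out[0] ^ r[0]) + (out[1] ^ r[1])   # add branch distance
                if m < new_metric[ns]:                              # keep only the survivor
                    new_metric[ns], new_paths[ns] = m, paths[s] + [bit]
        metric, paths = new_metric, new_paths
    # The encoder was flushed with zeros, so the ML path must end in state 0.
    return paths[0][:n_msg_bits], metric[0]

rx = [(1, 1), (0, 1), (1, 1), (0, 1), (0, 0), (1, 0), (1, 1)]   # assumed word, 1 bit error
print(viterbi(rx, n_msg_bits=5))                                 # should recover 1 1 1 0 1
```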

Example of using the Viterbi algorithm
Assume the received sequence given in the figure and the (n,k,L) = (2,1,2) encoder shown below. Determine the Viterbi-decoded output sequence! (Note that for this encoder the code rate is 1/2 and the memory depth is L = 2.)

The maximum-likelihood path
At each node the branch with the smaller accumulated metric is selected. After the register length L + 1 = 3 the branch pattern begins to repeat; the first depth with two entries per node is where the survivor comparisons start. (Branch Hamming distances are shown in parentheses; black circles denote the deleted branches, and dashed lines indicate that a '1' was applied.)
The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the respective decoded message sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum-distance path.

How to end the decoding?
In the previous example it was assumed that the register was finally filled with zeros, which guarantees finding the minimum-distance path. In practice, with long code words, such zeroing requires feeding a long sequence of zeros at the end of the message bits, which wastes channel capacity and introduces delay. To avoid this, path-memory truncation is applied: trace all the surviving paths back to the depth where they merge. The figure on the right shows a common point at a memory depth J. J is a random variable whose magnitude shown in the figure (5L) has been found experimentally to cause a negligible error-rate increase. Note that this also introduces a delay of 5L!

Error rate of convolutional codes: weight spectrum and error-event probability
The error rate depends on
- the channel SNR,
- the input sequence length (the number of errors is scaled to the sequence length), and
- the code trellis topology.
These determine which path in the trellis is followed while decoding. An error event happens when the decoder follows an erroneous path. All error-producing paths have a distance of at least d_free, so there exists an upper bound on the probability of following any of the erroneous paths (the error-event probability):
p_e ≤ Σ_{d = d_free}^{∞} a_d P_d,
where P_d is the probability of selecting a path at Hamming distance d and a_d is the number of paths (the weight spectrum) at Hamming distance d.

Selected convolutional code gains
The probability P_d of selecting a path at Hamming distance d depends on the decoding method. For antipodal (polar) signaling in an AWGN channel it is
P_d = Q(√(2 d R_c E_b/N_0)),
which can be further simplified for low-error-probability channels by remembering that Q(x) ≤ (1/2) exp(-x²/2); the following bound then works well:
P_d ≤ (1/2) exp(-d R_c E_b/N_0).
(Table: selected convolutional codes and their associated coding gains G_c = R_c d_free / 2, with d_f = d_free.)
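The short sketch below evaluates P_d and the resulting error-event union bound numerically. The weight spectrum used (a_d = 2^(d-5) for d ≥ 5) is that of the assumed (7,5)_8 code, and the Eb/N0 value is an arbitrary illustration.

```python
# Sketch: pairwise path probability P_d = Q(sqrt(2 d Rc Eb/N0)) for soft-decision decoding
# of antipodal signaling, and the error-event union bound p_e <= sum_d a_d * P_d.
# The weight spectrum a_d below is that of the (7,5)_8 code (d_free = 5), used as an example.
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

Rc = 0.5                                        # code rate
EbN0 = 10 ** (4.0 / 10)                         # Eb/N0 of 4 dB (assumed)

a = {d: 2 ** (d - 5) for d in range(5, 15)}     # a_d: paths at Hamming distance d

pe = sum(ad * Q(math.sqrt(2 * d * Rc * EbN0)) for d, ad in a.items())
pe_exp = sum(ad * 0.5 * math.exp(-d * Rc * EbN0) for d, ad in a.items())   # exponential bound

print(f"error-event bound (Q-function):  {pe:.3e}")
print(f"error-event bound (exponential): {pe_exp:.3e}")
```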

The error-weighted distance spectrum and the bit-error rate
The BER is obtained by weighting the error-event probability by the number of data-bit errors associated with each error event. Therefore the BER is upper bounded (for instance, for polar signaling) by
p_b ≤ Σ_{d = d_free}^{∞} e_d P_d,
where e_d is the error-weighted distance spectrum: the product of a_d, the number of paths (the weight spectrum) at Hamming distance d, and the number of data-bit errors on a path at Hamming distance d.
Note: this bound is very loose for low-SNR channels. It has been found by simulations that partial bounds, e.g. taking 3 to 10 terms of the summation in the p_b expression above, yield good estimates at error rates around BER < 10^-2.
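To make the bound concrete, the sketch below evaluates the partial BER bound with only the first few terms, as the note above suggests. The error-weighted spectrum values e_d = 1, 4, 12, 32, 80 for d = 5..9 correspond to the assumed (7,5)_8 code and are used here purely as an example.

```python
# Sketch: partial BER union bound p_b <= sum_d e_d * P_d for soft-decision decoding with
# polar signaling, keeping only the first five terms of the assumed error-weighted spectrum.
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

Rc = 0.5
e = {5: 1, 6: 4, 7: 12, 8: 32, 9: 80}           # e_d: error-weighted distance spectrum

for EbN0_dB in (4.0, 6.0, 8.0):
    EbN0 = 10 ** (EbN0_dB / 10)
    pb = sum(ed * Q(math.sqrt(2 * d * Rc * EbN0)) for d, ed in e.items())
    print(f"Eb/N0 = {EbN0_dB:.0f} dB: BER bound <= {pb:.3e}")
```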