Dr. Muqaibel \ EE430 Convolutional Codes 1 Convolutional Codes.



Dr. Muqaibel \ EE430 Convolutional Codes 2 Basic Definitions
k = 1, n = 2: a (2,1), rate-1/2 convolutional code.
Two-stage shift register (M = 2).
Each input bit influences the output for 3 intervals (K = 3).
K = constraint length of the code = M + 1.

Dr. Muqaibel \ EE430 Convolutional Codes 3 Generator Polynomial
A convolutional code may be defined by a set of n generator polynomials, one for each output bit. For the circuit under consideration:
g1(D) = 1 + D + D^2
g2(D) = 1 + D^2
The set {gi(D)} defines the code completely. The length of the shift register equals the highest degree among the generator polynomials.
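The encoder defined by these two polynomials is easy to simulate. Below is a minimal Python sketch (the function name conv_encode is illustrative, not from the slides): each input bit is combined with the two register stages according to g1 and g2, producing two output bits per input bit.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, g1(D)=1+D+D^2, g2(D)=1+D^2 (K=3)."""
    s1 = s2 = 0                      # two-stage shift register (M = 2), zeroed
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)      # g1 taps: current bit, D, D^2
        out.append(u ^ s2)           # g2 taps: current bit, D^2
        s1, s2 = u, s1               # shift the register
    return out

# Each input bit produces n = 2 output bits and influences K = 3 output pairs.
print(conv_encode([1, 0, 1, 1]))     # -> [1, 1, 1, 0, 0, 0, 0, 1]
```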

Dr. Muqaibel \ EE430 Convolutional Codes 4 State Diagram Representation
The output depends on the current input and the state of the encoder (i.e., the contents of the shift register).
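The state diagram can be tabulated programmatically. The sketch below (the helper name transitions is illustrative) enumerates the four register states (s1, s2) and, for each input bit, gives the output pair and the next state; this is exactly the information the state diagram encodes.

```python
def transitions():
    """Tabulate the state diagram: (state, input) -> (output pair, next state)."""
    table = {}
    for s1 in (0, 1):
        for s2 in (0, 1):
            for u in (0, 1):
                out = (u ^ s1 ^ s2, u ^ s2)          # (g1 output, g2 output)
                table[(s1, s2), u] = (out, (u, s1))  # register shifts by one
    return table

t = transitions()
print(t[(0, 0), 1])   # from state 00, input 1: -> ((1, 1), (1, 0))
```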

Dr. Muqaibel \ EE430 Convolutional Codes 5 Trellis Diagram Representation
The trellis is an expansion of the state diagram in time.

Dr. Muqaibel \ EE430 Convolutional Codes 6 Decoding
A message m is encoded into the code sequence c. Each code sequence represents a path in the trellis diagram.
Minimum distance decoding: upon receiving the sequence r, search for the path that is closest (in Hamming distance) to r.
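Minimum distance decoding can be stated directly as a search over all candidate messages. The sketch below (function names are illustrative, and the received sequence is a made-up example, not the one from the slides) brute-forces all k-bit messages; this is feasible only for tiny k, which is what makes the Viterbi algorithm on the following slides valuable.

```python
from itertools import product

def encode(bits):
    # rate-1/2 encoder from the earlier slides: g1 = 1 + D + D^2, g2 = 1 + D^2
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def mdd(r, k):
    """Exhaustive minimum distance decoding over all k-bit messages."""
    return min(product((0, 1), repeat=k),
               key=lambda m: hamming(encode(list(m)), r))

# Made-up received sequence: the codeword of 1011 with one bit flipped
r = encode([1, 0, 1, 1])
r[2] ^= 1
print(mdd(r, 4))   # -> (1, 0, 1, 1)
```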

Dr. Muqaibel \ EE430 Convolutional Codes 7 The Viterbi Algorithm
Walk through the trellis and compute the Hamming distance between each branch of r and the corresponding branches in the trellis. At each level, consider the two paths that enter the same node; they are identical from this node onwards. Of these two paths, the one that is closer to r at this stage will remain closer at every later stage, so it is retained and the other path is discarded. Proceeding this way, one path is saved for each node at each stage; these paths are called the survivors. The decoded sequence (based on minimum distance decoding) is guaranteed to be one of these survivors.

Dr. Muqaibel \ EE430 Convolutional Codes 8 The Viterbi Algorithm (cont'd)
Each survivor is associated with a metric: the accumulated Hamming distance up to this stage. Carry out this process until the entire received sequence has been processed, then choose the survivor with the smallest metric.
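The procedure on these two slides can be sketched as follows, assuming the rate-1/2, K = 3 code used throughout (g1 = 1 + D + D^2, g2 = 1 + D^2) and hard-decision Hamming metrics; function and variable names are illustrative. At each stage the decoder adds branch metrics, compares the paths entering each state, and keeps the survivor.

```python
def encode(bits):
    # rate-1/2 encoder: g1 = 1 + D + D^2, g2 = 1 + D^2
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def viterbi_decode(r):
    """Hard-decision Viterbi decoding; r is the flat received bit list."""
    INF = float("inf")
    metric = [0, INF, INF, INF]     # state index = s1*2 + s2; start in state 00
    paths = [[], [], [], []]
    for i in range(0, len(r), 2):
        r0, r1 = r[i], r[i + 1]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue                              # state not yet reachable
            s1, s2 = s >> 1, s & 1
            for u in (0, 1):
                o0, o1 = u ^ s1 ^ s2, u ^ s2          # expected branch output
                ns = (u << 1) | s1                    # next state
                m = metric[s] + (o0 != r0) + (o1 != r1)  # add
                if m < new_metric[ns]:                # compare-select
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])     # smallest final metric
    return paths[best]

# One channel error is corrected:
c = encode([1, 0, 1, 1, 0, 0])       # message flushed with M = 2 zeros
c[4] ^= 1                            # flip one received bit
print(viterbi_decode(c))             # -> [1, 0, 1, 1, 0, 0]
```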

6.3 The Viterbi Algorithm: The Viterbi algorithm is used to decode convolutional codes, and more generally any structure or system that can be described by a trellis. It is a maximum likelihood decoding algorithm: it selects the path through the trellis that maximizes the likelihood function. At each state and each time step, the algorithm performs an add-compare-select operation to keep the best path.

Example: For the convolutional code example in the previous lecture, starting from state zero, decode the following received sequence.
Add the weight of the path at each state.
Compute the two possible paths entering each state and select the one with the smaller cumulative Hamming weight; this is called the survivor path.
At the end of the trellis, select the path with the minimum cumulative Hamming weight; this is the survivor path in this example.
Decoded sequence: m = [ ]

Dr. Muqaibel \ EE430 Convolutional Codes 11 Distance Properties of Conv. Codes
Def: The free distance, d_free, is the minimum Hamming distance between any two code sequences.
Criteria for good convolutional codes:
– Large free distance, d_free.
– Small d_inf, i.e. as few differences as possible between the input information sequences that produce the minimally separated code sequences.
There is no known constructive way of designing a convolutional code with given distance properties. However, a given code can be analyzed to find its distance properties.

Dr. Muqaibel \ EE430 Convolutional Codes 12 Distance Prop. of Conv. Codes (cont'd)
Convolutional codes are linear. Therefore, the Hamming distance between any pair of code sequences equals the Hamming distance between the all-zero code sequence and some nonzero code sequence. Thus, for a study of the distance properties, it suffices to consider the Hamming distance between the all-zero code sequence and all nonzero code sequences. The nonzero sequence of minimum Hamming weight diverges from the all-zero path at some point and remerges with it at some later point.

Dr. Muqaibel \ EE430 Convolutional Codes 13 Distance Properties: Illustration
Sequence 2: Hamming weight = 5, d_inf = 1.
Sequence 3: Hamming weight = 7, d_inf = 3.
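The free distance of this code can be verified by exhaustive search. By linearity (previous slide), d_free equals the minimum Hamming weight over all nonzero code sequences, so the sketch below (names illustrative) encodes every short nonzero message, flushed with M = 2 zeros so the path returns to the all-zero state, and takes the minimum codeword weight.

```python
from itertools import product

def encode(bits):
    # rate-1/2 encoder: g1 = 1 + D + D^2, g2 = 1 + D^2
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def d_free(max_len=8):
    """Minimum codeword weight over nonzero messages up to max_len bits,
    each flushed with M = 2 zeros so the path returns to state 00."""
    weights = (sum(encode(list(m) + [0, 0]))
               for k in range(1, max_len + 1)
               for m in product((0, 1), repeat=k)
               if any(m))
    return min(weights)

print(d_free())   # -> 5
```

The minimizing path is the single input 1 followed by two flushing zeros, whose codeword 11 10 11 has weight 5, matching sequence 2 above.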

Dr. Muqaibel \ EE430 Convolutional Codes 14 Modified State Diagram
The span of interest of a nonzero path starts when it leaves the 00 state and ends when it first returns to the 00 state. Split the 00 state (state a) into two states: a0 and a1. The branches are labeled with the dummy variables D, L and N, where:
– The power of D is the Hamming weight (number of 1s) of the output corresponding to that branch.
– The power of N is the Hamming weight (number of 1s) of the information bit(s) corresponding to that branch.
– The power of L is the length of the branch (always 1).

Dr. Muqaibel \ EE430 Convolutional Codes 15 Modified State Diagram (cont'd)

Dr. Muqaibel \ EE430 Convolutional Codes 16 Properties of the Path
Sequence 2: state sequence a0 b c a1.
Label: (D^2 L N)(D L)(D^2 L) = D^5 L^3 N.
Properties: w = 5, d_inf = 1; diverges from the all-zero path for 3 branches.
Sequence 3: state sequence a0 b d c b c a1.
Label: (D^2 L N)(D L N)(D L)(L N)(D L)(D^2 L) = D^7 L^6 N^3.
Properties: w = 7, d_inf = 3; diverges from the all-zero path for 6 branches.
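The two path labels can be checked mechanically by accumulating the exponents of D, L and N branch by branch. The sketch below assumes the state convention (newest bit, oldest bit) with a = 00, b = 10, d = 11, c = 01, so each path is specified by its input bits; the helper names are illustrative.

```python
def branch_label(state, u):
    """Exponents (D, L, N) of one branch: D = output weight, L = 1, N = input."""
    s1, s2 = state
    out_weight = (u ^ s1 ^ s2) + (u ^ s2)   # g1 and g2 output bits
    return (out_weight, 1, u)

def path_label(inputs):
    """Accumulate (D, L, N) exponents along a path starting from state 00."""
    state, d, l, n = (0, 0), 0, 0, 0
    for u in inputs:
        bd, bl, bn = branch_label(state, u)
        d, l, n = d + bd, l + bl, n + bn
        state = (u, state[0])                # shift the register
    return (d, l, n)

# Sequence 2 (a0 b c a1) and sequence 3 (a0 b d c b c a1), as input bits:
print(path_label([1, 0, 0]))            # -> (5, 3, 1), i.e. D^5 L^3 N
print(path_label([1, 1, 0, 1, 0, 0]))   # -> (7, 6, 3), i.e. D^7 L^6 N^3
```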

Dr. Muqaibel \ EE430 Convolutional Codes 17 Transfer Function
Input-output relations:
a0 = 1
b = D^2 L N a0 + L N c
c = D L b + D L d
d = D L N b + D L N d
a1 = D^2 L c
The transfer function is T(D, L, N) = a1 / a0.

Dr. Muqaibel \ EE430 Convolutional Codes 18 Transfer Function (cont'd)
Performing long division:
T = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + D^7 L^5 N^3 + ...
If only the Hamming distance properties of the code are of interest, set N = 1 and L = 1 to get the distance transfer function:
T(D) = D^5 + 2D^6 + 4D^7 + ...
There is one code sequence of weight 5; therefore d_free = 5. There are two code sequences of weight 6, four code sequences of weight 7, and so on.
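Eliminating b, c and d from the input-output relations with N = L = 1 gives the closed form T(D) = D^5 / (1 - 2D), and the long division can then be reproduced as a formal power-series division. The sketch below (function name illustrative; the denominator is assumed to start with 1) recovers the coefficients 1, 2, 4, 8, ...: there are 2^k code sequences of weight 5 + k.

```python
def series_coeffs(num, den, n_terms):
    """First n_terms power-series coefficients of num(D)/den(D), den[0] = 1."""
    num = num + [0] * n_terms         # pad so the division can run long enough
    q = []
    for i in range(n_terms):
        c = num[i]                    # next quotient coefficient
        q.append(c)
        for j in range(1, len(den)):  # subtract c * den(D) * D^i from num
            if i + j < len(num):
                num[i + j] -= c * den[j]
    return q

# T(D) = D^5 / (1 - 2D): numerator D^5, denominator 1 - 2D
print(series_coeffs([0, 0, 0, 0, 0, 1], [1, -2], 9))
# -> [0, 0, 0, 0, 0, 1, 2, 4, 8]
```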