
1 Code design: Computer search
- Low rate: Represent the code by its generator matrix
- Find one representative for each equivalence class of codes
  - Permutation equivalences?
  - Do not try several generator matrices for the same code?
- Avoid nonminimal matrices (and of course catastrophic ones)

2 Code design: Computer search (2)*
- High rate codes
  - Represent the code by its parity check matrix
  - Find one representative for each equivalence class of codes
  - Avoid nonminimal matrices
- Problem: Branch complexity of rate k/n codes
- One solution: Use a bit-oriented trellis
  - Can limit the state complexity of the bit-level trellis to ν + δ, 0 ≤ δ ≤ min(n-k, k)
- Reference: E. Rosnes and Ø. Ytrehus, "Maximum length convolutional codes under a trellis complexity constraint," Journal of Complexity 20 (2004)

3 Code design / Bounds on codes*
- Heller bound: d_free ≤ the best minimum distance of any block code with the same parameters as any truncated/terminated code
- McEliece: For rate (n-1)/n codes with d_free = 5, n ≤ 2^((ν+1)/2)
- Sphere packing bounds
  - Apply to codes of odd d_free
  - Similar to the Hamming/sphere packing bound for block codes
  - Corollary: For rate (n-1)/n codes with d_free = 5, n ≤ 2^(ν/2)
- Reference: E. Rosnes and Ø. Ytrehus, "Sphere-packing bounds for convolutional codes," IEEE Trans. on Information Theory (2004)
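The Heller bound above can be evaluated numerically. A minimal sketch in Python, assuming the standard closed form for a rate 1/n code with memory m, d_free ≤ min over i ≥ 1 of ⌊2^(i-1)/(2^i - 1) · (m + i) · n⌋ (the function name is illustrative):

```python
def heller_bound(n, m, imax=20):
    """Heller upper bound on d_free for a rate 1/n convolutional code
    with memory m, minimized over truncation depths i = 1..imax."""
    return min((2**(i - 1) * (m + i) * n) // (2**i - 1)
               for i in range(1, imax + 1))

# The standard rate 1/2, memory-2 code (octal (7,5)) has d_free = 5,
# and the rate 1/2, memory-6 code (octal (171,133)) has d_free = 10;
# both meet the bound:
print(heller_bound(2, 2))  # -> 5
print(heller_bound(2, 6))  # -> 10
```

Both example codes are optimum in the sense that their free distance equals the Heller bound.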

4 QLI codes
- Skip this.

5 Code design: Construction from block codes
- Justesen:
  - Constr. 12.1 ...skip
  - Constr. 12.2 ...skip
- Other attempts
  - Wyner code, d=3
  - Generalizations, d=3 and 4. Can be shown to be optimum
  - Reference: E. Rosnes and Ø. Ytrehus, "Maximum length convolutional codes under a trellis complexity constraint," Journal of Complexity 20 (2004)
- Other algebraic ("BCH-like") constructions (not in book)
  - Warning: Messy

6 Implementation issues
- Decoder memory
- Metrics dynamic range (normalization)*
- Path memory
- Decoder synchronization
- Receiver quantization

7 Decoder memory
- Limit to how large ν can be
- In practice ν is seldom more than 4 (16 states)
- There exists an implementation with ν = 14 (BVD)

8 Dynamic range of metrics
- Normalize metric values after each time instant!
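To illustrate, a minimal Python sketch of the normalization alone (the add-compare-select step is omitted so only the renormalization is visible). Subtracting the smallest metric each step keeps the values in a bounded range without changing any decision, because the Viterbi algorithm compares only metric differences:

```python
def normalize(metrics):
    """Subtract the smallest metric: survivors are chosen by metric
    differences only, so this never changes a comparison."""
    m0 = min(metrics)
    return [m - m0 for m in metrics]

# Toy accumulation over three time steps for a four-state decoder,
# using made-up branch metrics:
metrics = [0, 0, 0, 0]
for branch in ([2, 5, 3, 7], [4, 1, 6, 2], [3, 3, 2, 5]):
    metrics = normalize([m + b for m, b in zip(metrics, branch)])
print(metrics)  # -> [0, 0, 2, 5]
```

The best state always sits at metric 0, so fixed-width registers suffice regardless of sequence length.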

9 Path memory
- For large information length: Impractical to store the complete path before backtracing
  - Requires a large amount of memory
  - Imposes delay
- Practical solution: Select an integer τ. After τ blocks, start backtracing by either
  - starting from the survivor in an arbitrary state, or from the survivor in the state with the best metric, or
  - trying backtracing from all states, and selecting the first k-bit block that occurs most frequently in these backtracings
- This determines the first k-bit information block. Subsequent information blocks are decoded successively in the same way
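The best-metric variant of this traceback can be sketched as follows; the data layout (survivor and input tables indexed by time and state) is a hypothetical choice for this illustration:

```python
def truncated_decision(survivors, inputs, metrics, tau):
    """Release the oldest information bit after a traceback of depth tau.
    survivors[t][s]: predecessor state of state s at time t (hypothetical layout)
    inputs[t][s]:    information bit on the branch entering state s at time t"""
    T = len(survivors)                                     # current time
    s = min(range(len(metrics)), key=metrics.__getitem__)  # best-metric state
    for t in range(T - 1, T - tau, -1):                    # follow tau - 1 predecessor links
        s = survivors[t][s]
    return inputs[T - tau][s]                              # only the oldest bit is decided

# Toy 2-state example where the state index equals the most recent input bit:
survivors = [[0, 0], [0, 1], [1, 0], [0, 1]]
inputs = [[0, 1]] * 4
print(truncated_decision(survivors, inputs, [1, 3], tau=3))  # -> 1
```

After each released block the decoder advances one step and repeats, so only τ columns of path memory are ever stored.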

10 Truncation of the decoder
- Not maximum likelihood
- The truncated decoder can make two kinds of errors:
- E_ML = the kind of error that an ML decoder would also make
  - Associated with low-weight paths (paths of weight d_free, d_free+1, ...)
  - P(E_ML) ≈ A_{d_free} · P_{d_free}, where P_d depends on the channel and decreases rapidly as the argument d increases
- E_T = the kind of error that is due to truncation, and which an ML decoder would not make
  - Why would an ML decoder not make those errors? Because if the decoding decision is allowed to "mature", the erroneous codewords will not be preferred over the correct one. The truncated decoder, however, enforces an early decision and thus makes mistakes

11 Truncation length
- Assume the all-zero word is the correct word
- P(E_T) ≈ A_{d(τ)} · P_{d(τ)}, where d(τ) is the minimum weight of a path that leaves state zero at time 1 and is still unmerged at time τ
- Want to select τ such that P(E_ML) >> P(E_T)
- The minimum length τ such that d(τ) > d_free is called the truncation length. It is often 4-5 times the memory m
- Determined e.g. by a modified Viterbi algorithm
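The modified Viterbi computation of d(τ) amounts to a shortest-path recursion over the trellis in which remerging with state zero is forbidden. A sketch for the standard (7,5) rate-1/2, memory-2 code, with the transition table written out by hand for this example:

```python
# trans[s][u] = (next_state, branch_weight) for the (7,5) code;
# the state is the pair of most recent input bits.
trans = {0: {0: (0, 0), 1: (2, 2)},
         1: {0: (0, 2), 1: (2, 0)},
         2: {0: (1, 1), 1: (3, 1)},
         3: {0: (1, 1), 1: (3, 1)}}

def d_tau(tau):
    """Minimum weight of a path leaving state 0 at time 1 that is
    still unmerged (has not returned to state 0) at time tau."""
    dist = {2: 2}                        # after the diverging branch (input 1)
    for _ in range(tau - 1):
        new = {}
        for s, w in dist.items():
            for u in (0, 1):
                ns, bw = trans[s][u]
                if ns != 0:              # exclude remerged paths
                    new[ns] = min(new.get(ns, w + bw), w + bw)
        dist = new
    return min(dist.values())

print([d_tau(t) for t in range(1, 9)])  # -> [2, 3, 3, 4, 4, 5, 5, 6]
```

Here d(τ) first exceeds d_free = 5 at τ = 8 = 4m, consistent with the 4-5 times m rule of thumb on the slide.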

12 Example

13 R=1/2, ν=4 code, varying path memory

14 Synchronization
- Symbol synchronization: Which n bits belong together as a branch label?
  - Metric evaluation: If metrics are consistently bad, this indicates loss of synchronization
  - May be enhanced by code construction*
  - May embed special synchronization patterns in the transmitted sequence
  - What if the noise is bad?
- Decoder: Do we know that the encoder starts in the zero state? Not always
  - Solution: Discard the first (5m) branches
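The metric-based detection can be sketched as a running average of the best-path metric increments; the window length and threshold below are hypothetical tuning parameters, not values from the lecture:

```python
from collections import deque

def sync_monitor(increments, window=8, threshold=1.5):
    """Declare loss of symbol synchronization when the best-path metric
    grows consistently too fast over a sliding window (hypothetical
    window/threshold; real values depend on the channel statistics)."""
    recent = deque(maxlen=window)
    for t, inc in enumerate(increments):
        recent.append(inc)
        if len(recent) == window and sum(recent) / window > threshold:
            return t            # time at which sync loss is declared
    return None                 # metrics stayed healthy

# Good metrics followed by consistently bad ones trips the monitor:
print(sync_monitor([0, 1, 0, 0, 1, 0] + [2] * 8))  # -> 11
```

On a flagged loss the receiver would typically slip the symbol boundary by one bit and restart the decoder.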

15 Quantization
- Skip this

16 Other complexity issues
- Branch complexity (of high rate encoders)
  - Punctured codes
  - Bit level encoders (equivalence to punctured codes)*
  - Syndrome trellis decoding (S. Riedel)*
- Speedup: Parallel processing
  - One processor per node
  - Several time instants at a time / pipelining (Fettweis)*
  - Differential Viterbi decoding
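Puncturing, mentioned above, deletes coded bits according to a periodic pattern so that a high-rate code can be decoded on the low-complexity trellis of its rate-1/2 mother code. A minimal sketch; the pattern [1, 1, 1, 0], read over the serialized coded stream, turns rate 1/2 into rate 2/3:

```python
def puncture(bits, pattern):
    """Keep bits[i] where the periodic pattern has a 1 at i mod len(pattern)."""
    return [b for i, b in enumerate(bits) if pattern[i % len(pattern)]]

# 8 coded bits (4 information bits at rate 1/2); keeping 6 of every 8
# gives rate 4/6 = 2/3:
print(puncture([1, 1, 0, 1, 1, 0, 0, 1], [1, 1, 1, 0]))  # -> [1, 1, 0, 1, 0, 0]
```

At the decoder, the punctured positions are re-inserted as erasures (zero branch metric) and the ordinary rate-1/2 Viterbi decoder is run unchanged.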

17 SISO decoding algorithms
- Two new elements: Soft-in, Soft-out (SISO)
- Allows non-uniform input probability
  - The a priori information bit probabilities P(u_l) may be different
  - Applications: Turbo decoding
- Two most used algorithms:
  - The Soft Output Viterbi Algorithm (SOVA)
  - The BCJR algorithm (APP, MAP)
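The non-uniform a priori probabilities enter most naturally as log-likelihood ratios that simply add to the channel values. A minimal sketch (function names are illustrative; a full SISO decoder would in addition produce extrinsic soft output computed from the code constraints):

```python
import math

def llr_from_p0(p0):
    """A priori log-likelihood ratio L(u) = ln(P(u=0) / P(u=1));
    positive values favour u = 0, and a uniform prior gives L = 0."""
    return math.log(p0 / (1.0 - p0))

def siso_decision(l_channel, l_apriori):
    """Hard decision from the combined soft inputs (sketch only)."""
    total = l_channel + l_apriori
    return 0 if total >= 0 else 1

# A weak channel observation is overruled by a confident a priori value,
# e.g. one supplied by the other constituent decoder in a turbo scheme:
print(siso_decision(-0.3, llr_from_p0(0.9)))  # -> 0
```

With a uniform prior the same channel value would decide the other way, which is exactly the leverage turbo decoding exploits.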

18 SOVA

19 SOVA (continued)

20 SOVA (continued)

21 SOVA (continued)

22 SOVA Update

23 SOVA
- Finite path memory: Straightforward
- Complexity vs. the Viterbi algorithm: Modest increase
  - Storage complexity: Need to store reliability vectors in addition to metrics and survivor pointers
  - Need to update the reliability vector
- Provides ML decoding (minimizes WER) and a reliability measure
- Not MAP: Does not minimize BER
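The reliability-vector update mentioned above can be sketched as follows: at each merge, every position where the discarded competitor path disagrees with the survivor has its reliability capped by the metric difference Δ of the merge. This is only the core update rule, not the full SOVA bookkeeping (flat-list path layout is a hypothetical simplification):

```python
def sova_update(reliability, delta, survivor_bits, competitor_bits):
    """At a merge with metric difference delta, reduce the reliability
    of each position where the competitor disagrees with the survivor
    to min(current reliability, delta)."""
    return [min(r, delta) if sb != cb else r
            for r, sb, cb in zip(reliability, survivor_bits, competitor_bits)]

rel = [9.0, 9.0, 9.0]                       # large initial reliabilities
rel = sova_update(rel, 1.5, [1, 0, 1], [1, 1, 0])  # disagreement at positions 1, 2
print(rel)  # -> [9.0, 1.5, 1.5]
```

The final soft output for bit l is its reliability signed by the hard decision, which is what the next decoder stage consumes in a turbo scheme.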

24 Suggested exercises