1 Reliability-Based SD Decoding
- Not limited to graph-based codes; may even exploit some algebraic structure
- An SD alternative to trellis decoding and to iterative decoding
- But ML soft-decision decoding of general codes is hard to implement
- Performance: somewhere between HD decoding and full (ML) SD decoding

2 Correlation Discrepancy
- Send a binary codeword (v_0, ..., v_{n-1}), modulate it into the bipolar sequence (c_0, ..., c_{n-1}), and receive the real vector (r_0, ..., r_{n-1})
- P(r_i | v_i) = K · e^{-(r_i - c_i)^2 / N_0}
- P(r_i | v_i = 1) / P(r_i | v_i = 0) = e^{-(r_i - 1)^2 / N_0} / e^{-(r_i + 1)^2 / N_0}, so log(P(r_i | v_i = 1) / P(r_i | v_i = 0)) ∝ r_i
- Receive r. Decode to the codeword that minimizes Σ_i (r_i - c_i)^2 = Σ_i r_i^2 + n - 2 Σ_i r_i·c_i
- Equivalently, maximize the correlation m(r, v) = Σ_i r_i·c_i = Σ_i |r_i| - 2 Σ_{i : r_i·c_i < 0} |r_i|
- Equivalently, minimize the correlation discrepancy λ(r, v) = Σ_{i : r_i·c_i < 0} |r_i|
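A minimal numpy sketch (not from the slides) of the correlation-discrepancy metric defined above, assuming the usual bipolar mapping c_i = 2v_i - 1 with unit amplitude; the example vectors are arbitrary.

```python
import numpy as np

def correlation_discrepancy(r, v):
    """lambda(r, v): sum of |r_i| over positions where the bipolar symbol of v
    disagrees in sign with the received value r_i."""
    c = 2 * np.asarray(v) - 1                  # bipolar mapping 0 -> -1, 1 -> +1 (assumed)
    r = np.asarray(r, dtype=float)
    return np.abs(r)[r * c < 0].sum()

# Minimizing the discrepancy is equivalent to maximizing the correlation
# sum_i r_i * c_i, and hence to minimizing the squared Euclidean distance.
r = np.array([0.9, -0.2, 1.1, -0.7])
for v in ([0, 0, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]):
    c = 2 * np.array(v) - 1
    print(v, correlation_discrepancy(r, v), np.sum((r - c) ** 2))
```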

3 Reliability Measures and Decoding
- Consider the received vector r
- For each received symbol r_i, form the hard decision z_i
- Reliability of z_i: |log(P(r_i | v_i = 1) / P(r_i | v_i = 0))| ∝ |r_i|
- As can be expected, z_i is more likely to be in error when |r_i| is small:
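The hard decisions and reliabilities are formed directly from r; a small sketch with made-up values, again assuming the bipolar mapping above:

```python
import numpy as np

r = np.array([0.9, -0.2, 1.1, -0.7, 0.05, -1.3])   # received vector (illustrative)

z = (r > 0).astype(int)          # hard decisions z_i (0 -> -1, 1 -> +1 assumed)
reliability = np.abs(r)          # |r_i| is the reliability of z_i

order = np.argsort(reliability)  # positions from least to most reliable
print("hard decisions:          ", z)
print("least reliable positions:", order[:3])      # candidates for the LRP sets
print("most reliable positions: ", order[::-1][:3])
```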

4 Probability of errors in z_i: LRP vs. MRP

5 Reliability and Decoding: LRP
Decoding based on the set of Least Reliable Positions (LRPs):
- Assume that errors are more likely to occur in the LRPs
- Select a set E of error patterns e confined to the LRPs
- For each e ∈ E, form the modified received vector z + e
- Decode z + e into a codeword c(e) ∈ C using an efficient algebraic decoder
- The preceding steps give a list of candidate codewords. The final decoding step is to compare each of these codewords with r and select the one that is closest in terms of squared Euclidean distance
- Performance: depends on |E|
- Complexity: depends on |E| and on the algebraic decoder
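A minimal Python sketch of this LRP reprocessing loop, not a definitive implementation: the algebraic hard-decision decoder is taken as a caller-supplied function, and num_lrps / max_weight are illustrative parameters controlling the size of E.

```python
import numpy as np
from itertools import combinations

def lrp_list_decode(r, algebraic_decode, num_lrps=4, max_weight=2):
    """Least-Reliable-Positions reprocessing, a minimal sketch.

    algebraic_decode : callable mapping a binary word to a codeword, or None on
                       failure (an efficient hard-decision decoder, assumed given)
    num_lrps, max_weight : illustrative parameters controlling the size of E
    """
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)                      # hard-decision word
    lrps = np.argsort(np.abs(r))[:num_lrps]      # least reliable positions

    best, best_metric = None, np.inf
    patterns = [()] + [p for w in range(1, max_weight + 1)
                       for p in combinations(lrps, w)]
    for p in patterns:                           # error patterns confined to the LRPs
        test = z.copy()
        test[list(p)] ^= 1                       # modified received word z + e
        cand = algebraic_decode(test)
        if cand is None:
            continue
        metric = np.sum((r - (2 * np.asarray(cand) - 1)) ** 2)  # squared Euclidean distance
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```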

6 Reliability and Decoding: MRP
Decoding based on the set of Most Reliable Positions (MRPs):
- Assume that errors are less likely to occur in the MRPs
- Select a set I of k independent MRPs (MRIPs)
- Select a set E of error patterns e with 1s confined to the k MRIPs
- For each e ∈ E, form the modified information vector z_k + e and encode it into a codeword c(e) ∈ C
- The preceding steps give a list of candidate codewords. The final decoding step is to compare each of these codewords with r and select the one that is closest in terms of squared Euclidean distance
- Performance and complexity: depend on |E|
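A sketch of this MRP (ordered-statistics-style) reprocessing under the same assumptions as before: the k most reliable independent positions are found by GF(2) elimination on the generator matrix, and max_flips is an illustrative parameter.

```python
import numpy as np
from itertools import combinations

def mrp_list_decode(r, G, max_flips=2):
    """Most-Reliable-Independent-Positions reprocessing, a minimal sketch.

    r         : received real vector of length n
    G         : k x n binary generator matrix (numpy array of 0/1 ints)
    max_flips : test all error patterns of weight <= max_flips on the k MRIPs
    """
    k, n = G.shape
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)                 # hard decisions (0 -> -1, 1 -> +1 assumed)
    order = np.argsort(-np.abs(r))          # positions sorted by decreasing reliability

    # Find the k most reliable *independent* positions and bring G to a form
    # whose restriction to those columns is the identity (GF(2) elimination).
    Gr = (G.copy() % 2).astype(int)
    mrips, row = [], 0
    for col in order:
        pivot = next((i for i in range(row, k) if Gr[i, col]), None)
        if pivot is None:
            continue                        # column depends on the ones already chosen
        Gr[[row, pivot]] = Gr[[pivot, row]]
        for i in range(k):
            if i != row and Gr[i, col]:
                Gr[i] ^= Gr[row]
        mrips.append(col)
        row += 1
        if row == k:
            break

    z_k = z[mrips]                          # hard decisions on the MRIPs
    best, best_disc = None, np.inf
    patterns = [()] + [p for w in range(1, max_flips + 1)
                       for p in combinations(range(k), w)]
    for flips in patterns:                  # error patterns confined to the MRIPs
        info = z_k.copy()
        info[list(flips)] ^= 1              # modified information vector z_k + e
        cand = info @ Gr % 2                # re-encode into a codeword
        # correlation discrepancy of the candidate with respect to r
        disc = np.abs(r)[(2 * cand - 1) * r < 0].sum()
        if disc < best_disc:
            best, best_disc = cand, disc
    return best
```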

7 Condition on Optimality
In both of the preceding algorithms, whenever we find a codeword that is "good enough", we can terminate the process. What do we mean by good enough? We need an optimality condition.

8 Condition on Optimality (cont.)
- D_0(v) = {i : v_i = z_i, 0 ≤ i < n},  D_1(v) = {i : v_i ≠ z_i, 0 ≤ i < n}
- n(v) = |D_1(v)| = d_H(v, z)
- λ(r, v) = Σ_{i : r_i·c_i < 0} |r_i| = Σ_{i ∈ D_1(v)} |r_i|
- We want to know whether a candidate v* has the lowest correlation discrepancy, i.e. whether λ(r, v*) ≤ λ*(r, v*) = min_{v ∈ C, v ≠ v*} λ(r, v)
- λ*(r, v*) is hard to evaluate, but we can hope to find a lower bound on it
- Let D_0^(j)(v) consist of the j elements of D_0(v) with the lowest reliability |r_i|
- Let w_i be the i-th nonzero weight in the code

9 Condition on Optimality (cont.)
- Thus D_0^(w_j - n(v))(v) consists of the w_j - n(v) elements of D_0(v) with the lowest reliability |r_i|
- Theorem: If λ(r, v) ≤ Σ_{i ∈ D_0^(w_j - n(v))(v)} |r_i|, then the ML codeword for r is at a (Hamming) distance less than w_j from v
- Proof: Assume that v' is a codeword with d_H(v, v') ≥ w_j. Then
  λ(r, v') = Σ_{i ∈ D_1(v')} |r_i| ≥ Σ_{i ∈ D_0(v) ∩ D_1(v')} |r_i| ≥ Σ_{i ∈ D_0^(w_j - n(v))(v)} |r_i| ≥ λ(r, v),
  so no codeword at distance ≥ w_j from v can have a smaller discrepancy than v. The middle inequality holds because |D_0(v) ∩ D_1(v')| ≥ w_j - n(v):
  |D_0(v) ∩ D_1(v')| + |D_1(v) ∩ D_0(v')| = d_H(v, v') ≥ w_j, and hence
  |D_0(v) ∩ D_1(v')| ≥ w_j - |D_1(v) ∩ D_0(v')| ≥ w_j - |D_1(v)| = w_j - n(v)

10 Corollary:
- If λ(r, v) ≤ Σ_{i ∈ D_0^(w_1 - n(v))(v)} |r_i|, then v is the ML codeword.
- If λ(r, v) ≤ Σ_{i ∈ D_0^(w_2 - n(v))(v)} |r_i|, then either v is the ML codeword, or the ML codeword is one of the nearest neighbours of v.
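A small sketch of the first sufficient condition in the corollary, assuming w_1 = d_min (the smallest nonzero codeword weight) is known; the function name and arguments are illustrative.

```python
import numpy as np

def provably_ml(r, v, z, w1):
    """First condition of the corollary: if lambda(r, v) does not exceed the sum
    of the (w1 - n(v)) smallest reliabilities on D_0(v), then v is the ML codeword.
    Here w1 = d_min, the smallest nonzero weight of the code."""
    r, v, z = np.asarray(r, dtype=float), np.asarray(v), np.asarray(z)
    disagree = v != z                           # D_1(v)
    lam = np.abs(r)[disagree].sum()             # correlation discrepancy lambda(r, v)
    n_v = int(disagree.sum())                   # n(v) = d_H(v, z)
    if w1 - n_v <= 0:
        return False                            # bound is vacuous; cannot conclude optimality
    agree_rel = np.sort(np.abs(r)[~disagree])   # reliabilities on D_0(v), ascending
    bound = agree_rel[:w1 - n_v].sum()          # sum over D_0^{(w1 - n(v))}(v)
    return lam <= bound
```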

11 Optimality Condition Based on Two Observed Codewords
A more sophisticated criterion can be applied if we know more than one codeword "close" to r. We skip the details.

12 Generalized Minimum Distance (GMD) Decoding
- Forney (1966)
- Based on erasure decoding: consider two codewords c and c' with d = d_H(c, c'), and assume that c is sent
- Assume an errors-and-erasures channel producing t errors and e erasures
- Then an ML decoder will decode to c if 2t + e ≤ d - 1
- GMD decoding considers all possible patterns of ≤ d_min - 1 erasures in the d_min - 1 LRPs

13 GMD Decoding (on an AWGN Channel)
- From r, derive the HD word z and the reliability word |r|
- Produce a list of ⌊(d_min + 1)/2⌋ partly erased words by erasing:
  - If d_min is even: the least reliable position, the three least reliable, ..., the d_min - 1 least reliable positions
  - If d_min is odd: no position; the two least reliable, the four least reliable, ..., the d_min - 1 least reliable positions
- For each partly erased word in the list, decode using an (algebraic) errors-and-erasures decoding algorithm
- Compute the SD metric of each decoded word w.r.t. r, and select the one closest to r
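A sketch of the GMD trial loop that follows the erasure schedule above; the algebraic errors-and-erasures decoder is assumed to be supplied by the caller as ee_decode.

```python
import numpy as np

def gmd_decode(r, d_min, ee_decode):
    """GMD decoding sketch for an AWGN channel.

    ee_decode : callable (z, erasure_positions) -> codeword or None, an algebraic
                errors-and-erasures decoder for the code (assumed provided).
    """
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)                     # hard-decision word
    order = np.argsort(np.abs(r))               # positions from least to most reliable

    # Number of least reliable positions to erase in each trial:
    # even d_min: 1, 3, ..., d_min - 1;  odd d_min: 0, 2, ..., d_min - 1
    start = 1 if d_min % 2 == 0 else 0
    best, best_metric = None, np.inf
    for num_erasures in range(start, d_min, 2):
        erasures = order[:num_erasures]         # erase the num_erasures LRPs
        cand = ee_decode(z, erasures)
        if cand is None:
            continue
        metric = np.sum((r - (2 * np.asarray(cand) - 1)) ** 2)  # SD metric
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```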

14 The Chase Algorithms: Chase-1
- From r, derive the HD word z and the reliability word |r|
- Produce the set E of all error patterns of weight exactly ⌊d_min/2⌋
- For each e ∈ E, decode z + e using an (algebraic) errors-only decoding algorithm
- Compute the SD metric of each decoded word w.r.t. r, and select the one closest to r
- Better performance, but very high complexity
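For contrast with the later variants, a tiny sketch of the Chase-1 test set; its size C(n, ⌊d_min/2⌋) is exactly what makes this variant so expensive.

```python
import numpy as np
from itertools import combinations

def chase1_test_patterns(n, d_min):
    """Chase-1 test set: all length-n error patterns of weight exactly floor(d_min/2)."""
    for support in combinations(range(n), d_min // 2):
        e = np.zeros(n, dtype=int)
        e[list(support)] = 1
        yield e
```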

15 The Chase Algorithms: Chase-2
- From r, derive the HD word z and the reliability word |r|
- Produce the set E of 2^⌊d_min/2⌋ test error patterns: all possible error patterns confined to the ⌊d_min/2⌋ LRPs
- For each e ∈ E, decode z + e using an (algebraic) errors-only decoding algorithm
- Compute the SD metric of each decoded word w.r.t. r, and select the one closest to r
- Better performance, but higher complexity, than Chase-3
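The Chase-2 test set is small enough to enumerate directly; a sketch under the same assumptions as before (each pattern is then added to z and passed to the errors-only decoder, as in the LRP sketch above).

```python
import numpy as np
from itertools import product

def chase2_test_patterns(r, d_min):
    """Chase-2 test set: all 2^(d_min//2) error patterns confined to the
    d_min//2 least reliable positions of the received vector r."""
    r = np.asarray(r, dtype=float)
    lrps = np.argsort(np.abs(r))[:d_min // 2]    # the floor(d_min/2) LRPs
    for bits in product((0, 1), repeat=len(lrps)):
        e = np.zeros(len(r), dtype=int)
        e[lrps] = bits
        yield e
```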

16 The Chase Algorithms: Chase-3
- From r, derive the HD word z and the reliability word |r|
- Produce a list of ⌈(d_min + 1)/2⌉ modified words by complementing:
  - If d_min is even: no position; the least reliable, the three least reliable, ..., the d_min - 1 least reliable positions
  - If d_min is odd: no position; the two least reliable, the four least reliable, ..., the d_min - 1 least reliable positions
- For each modified word in the list, decode using an (algebraic) errors-only decoding algorithm
- Compute the SD metric of each decoded word w.r.t. r, and select the one closest to r

17 Generalized Chase and GMD Decoding
- Generalizations differ in how the test error patterns are chosen
- Choose a number a ∈ {1, 2, ..., ⌊d_min/2⌋}
- Algorithm A(a) uses the error set E(a), consisting of the following 2^(a-1)·(⌈(d_min + 1)/2⌉ - a + 1) vectors (for even d_min):
  - All 2^(a-1) error patterns confined to the a - 1 LRPs
  - For each of the preceding error patterns, also complement the next i least reliable positions, for i = 0, 1, 3, ..., d_min - 2a + 1
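A sketch that generates E(a) exactly as described on this slide (even d_min assumed): with a = 1 it reproduces the Chase-3 list, and with a = ⌊d_min/2⌋ the Chase-2 list.

```python
import numpy as np
from itertools import product

def generalized_chase_patterns(r, d_min, a):
    """Test set E(a) of algorithm A(a), built as on the slide (even d_min assumed)."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    order = np.argsort(np.abs(r))                        # least reliable positions first
    extra = [0] + list(range(1, d_min - 2 * a + 2, 2))   # i = 0, 1, 3, ..., d_min - 2a + 1
    for bits in product((0, 1), repeat=a - 1):           # patterns on the a - 1 LRPs
        for i in extra:
            e = np.zeros(n, dtype=int)
            e[order[:a - 1]] = np.array(bits, dtype=int)
            e[order[a - 1:a - 1 + i]] = 1                # also complement the next i LRPs
            yield e
```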

18 The Generalized Chase Algorithm A(a)
- From r, derive the HD word z and the reliability word |r|
- Generate the error patterns in E(a) (in likelihood order?)
- For each modified word z + e, decode using an (algebraic) errors-only decoding algorithm
- Compute the SD metric of each decoded word w.r.t. r, and select the one closest to r
- A(1): Chase-3
- A(⌊d_min/2⌋): Chase-2

19 The Generalized Chase Algorithm A_e(a)
- Similar to A(a), but uses the error set E_e(a), formed by (for even d_min):
  - All 2^(a-1) error patterns confined to the a - 1 LRPs
  - For each of the preceding error patterns, also erase the next i least reliable positions, for i = 0, 1, 3, ..., d_min - 2a + 1
- Decode with an errors-and-erasures decoder
- A_e(1): the basic GMD decoder
- The generalized algorithms achieve bounded-distance decoding: if the received sequence is within Euclidean distance sqrt(d_min) of the transmitted sequence, decoding is correct
- The algorithms are similar to each other in their basic properties (performance, complexity)

20 Performance curves: (64,42,8) RM code

21 Performance curves: (127,64,21) code

22 Suggested exercises: 10.3, 10.4, 10.5, 10.1