Slide 1: Reliability-Based SD Decoding
- Not applicable only to graph-based codes; may even exploit some algebraic structure
- An SD alternative to trellis decoding and to iterative decoding
- ML soft decoding for general codes is hard to implement
- Performance: somewhere between HD decoding and full ML SD decoding

Slide 2: Correlation Discrepancy
Send a binary codeword (v_0, ..., v_{n-1}); modulate it into the bipolar sequence (c_0, ..., c_{n-1}); receive the real vector (r_0, ..., r_{n-1}).

P(r_i | v_i) = K e^{-(r_i - c_i)^2 / N_0}
P(r_i | v_i = 1) / P(r_i | v_i = 0) = K e^{-(r_i - 1)^2 / N_0} / K e^{-(r_i + 1)^2 / N_0}
log( P(r_i | v_i = 1) / P(r_i | v_i = 0) ) ∝ r_i

Receive r. Decode to the codeword c that minimizes
Σ_i (r_i - c_i)^2 = Σ_i r_i^2 + n - 2 Σ_i r_i c_i,
i.e. maximize the correlation
m(r, v) = Σ_i r_i c_i = Σ_i |r_i| - 2 Σ_{i : r_i c_i < 0} |r_i|,
i.e. minimize the correlation discrepancy
λ(r, v) = Σ_{i : r_i c_i < 0} |r_i|.
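As a minimal illustration (not from the slides), here is a Python sketch of the two metrics, assuming the BPSK mapping c_i = 2v_i - 1; the function names are mine.

```python
import numpy as np

def correlation(r, v):
    """m(r, v) = sum_i r_i * c_i, with bipolar mapping c_i = 2*v_i - 1."""
    c = 2 * np.asarray(v) - 1
    return float(np.dot(r, c))

def discrepancy(r, v):
    """lambda(r, v) = sum of |r_i| over positions where r_i * c_i < 0."""
    r = np.asarray(r, dtype=float)
    c = 2 * np.asarray(v) - 1
    return float(np.abs(r)[r * c < 0].sum())

# Identity from the slide: m(r, v) = sum_i |r_i| - 2 * lambda(r, v)
r = np.array([0.9, -0.2, 1.4, -0.7])
v = np.array([1, 1, 0, 0])
assert abs(correlation(r, v) - (np.abs(r).sum() - 2 * discrepancy(r, v))) < 1e-12
```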

Slide 3: Reliability measures and decoding
Consider the received vector r. For each received symbol r_i, form:
- the hard decision z_i (z_i = 1 if r_i > 0, else z_i = 0)
- the reliability | log( P(r_i | v_i = 1) / P(r_i | v_i = 0) ) | ∝ |r_i|

As can be expected, z_i is more likely to be in error when |r_i| is small.
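A small sketch of the two quantities, under the same BPSK convention as above (function names are mine):

```python
import numpy as np

def hard_decision(r):
    """z_i = 1 if r_i > 0 else 0 (bipolar mapping 0 -> -1, 1 -> +1)."""
    return (np.asarray(r) > 0).astype(int)

def positions_by_reliability(r):
    """Indices sorted from least reliable (smallest |r_i|) to most reliable."""
    return np.argsort(np.abs(r))

r = np.array([0.9, -0.05, 1.4, -0.7])
print(hard_decision(r))             # [1 0 1 0]
print(positions_by_reliability(r))  # [1 3 0 2]
```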

Slide 4: Probability of errors in z_i: LRP vs MRP (figure)

Slide 5: Reliability and decoding: LRP
Decoding based on the set of Least Reliable Positions (LRPs):
- Assume that errors are more likely to occur in the LRPs
- Select a set E of error patterns e, confined to the LRPs
- For each e ∈ E, form the modified received vector z + e
- Decode z + e into a codeword c(e) ∈ C, by use of an efficient algebraic decoder
- The preceding steps give a list of candidate codewords. The final decoding step is to compare each of these codewords with r, and select the one closest in terms of squared Euclidean distance (see the sketch after this list)

Performance: depends on |E|. Complexity: depends on |E| and on the algebraic decoder.
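A minimal sketch of this procedure, assuming a caller-supplied algebraic decoder alg_decode(z) that returns a codeword or None; that interface is hypothetical, since the slides leave the decoder abstract:

```python
import numpy as np
from itertools import combinations

def lrp_decode(r, alg_decode, n_lrp=4):
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)
    lrps = np.argsort(np.abs(r))[:n_lrp]      # least reliable positions
    best, best_metric = None, np.inf
    # E: all error patterns confined to the LRPs
    for t in range(n_lrp + 1):
        for pos in combinations(lrps, t):
            ze = z.copy()
            ze[list(pos)] ^= 1                # modified received word z + e
            cand = alg_decode(ze)             # algebraic decoding of z + e
            if cand is None:
                continue
            c = 2 * np.asarray(cand) - 1
            metric = np.sum((r - c) ** 2)     # squared Euclidean distance to r
            if metric < best_metric:
                best, best_metric = cand, metric
    return best
```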

Slide 6: Reliability and decoding: MRP
Decoding based on the set of Most Reliable Positions (MRPs):
- Assume that errors are less likely to occur in the MRPs
- Select a set I of k independent MRPs (MRIPs)
- Select a set E of error patterns e, with 1s confined to the k MRIPs
- For each e ∈ E, form the modified information vector z_k + e and encode it into a codeword c(e) ∈ C
- The preceding steps give a list of candidate codewords. The final decoding step is to compare each of these codewords with r, and select the one closest in terms of squared Euclidean distance (see the sketch after this list)

Performance and complexity: depend on |E|.
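A matching sketch for the MRP variant; reencode(info_bits, mrips) is a hypothetical helper that completes a codeword from the k values placed on the MRIPs (in practice it comes from a column-permuted, reduced generator matrix):

```python
import numpy as np
from itertools import combinations

def mrp_decode(r, reencode, mrips, max_weight=2):
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)
    zk = z[mrips].copy()                      # hard decisions on the k MRIPs
    best, best_metric = None, np.inf
    k = len(mrips)
    # E: low-weight error patterns with 1s confined to the k MRIPs
    for t in range(max_weight + 1):
        for pos in combinations(range(k), t):
            e = np.zeros(k, dtype=int)
            e[list(pos)] = 1
            cand = reencode((zk + e) % 2, mrips)  # encode z_k + e
            c = 2 * np.asarray(cand) - 1
            metric = np.sum((r - c) ** 2)         # squared Euclidean distance
            if metric < best_metric:
                best, best_metric = cand, metric
    return best
```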

Slide 7: Condition on optimality
In both of the preceding algorithms, whenever we find a codeword that is good enough, we can terminate the process. What do we mean by "good enough"? We need an optimality condition.

Slide 8: Condition on optimality (cont.)
D_0(v) = { i : v_i = z_i, 0 ≤ i < n },  D_1(v) = { i : v_i ≠ z_i, 0 ≤ i < n }
n(v) = |D_1(v)| = d_H(v, z)
λ(r, v) = Σ_{i : r_i c_i < 0} |r_i| = Σ_{i ∈ D_1(v)} |r_i|

We want to find the codeword with the lowest correlation discrepancy: v* is the ML codeword if
λ(r, v*) ≤ min_{v ∈ C, v ≠ v*} λ(r, v).
The right-hand side is hard to evaluate, but we can hope to find a lower bound on it.

Let D_0^(j)(v) consist of the j elements of D_0(v) with lowest reliability |r_i|.
Let w_j be the j-th nonzero weight in the code (so w_1 = d_min).

Slide 9: Condition on optimality (cont.)
Thus D_0^(w_j - n(v))(v) consists of the w_j - n(v) elements of D_0(v) with lowest reliability |r_i|.

Theorem: If λ(r, v) ≤ Σ_{i ∈ D_0^(w_j - n(v))(v)} |r_i|, then the ML codeword for r is at a Hamming distance less than w_j from v.

Proof: Assume that v' is a codeword such that d_H(v, v') ≥ w_j. Then
λ(r, v') = Σ_{i ∈ D_1(v')} |r_i| ≥ Σ_{i ∈ D_0(v) ∩ D_1(v')} |r_i| ≥ Σ_{i ∈ D_0^(w_j - n(v))(v)} |r_i| ≥ λ(r, v),
so no such v' can beat v. The middle step holds because |D_0(v) ∩ D_1(v')| ≥ w_j - n(v):
|D_0(v) ∩ D_1(v')| + |D_1(v) ∩ D_0(v')| = d_H(v, v') ≥ w_j
|D_1(v')| ≥ |D_0(v) ∩ D_1(v')| ≥ w_j - |D_1(v) ∩ D_0(v')| ≥ w_j - |D_1(v)| = w_j - n(v)

Slide 10: Corollary
If λ(r, v) ≤ Σ_{i ∈ D_0^(w_1 - n(v))(v)} |r_i|, then v is the ML codeword.
If λ(r, v) ≤ Σ_{i ∈ D_0^(w_2 - n(v))(v)} |r_i|, then either v is the ML codeword, or the ML codeword is one of the nearest neighbours of v.
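A sketch of the first test in the corollary, following the slides' notation (w1 = d_min is the minimum distance; the function name is mine):

```python
import numpy as np

def is_ml_certified(r, v, w1):
    """True if the corollary's sufficient condition certifies v as ML."""
    r = np.asarray(r, dtype=float)
    v = np.asarray(v)
    z = (r > 0).astype(int)
    rel = np.abs(r)
    d1 = np.flatnonzero(v != z)            # D_1(v): disagreements with z
    d0 = np.flatnonzero(v == z)            # D_0(v): agreements with z
    lam = rel[d1].sum()                    # lambda(r, v)
    j = w1 - len(d1)                       # w_1 - n(v)
    if j <= 0:
        return False                       # bound degenerate; test inconclusive
    bound = np.sort(rel[d0])[:j].sum()     # j least reliable positions of D_0(v)
    return lam <= bound
```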

Slide 11: Optimality condition based on two observed codewords
A more sophisticated criterion can be applied if we know more than one codeword "close" to r. We skip the details.

Slide 12: Generalized Minimum Distance (GMD) Decoding
Forney (1966). Based on erasure decoding:
- Consider two codewords c and c' such that d = d_H(c, c'). Assume that c is sent.
- Assume an error-and-erasure channel producing t errors and e erasures.
- Then an errors-and-erasures decoder will decode to c if 2t + e ≤ d - 1.

GMD decoding considers all possible patterns of at most d_min - 1 erasures in the d_min - 1 LRPs.

Slide 13: GMD Decoding (on an AWGN channel)
- From r, derive the HD word z and the reliability word |r|.
- Produce a list of ⌊(d_min + 1)/2⌋ partly erased words by erasing:
  - if d_min is even: the least reliable, the three least reliable, ..., the d_min - 1 least reliable positions
  - if d_min is odd: no bit, the two least reliable, the four least reliable, ..., the d_min - 1 least reliable positions
- For each partly erased word in the list, decode by using an (algebraic) errors-and-erasures decoding algorithm.
- Compute the SD metric for each decoded word w.r.t. r. Select the one closest to r (see the sketch after this list).
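A minimal sketch of the loop, assuming a hypothetical errors-and-erasures decoder ee_decode(z, erasures) that returns a codeword or None:

```python
import numpy as np

def gmd_decode(r, ee_decode, d_min):
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)
    order = np.argsort(np.abs(r))             # LRPs first
    start = 1 if d_min % 2 == 0 else 0        # erase 1, 3, ... or 0, 2, ...
    best, best_metric = None, np.inf
    for n_erase in range(start, d_min, 2):    # up to d_min - 1 erasures
        cand = ee_decode(z, erasures=order[:n_erase])
        if cand is None:
            continue
        c = 2 * np.asarray(cand) - 1
        metric = np.sum((r - c) ** 2)         # SD metric w.r.t. r
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```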

Slide 14: The Chase algorithms: Chase-1
- From r, derive the HD word z and the reliability word |r|.
- Produce the set E of all error patterns of weight exactly ⌊d_min/2⌋.
- For each e ∈ E, decode z + e by using an (algebraic) errors-only decoding algorithm.
- Compute the SD metric for each decoded word w.r.t. r. Select the one closest to r (see the sketch after this list).

Better performance, very high complexity.
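For illustration, a generator for the Chase-1 test-pattern set (names are mine); its size, C(n, ⌊d_min/2⌋), is the source of the very high complexity:

```python
import numpy as np
from itertools import combinations

def chase1_patterns(n, d_min):
    """All length-n error patterns of weight exactly floor(d_min / 2)."""
    w = d_min // 2
    for pos in combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(pos)] = 1
        yield e
```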

Slide 15: The Chase algorithms: Chase-2
- From r, derive the HD word z and the reliability word |r|.
- Produce the set E of 2^⌊d_min/2⌋ test error patterns: all possible error patterns confined to the ⌊d_min/2⌋ LRPs.
- For each e ∈ E, decode z + e by using an (algebraic) errors-only decoding algorithm.
- Compute the SD metric for each decoded word w.r.t. r. Select the one closest to r (see the sketch after this list).

Better performance, higher complexity than Chase-3.
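A sketch of the Chase-2 test-pattern set:

```python
import numpy as np
from itertools import product

def chase2_patterns(r, d_min):
    """All 2^(d_min//2) patterns confined to the d_min//2 LRPs of r."""
    r = np.asarray(r, dtype=float)
    n, w = len(r), d_min // 2
    lrps = np.argsort(np.abs(r))[:w]          # least reliable positions
    for bits in product([0, 1], repeat=w):
        e = np.zeros(n, dtype=int)
        e[lrps] = bits
        yield e
```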

Slide 16: The Chase algorithms: Chase-3
- From r, derive the HD word z and the reliability word |r|.
- Produce a list of ⌊d_min/2⌋ + 1 modified words by complementing:
  - if d_min is even: no bit, the least reliable, the three least reliable, ..., the d_min - 1 least reliable positions
  - if d_min is odd: no bit, the two least reliable, the four least reliable, ..., the d_min - 1 least reliable positions
- For each modified word in the list, decode by using an (algebraic) errors-only decoding algorithm.
- Compute the SD metric for each decoded word w.r.t. r. Select the one closest to r (see the sketch after this list).
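A sketch of the Chase-3 modified-word list:

```python
import numpy as np

def chase3_words(r, d_min):
    """Complement the i least reliable bits of z, i = 0, 1, 3, ... (even d_min)
    or i = 0, 2, 4, ... (odd d_min), up to d_min - 1."""
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)
    order = np.argsort(np.abs(r))             # LRPs first
    counts = [0] + list(range(1 if d_min % 2 == 0 else 2, d_min, 2))
    for i in counts:
        w = z.copy()
        w[order[:i]] ^= 1
        yield w
```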

Slide 17: Generalized Chase and GMD decoding
The generalizations differ in how the test error patterns are chosen. Choose a number a ∈ {1, 2, ..., ⌊d_min/2⌋}. Algorithm A(a) uses the error set E(a), consisting of the following 2^(a-1) (⌊(d_min + 1)/2⌋ - a + 1) vectors (for even d_min):
- all 2^(a-1) error patterns confined to the a - 1 LRPs
- for each of the preceding error patterns, also complement the next i least reliable positions, i = 0, 1, 3, ..., d_min - 2a + 1

(A sketch of a generator for E(a) follows.)
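A sketch of a generator for E(a) under the even-d_min rule above (names are mine; for a = 1 it reproduces the Chase-3 list):

```python
import numpy as np
from itertools import product

def chase_Ea_patterns(r, d_min, a):
    """The generalized Chase error set E(a) for even d_min."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    order = np.argsort(np.abs(r))             # LRPs first
    tail = [0] + list(range(1, d_min - 2 * a + 2, 2))
    for bits in product([0, 1], repeat=a - 1):
        base = np.zeros(n, dtype=int)
        base[order[:a - 1]] = bits            # pattern on the a-1 LRPs
        for i in tail:
            e = base.copy()
            e[order[a - 1:a - 1 + i]] ^= 1    # complement the next i LRPs
            yield e
```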

Slide 18: The Generalized Chase algorithm A(a)
- From r, derive the HD word z and the reliability word |r|.
- Generate the error patterns in E(a) (in likelihood order?).
- For each modified word in the list, decode by using an (algebraic) errors-only decoding algorithm.
- Compute the SD metric for each decoded word w.r.t. r. Select the one closest to r (see the sketch after this list).

Special cases: A(1) is Chase-3; A(⌊d_min/2⌋) is Chase-2.
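To make the loop concrete, a sketch reusing chase_Ea_patterns from the previous block, with alg_decode again a hypothetical errors-only decoder returning a codeword or None:

```python
import numpy as np

def generalized_chase_decode(r, alg_decode, d_min, a):
    r = np.asarray(r, dtype=float)
    z = (r > 0).astype(int)
    best, best_metric = None, np.inf
    for e in chase_Ea_patterns(r, d_min, a):  # defined in the previous sketch
        cand = alg_decode((z + e) % 2)        # errors-only decoding of z + e
        if cand is None:
            continue
        c = 2 * np.asarray(cand) - 1
        metric = np.sum((r - c) ** 2)         # SD metric w.r.t. r
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```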

Slide 19: The Generalized Chase algorithm A_e(a)
Similar to A(a), but uses the error set E_e(a), formed by (for even d_min):
- all 2^(a-1) error patterns confined to the a - 1 LRPs
- for each of the preceding error patterns, also erase the next i least reliable positions, i = 0, 1, 3, ..., d_min - 2a + 1
- decode with an errors-and-erasures decoder

Special case: A_e(1) is the basic GMD decoder.

The generalized algorithms achieve bounded-distance decoding: if the received sequence is within Euclidean distance √d_min of the transmitted (bipolar) sequence, then decoding is correct. They are similar to each other in their basic properties (performance, complexity).
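The same generator with erasure in place of complementation gives E_e(a); each item pairs an error pattern with the positions to erase, ready for an errors-and-erasures decoder such as the hypothetical ee_decode above:

```python
import numpy as np
from itertools import product

def chase_Eea_patterns(r, d_min, a):
    """E_e(a) for even d_min: (error pattern, positions to erase) pairs."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    order = np.argsort(np.abs(r))             # LRPs first
    tail = [0] + list(range(1, d_min - 2 * a + 2, 2))
    for bits in product([0, 1], repeat=a - 1):
        e = np.zeros(n, dtype=int)
        e[order[:a - 1]] = bits               # pattern on the a-1 LRPs
        for i in tail:
            yield e.copy(), order[a - 1:a - 1 + i]  # erase the next i LRPs
```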

Slide 20: Performance curves for the (64,42,8) RM code (figure)

Slide 21: Performance curves for the (127,64,21) code (figure)

Slide 22: Suggested exercises
10.2, 10.3, 10.4, 10.5, 10.1

