Slide 1: Belief-Propagation with Information Correction: Near Maximum-Likelihood Decoding of LDPC Codes
Ned Varnica+, Marc Fossorier#, Alek Kavčić+
+ Division of Engineering and Applied Sciences, Harvard University
# Department of Electrical Engineering, University of Hawaii
March 2004
Slide 2: Outline
- Motivation: BP vs. ML decoding
- Improved iterative decoder for LDPC codes
- Types of BP decoding errors
- Simulation results
Slide 3: LDPC Code Graph
- Bipartite Tanner code graph G = (V, E, C)
  - Variable (symbol) nodes v_i ∈ V, i = 0, 1, ..., N-1
  - Parity-check nodes c_j ∈ C, j = 0, 1, ..., N_c - 1
- Parity-check matrix H of size N_c × N; a non-zero entry in H corresponds to an edge in G
- Code rate R = k/N, with k ≥ N - N_c
- Belief propagation: iterative propagation of conditional probabilities
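The correspondence between H and the Tanner graph on this slide can be sketched in a few lines of Python; the toy 2×4 matrix and the function names here are illustrative, not from the talk:

```python
import numpy as np

def tanner_edges(H):
    """Each non-zero entry H[j, i] corresponds to an edge between
    check node c_j and variable node v_i in the Tanner graph G."""
    return [(int(j), int(i)) for j, i in zip(*np.nonzero(H))]

def is_codeword(H, x):
    """x is a codeword iff the syndrome H x (mod 2) is all-zero."""
    return not np.any(H.dot(x) % 2)

# Toy N_c x N = 2 x 4 parity-check matrix (so k >= N - N_c = 2)
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
print(tanner_edges(H))                          # 6 edges, one per non-zero entry
print(is_codeword(H, np.array([1, 1, 1, 0])))   # True: both checks satisfied
```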
Slide 4: Standard Belief Propagation on LDPC Codes
- Locally operating: optimal for cycle-free graphs, sub-optimal for graphs with cycles
- Optimized LDPC code and encoder constructions (Luby et al. 98; Richardson, Shokrollahi & Urbanke 99; Hou, Siegel & Milstein 01; Varnica & Kavcic 02)
- Good finite LDPC codes have an exponential number of cycles in their Tanner graphs (Etzion, Trachtenberg & Vardy 99)
- The BP-to-ML performance gap is due to convergence to pseudo-codewords (Wiberg 95; Forney et al. 01; Koetter & Vontobel 03)
Slide 5: Examples
- Short codes, e.g., the Tanner code with N = 155, k = 64, diameter = 6, girth = 8, d_min = 20
- Long codes, e.g., the Margulis code with N = 2640, k = 1320
Slide 6: Goals
- Construct a decoder with:
  - Improved BP decoding performance
  - More flexibility in the performance-versus-complexity trade-off
  - Nearly ML performance at a much lower computational burden
- Reduce or eliminate LDPC error floors
- Applications:
  - Works with any "off-the-shelf" LDPC encoder
  - Applies to any communication/data-storage channel
Slide 7: Subgraph Definitions
- Setup: a binary vector x ∈ {0,1}^N is transmitted over the channel; r ∈ R^N is the received vector; a BCJR detector feeds the BP decoder, whose output after L iterations is the decoded vector x̂^(L) ∈ {0,1}^N
- Syndrome: s = H x̂^(L)
- Set of unsatisfied check nodes (SUC): C_S^(L) = {c_i : (H x̂^(L))_i ≠ 0}
- V_S^(L): set of variable nodes incident to some c ∈ C_S^(L)
- E_S^(L): set of edges connecting V_S^(L) and C_S^(L)
- Definition 1: the SUC graph G_S^(L) = (V_S^(L), E_S^(L), C_S^(L)) is the graph induced by the SUC C_S^(L)
- d_Gs(v): degree of v ∈ V in the SUC graph G_S^(L); note that d_Gs(v) ≤ d_G(v)
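Definition 1 is straightforward to compute: given H and the tentative decisions x̂^(L), the SUC graph follows from the syndrome. A minimal Python sketch (the toy matrix and variable names are assumptions for illustration):

```python
import numpy as np

def suc_subgraph(H, x_hat):
    """Given the tentative decision x_hat after L BP iterations, return the
    set of unsatisfied checks C_S, the incident variable nodes V_S, and the
    SUC degrees d_Gs(v) of all variable nodes (0 for nodes outside V_S)."""
    syndrome = H.dot(x_hat) % 2
    C_S = np.flatnonzero(syndrome)      # unsatisfied check nodes
    d_Gs = H[C_S].sum(axis=0)           # per-variable degree in G_S
    V_S = np.flatnonzero(d_Gs)          # variables touching some c in C_S
    return C_S, V_S, d_Gs

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
x_hat = np.array([0, 1, 1, 0])          # not a codeword: check c_0 fails
C_S, V_S, d_Gs = suc_subgraph(H, x_hat)
print(C_S, V_S, d_Gs)                   # [0] [0 1 3] [1 1 0 1]
```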
Slide 8: Properties of the SUC Graph
- Observation 1: the higher the degree d_Gs(v) of a node v ∈ V_S^(L), the more likely v is to be in error
- Statistics for Tanner (155,64) code blocks on which BP failed (AWGN channel, SNR = 2.5 dB):

  d_Gs                                   | 0   | 1   | 2    | 3
  Channel LLR ( log(p_true / p_false) )  | 2.8 | 2.3 | 1.5  | 1.1
  LLR messages from check nodes          | 3.6 | 1.6 | 0.2  | -0.7
  Percentage of variable nodes in error  | 8.1 | 9.3 | 31.2 | 51.1

- Approach: select a v node, then perform information correction
Slide 9: Node Selection Strategy 1
- Strategy 1: determine the SUC graph and select the node with maximal degree d_Gs in G_S^(L)
- Example: select node v_0, v_2, or v_12
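Strategy 1 amounts to an argmax over the SUC degrees. A tiny sketch, using a hypothetical degree vector shaped after the slide's example where v_0, v_2, and v_12 share the maximal degree:

```python
def select_strategy1(d_Gs):
    """Strategy 1: pick the variable node with maximal SUC-graph degree
    (ties broken by lowest index here; the slide allows any of them)."""
    return max(range(len(d_Gs)), key=lambda v: d_Gs[v])

# Hypothetical SUC degrees: nodes 0, 2, and 12 share the maximum of 2
d_Gs = [2, 1, 2, 0, 0, 1, 0, 0, 1, 0, 0, 0, 2]
print(select_strategy1(d_Gs))   # 0
```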
Slide 10: Properties of the SUC Graph, cont'd
- Observation 2: the smaller the number of a node's neighbors (with respect to the SUC graph) that have high degree, the more likely v is to be in error
- Definition 2: nodes v_1 and v_2 are neighbors with respect to the SUC if there exists c ∈ C_S^(L) incident to both v_1 and v_2
- n_v(m): number of neighbors of v with degree d_Gs = m
- Example: n_v(2) = 1 and n_v(1) = 4
Slide 11: Node Selection Strategy 2
- Strategy 2: among nodes with maximal degree d_Gs, select a node with the minimal number of highest-degree neighbors
- Example: select node v_0, since n_v0(2) = n_v12(2) = 1 while n_v2(2) = 2, and n_v0(1) = 4 < n_v12(1) = 6
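Strategy 2 can be sketched by counting, for each maximal-degree candidate, its maximal-degree neighbors under Definition 2. The toy parity-check matrix below is an assumption constructed so that three nodes tie on degree but differ in their neighbor counts:

```python
import numpy as np

def select_strategy2(H, C_S):
    """Strategy 2: among variable nodes of maximal SUC degree, pick one with
    the fewest highest-degree neighbors w.r.t. the SUC (Definition 2: two
    variable nodes are neighbors if some unsatisfied check touches both)."""
    d_Gs = H[C_S].sum(axis=0)              # per-variable degree in the SUC graph
    d_max = d_Gs.max()
    candidates = np.flatnonzero(d_Gs == d_max)

    def n_high(v):
        nbrs = set()
        for c in C_S:
            if H[c, v]:
                nbrs.update(np.flatnonzero(H[c]))
        nbrs.discard(v)
        return sum(1 for u in nbrs if d_Gs[u] == d_max)

    return int(min(candidates, key=n_high))

# Toy SUC: four unsatisfied checks over six variables; v0, v1, v2 all have
# degree 2, but v1 neighbors two max-degree nodes while v0 and v2 neighbor
# only one each, so v0 (lowest index among the best) is chosen
H = np.array([[1, 1, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 0]])
print(select_strategy2(H, [0, 1, 2, 3]))   # 0
```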
Slide 12: Alternatives to Strategy 2
- Maximal SUC degree: d_Gs^max = max_{v ∈ V} d_Gs(v)
- Set of suspicious nodes: S_v = {v : d_Gs(v) = d_Gs^max}
- Edge penalty function, where N_c denotes the set of v nodes incident to c:
  r(v,c) = max_{v_n ∈ N_c \ {v}} d_Gs(v_n) if N_c \ {v} ≠ ∅, and r(v,c) = 0 if N_c \ {v} = ∅
- Penalty function: R(v) = Σ_{c ∈ C_S} r(v,c)
- Select v_p ∈ S_v as v_p = argmin_{v ∈ S_v} R(v)
- Numerous related approaches are possible
Slide 13: Node Selection Strategy 3
- Observation 3: a variable node v is more likely to be incorrect if its decoder input is less reliable, i.e., if |O(v)| is lower
- Strategy 3: among nodes with maximal degree d_Gs, select the node with minimal input reliability |O(v)|
- O(v_i) denotes the decoder input LLR on node v_i; for a memoryless AWGN channel with BPSK signaling and noise variance σ², O(v_i) = 2 r_i / σ²
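Strategy 3 replaces the neighbor count with a reliability test on the decoder input. A sketch under the standard BPSK-over-AWGN LLR formula O(v_i) = 2 r_i / σ² (the received samples and degrees below are made up for illustration):

```python
import numpy as np

def select_strategy3(d_Gs, O):
    """Strategy 3: among variable nodes of maximal SUC degree, pick the one
    whose input LLR magnitude |O(v)| is smallest (least reliable input)."""
    d_max = d_Gs.max()
    candidates = np.flatnonzero(d_Gs == d_max)
    return int(min(candidates, key=lambda v: abs(O[v])))

sigma2 = 0.5                              # assumed AWGN noise variance
r = np.array([0.9, -0.1, 0.3, -1.2])      # hypothetical received samples
O = 2.0 * r / sigma2                      # decoder-input LLRs
d_Gs = np.array([2, 2, 2, 1])             # v0, v1, v2 tie on SUC degree
print(select_strategy3(d_Gs, O))          # 1: |O(v_1)| = 0.4 is smallest
```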
Slide 14: Message Passing - Notation
- Set of log-likelihood-ratio messages on v nodes: M = (C_V, O), where C_V collects the messages from check nodes to variable nodes
- Decoder input: O = [O(v_0), ..., O(v_N-1)]^T
- Channel detector (BCJR) input: B = [B(v_0), ..., B(v_N-1)]^T
Slide 15: Symbol Correction Procedures
- Replace the decoder and detector input LLRs corresponding to the selected node v_p with saturated values:
  1. O(v_p) = +S and B(v_p) = +S
  2. O(v_p) = -S and B(v_p) = -S
- Perform correction in stages: test 2^j combinations at stage j, and for each test perform K_j additional iterations
- Maximum number of attempts (stages): j_max
- [Figure: binary tree of test combinations over stages j = 1, 2, 3]
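The staged testing above can be sketched as an enumerator of saturation patterns; each attempt assigns ±S to the nodes selected so far (the saturation value S = 25.0 and the node indices are arbitrary illustrations):

```python
from itertools import product

def correction_attempts(nodes, S=25.0):
    """At stage j, the first j selected nodes are each forced to +S or -S
    (a large LLR magnitude standing in for a hard 0 or 1), giving 2^j
    combinations per stage, as in the slide's correction procedure."""
    for j in range(1, len(nodes) + 1):
        for signs in product((+S, -S), repeat=j):
            yield dict(zip(nodes[:j], signs))

# Two selected nodes -> 2 attempts at stage 1 plus 4 at stage 2
attempts = list(correction_attempts([4, 7], S=25.0))
print(len(attempts))    # 6
print(attempts[0])      # {4: 25.0}
```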
Slide 16: Symbol Correction Procedures, cont'd
- "Codeword listing" approach:
  - Test all 2^j_max possibilities
  - W: collection of valid codeword candidates
  - Pick the most likely candidate; e.g., for the AWGN channel set x̂ = argmin_{w ∈ W} d(r, w)
- "First codeword" approach:
  - Stop at the first valid codeword
  - Faster convergence, slightly worse performance for large j_max
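The two stopping rules can be sketched side by side. The BPSK mapping (0 → +1, 1 → −1) used in the distance is an assumption, as are the toy matrix and candidate list:

```python
import numpy as np

def pick_codeword(H, r, W, first=False):
    """'First codeword': return the first valid candidate found.
    'Codeword listing': among all valid candidates, return the one
    minimising the Euclidean distance d(r, w) to the received vector
    (the argmin rule from the slide, under an assumed BPSK mapping)."""
    valid = [w for w in W if not np.any(H.dot(w) % 2)]
    if not valid:
        return None                      # no attempt produced a codeword
    if first:
        return valid[0]
    return min(valid, key=lambda w: float(np.sum((r - (1 - 2 * np.asarray(w))) ** 2)))

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
# Candidates produced by the staged tests; the last one fails the checks
W = [[0, 0, 0, 0], [1, 1, 1, 0], [0, 1, 0, 1], [1, 1, 1, 1]]
r = np.array([-0.8, -1.1, -0.9, 0.7])    # noisy BPSK observation of 1110
print(pick_codeword(H, r, W))            # [1, 1, 1, 0]: closest valid codeword
print(pick_codeword(H, r, W, first=True))  # [0, 0, 0, 0]: first valid one
```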
Slide 17: Parallel and Serial Implementation (j_max = 3)
- [Figure: parallel and serial orderings of the test-combination tree over stages j = 1, 2, 3]
Slide 18: Complexity - Parallel Implementation
- Decoding continued:
  - Messages M need to be stored (storage O(2^j_max))
  - Lower K_j required
  - "First codeword" procedure gives the fastest convergence
- Decoding restarted:
  - Messages M need not be stored
  - Higher K_j required
Slide 19: Can We Achieve ML?
- Fact 1: as j_max → N, the "codeword listing" algorithm with K_j = 0 for j < j_max and K_j_max = 1 becomes an ML decoder
- For low values of j_max (j_max << N) it performs very close to an ML decoder
- Example: Tanner (N = 155, k = 64) code with j_max = 11, K_j = 10, decoding continued (faster decoding, M stored): ML performance almost achieved
Slide 20: Pseudo-Codeword Elimination
- Pseudo-codewords compete with codewords in locally operating BP decoding (Koetter & Vontobel 2003)
- c̃: a codeword in an m-cover of G
- ω_i: fraction of the time v_i ∈ V assumes an incorrect value in c̃
- ω = (ω_0, ω_1, ..., ω_N-1): pseudo-codeword, with an associated pseudo-distance (for AWGN)
- Forcing symbol '0' or '1' on nodes v_p eliminates a large number of pseudo-codewords:
  - Pseudo-distance spectra improved
  - Minimum pseudo-distance can increase if j_max is large enough
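The slide's pseudo-distance expression is not legible in this transcript; the closed form below is the AWGN pseudo-distance from Koetter & Vontobel, included here as a sketch under that assumption:

```python
import numpy as np

def pseudo_distance(omega):
    """AWGN pseudo-distance of a pseudo-codeword omega, assumed here to be
    the Koetter-Vontobel form d_p(omega) = (sum_i omega_i)^2 / sum_i omega_i^2.
    For a 0/1 codeword this reduces to its Hamming weight."""
    omega = np.asarray(omega, dtype=float)
    return omega.sum() ** 2 / np.square(omega).sum()

print(pseudo_distance([1, 1, 1, 1]))        # 4.0: weight of a 0/1 codeword
print(pseudo_distance([0.5, 0.5, 0.0, 0.0]))  # 2.0: a fractional pseudo-codeword
```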
Slide 21: Types of BP Decoding Errors
1. Very high SNRs (error-floor region): stable errors on saturated subgraphs; the decoder reaches a steady state and fails, and the messages passed in the SUC graph saturate
2. Medium SNRs (waterfall region): unstable errors; the decoder does not reach a steady state
- Definition 3: decoder D has reached a steady state in the interval [L_1, L_2] if C_S^(L) = C_S^(L_1) for all L ∈ [L_1, L_2]
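Definition 3 can be checked directly on a per-iteration history of SUC sets. A minimal sketch (the history below is hypothetical):

```python
def reached_steady_state(suc_history, L1, L2):
    """Definition 3: the decoder is in a steady state on [L1, L2] if the set
    of unsatisfied checks is identical at every iteration in the interval."""
    return all(suc_history[L] == suc_history[L1] for L in range(L1, L2 + 1))

# Hypothetical SUC sets per iteration: stable from iteration 2 onwards
history = [{0, 3, 5}, {3, 5}, {5}, {5}, {5}]
print(reached_steady_state(history, 2, 4))   # True
print(reached_steady_state(history, 0, 2))   # False
```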
Slide 22: SUC Properties in the Error-Floor Region
- Theorem 1: in the error-floor region [equation not recoverable from the source]
- Corollary: for regular LDPC codes [equation not recoverable from the source]
- Information correction at high SNRs (error-floor region):
  - Pros: small SUC, faster convergence
  - Cons: d_Gs plays no role in node selection
Slide 23: Simulation Results
- Tanner (155,64) code: regular (3,5) code, AWGN channel
- Strategy 3, j_max = 11, K_j = 10
- More than 1 dB gain; ML performance almost achieved
Slide 24: Simulation Results
- Tanner (155,64) code: regular (3,5) code, AWGN channel
- Strategy 3, "first codeword" procedure
- j_max = 4, 6, 8, and 11; K_j = 10
Slide 25: Simulation Results - Error Floors
- Margulis (2640,1320) code: regular (3,6) code, AWGN channel
- Strategy 3, "first codeword" procedure, j_max = 5, K_j = 20
- More than two orders of magnitude WER improvement
Slide 26: Simulation Results - ISI Channels
- Tanner (155,64) code
- Channels: dicode (1-D) and EPR4 (1-D)(1+D)^2
- Strategy 2, j_max = 11, K_j = 20
- 1 dB gain; 20% of the detected errors are also ML decoding errors
Slide 27: Conclusion
- Information correction in BP decoding of LDPC codes:
  - More flexibility in the performance-versus-complexity trade-off
  - Nearly ML performance at a much lower computational burden
  - Eliminates a large number of pseudo-codewords
  - Reduces or eliminates LDPC error floors
- Applications:
  - Works with any "off-the-shelf" LDPC encoder
  - Applies to any communication/data-storage channel