1
Log-Likelihood Algebra
Sum of two LLRs (the LLR of the modulo-2 sum of the underlying bits):
L(d1) ⊞ L(d2) = L(d1 ⊕ d2) = ln{ [e^L(d1) + e^L(d2)] / [1 + e^L(d1) · e^L(d2)] }
≈ (−1) · sgn[L(d1)] · sgn[L(d2)] · min( |L(d1)|, |L(d2)| )
Special cases: L(d) ⊞ (+∞) = −L(d) and L(d) ⊞ 0 = 0
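A minimal Python sketch of the two forms above (exact log-domain sum and the min-sum approximation); the function names and the logic-1 → positive-LLR sign convention are illustrative assumptions, not part of the slides.

```python
import math

def boxplus(l1, l2):
    """Exact LLR of the modulo-2 sum of two bits with LLRs l1 and l2."""
    return math.log((math.exp(l1) + math.exp(l2)) /
                    (1.0 + math.exp(l1) * math.exp(l2)))

def boxplus_approx(l1, l2):
    """Min-sum approximation: (-1) * sgn(l1) * sgn(l2) * min(|l1|, |l2|)."""
    sgn = lambda v: 1.0 if v >= 0 else -1.0
    return -sgn(l1) * sgn(l2) * min(abs(l1), abs(l2))

print(boxplus(1.5, 2.5), boxplus_approx(1.5, 2.5))   # about -1.2 (exact) vs -1.5 (approx)
print(boxplus(3.0, 0.0), boxplus_approx(3.0, 0.0))   # L ⊞ 0 = 0
```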
2
Iterative decoding example
2-D single-parity product code: d_i ⊕ d_j = p_ij
Transmitted block:              Received values:
d1 = 1   d2 = 0   p12 = 1       x1 = 0.75   x2 = 0.05   x12 = 1.25
d3 = 0   d4 = 1   p34 = 1       x3 = 0.10   x4 = 0.15   x34 = 1.0
p13 = 1  p24 = 1                x13 = 3.0   x24 = 0.5
3
Iterative decoding example
Estimate the channel LLRs, Lc(x_k) = 2·x_k / σ², assuming σ² = 1:
Lc(x1) = 1.5    Lc(x2) = 0.1    Lc(x12) = 2.5
Lc(x3) = 0.2    Lc(x4) = 0.3    Lc(x34) = 2.0
Lc(x13) = 6.0   Lc(x24) = 1.0
4
Iterative decoding example
Compute the horizontal extrinsic LLRs, Le(d_i) = [Lc(x_j) + L(d_j)] ⊞ Lc(x_ij), where d_j is the other data bit in the same parity equation:
Leh(d1) = [Lc(x2) + L(d2)] ⊞ Lc(x12) → new L(d1)
Leh(d2) = [Lc(x1) + L(d1)] ⊞ Lc(x12) → new L(d2)
Leh(d3) = [Lc(x4) + L(d4)] ⊞ Lc(x34) → new L(d3)
Leh(d4) = [Lc(x3) + L(d3)] ⊞ Lc(x34) → new L(d4)
5
Iterative decoding example
Compute the vertical extrinsic LLRs:
Lev(d1) = [Lc(x3) + L(d3)] ⊞ Lc(x13) → new L(d1)
Lev(d2) = [Lc(x4) + L(d4)] ⊞ Lc(x24) → new L(d2)
Lev(d3) = [Lc(x1) + L(d1)] ⊞ Lc(x13) → new L(d3)
Lev(d4) = [Lc(x2) + L(d2)] ⊞ Lc(x24) → new L(d4)
After many iterations the LLR is computed for decision making:
L(d̂_i) = Lc(x_i) + Leh(d_i) + Lev(d_i)
6
First-pass output:
Lc(x1) = 1.5     Lc(x2) = 0.1     Lc(x3) = 0.2     Lc(x4) = 0.3
Leh(d1) = −0.1   Leh(d2) = −1.5   Leh(d3) = −0.3   Leh(d4) = −0.2
Lev(d1) = 0.1    Lev(d2) = −0.1   Lev(d3) = −1.4   Lev(d4) = 1.0
L(d1) = 1.5      L(d2) = −1.5     L(d3) = −1.5     L(d4) = 1.1
The signs already give the correct hard decisions d̂ = {1, 0, 0, 1} after a single pass.
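The first-pass numbers above can be reproduced with a short Python sketch that applies the min-sum ⊞ approximation from slide 1; the variable names and dictionary layout are illustrative choices, not from the slides.

```python
# Received values and channel LLRs Lc(x_k) = 2*x_k / sigma^2 with sigma^2 = 1
x = {'1': 0.75, '2': 0.05, '3': 0.10, '4': 0.15,
     '12': 1.25, '34': 1.0, '13': 3.0, '24': 0.5}
Lc = {k: 2.0 * v for k, v in x.items()}        # 1.5, 0.1, 0.2, 0.3, 2.5, 2.0, 6.0, 1.0

def bp(a, b):                                  # min-sum approximation of a ⊞ b
    s = (1 if a >= 0 else -1) * (1 if b >= 0 else -1)
    return -s * min(abs(a), abs(b))

L = {k: 0.0 for k in '1234'}                   # a priori LLRs start at zero

# Horizontal pass: extrinsic from the row partner and the row parity
Leh = {'1': bp(Lc['2'] + L['2'], Lc['12']),
       '2': bp(Lc['1'] + L['1'], Lc['12']),
       '3': bp(Lc['4'] + L['4'], Lc['34']),
       '4': bp(Lc['3'] + L['3'], Lc['34'])}
L = Leh                                        # becomes the a priori for the vertical pass

# Vertical pass: extrinsic from the column partner and the column parity
Lev = {'1': bp(Lc['3'] + L['3'], Lc['13']),
       '2': bp(Lc['4'] + L['4'], Lc['24']),
       '3': bp(Lc['1'] + L['1'], Lc['13']),
       '4': bp(Lc['2'] + L['2'], Lc['24'])}

for k in '1234':                               # soft output = channel + horizontal + vertical
    print(k, round(Lc[k] + Leh[k] + Lev[k], 1))   # 1.5, -1.5, -1.5, 1.1
```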
7
Parallel Concatenation Codes
Component codes are convolutional codes: Recursive Systematic Codes (RSC)
They should have maximum effective free distance
At large Eb/N0: maximize the minimum-weight codewords
At small Eb/N0: optimize the weight distribution of the codewords
Interleaving is used to avoid low-weight codewords
8
Non - Systematic Codes - NSC
Encoder register d_k, d_{k−1}, d_{k−2}; input {d_k}, outputs {u_k}, {v_k}:
u_k = Σ_{i=0..L−1} g_{1i} · d_{k−i}  (mod 2)
v_k = Σ_{i=0..L−1} g_{2i} · d_{k−i}  (mod 2)
with generator vectors G1 = [ g_{10} g_{11} g_{12} ] and G2 = [ g_{20} g_{21} g_{22} ]
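A minimal Python sketch of such an NSC encoder. The generators used here (G1 = [1,1,1], G2 = [1,0,1]) are a common textbook choice assumed for illustration; the slide's actual generator values are not visible in the extracted text.

```python
def nsc_encode(d, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Non-systematic convolutional encoder: u_k and v_k are modulo-2
    combinations of the current and past input bits d_k, d_{k-1}, d_{k-2}."""
    reg = [0] * (len(g1) - 1)                  # shift register: d_{k-1}, d_{k-2}
    out = []
    for dk in d:
        bits = [dk] + reg
        uk = sum(g * b for g, b in zip(g1, bits)) % 2
        vk = sum(g * b for g, b in zip(g2, bits)) % 2
        out.append((uk, vk))
        reg = [dk] + reg[:-1]                  # shift the new input bit in
    return out

print(nsc_encode([1, 0, 0, 1]))
```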
9
Recursive Systematic Codes - RSC
Encoder register a_k, a_{k−1}, a_{k−2}; input {d_k}, outputs {u_k}, {v_k}:
a_k = d_k + Σ_{i=1..L−1} g_i' · a_{k−i}  (mod 2)
where g_i' = g_{1i} if u_k = d_k, and g_i' = g_{2i} if v_k = d_k
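A corresponding Python sketch of the recursive systematic encoder, again assuming the illustrative generators G1 = [1,1,1] (feedback) and G2 = [1,0,1] (parity); the systematic output is u_k = d_k.

```python
def rsc_encode(d, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Recursive systematic encoder: a_k = d_k + sum_i g'_i * a_{k-i} (mod 2)
    feeds the register; the parity v_k is formed from a_k and the register."""
    reg = [0] * (len(g1) - 1)                                     # register: a_{k-1}, a_{k-2}
    out = []
    for dk in d:
        ak = (dk + sum(g * a for g, a in zip(g1[1:], reg))) % 2   # feedback taps g'_i
        vk = sum(g * b for g, b in zip(g2, [ak] + reg)) % 2       # parity output
        out.append((dk, vk))                                      # (systematic, parity)
        reg = [ak] + reg[:-1]
    return out

print(rsc_encode([1, 0, 0, 1]))
```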
10
Trellis for NSC & RSC
[Figure: trellis diagrams for the NSC and the RSC encoders, with states a = 00, b = 01, c = 10, d = 11 and branch labels giving the output bit pairs.]
11
Concatenation of RSC Codes
The input {d_k} drives one RSC encoder directly (parity {v1_k}) and, through an interleaver, a second RSC encoder (parity {v2_k}); the systematic stream {u_k} = {d_k} is transmitted with both parity streams.
Input patterns of the form { 0 0 … … 0 0 } can produce low-weight codewords in the component coders; the interleaver permutes the input so that both coders rarely produce low-weight codewords for the same pattern.
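A sketch of the parallel concatenation in Python, reusing the hypothetical rsc_encode() from the RSC sketch above; the block length, input pattern, and random interleaver are illustrative.

```python
import random

def turbo_encode(d, interleaver):
    """Parallel concatenation: encoder 1 sees {d_k} in natural order, encoder 2
    sees the interleaved sequence; u_k = d_k is sent once, together with both
    parity streams (rate 1/3 before any puncturing)."""
    v1 = [v for _, v in rsc_encode(d)]                            # parity of RSC 1
    v2 = [v for _, v in rsc_encode([d[i] for i in interleaver])]  # parity of RSC 2
    return list(zip(d, v1, v2))                                   # (u_k, v1_k, v2_k)

d = [1, 0, 1, 1, 0, 0, 1, 0]
pi = list(range(len(d)))
random.shuffle(pi)                                                # toy random interleaver
print(turbo_encode(d, pi))
```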
12
Feedback Decoder
Joint probability: λ_k^{i,m} = P{ d_k = i, S_k = m | R_1^N }
where S_k = m is the state at time k and R_1^N is the received sequence from time 1 to N.
APP: P{ d_k = i | R_1^N } = Σ_m λ_k^{i,m},  i = 0, 1 for binary data
Likelihood ratio: Λ(d_k) = Σ_m λ_k^{1,m} / Σ_m λ_k^{0,m}
Log-likelihood ratio: L(d_k) = log [ Σ_m λ_k^{1,m} / Σ_m λ_k^{0,m} ]
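As a small illustration of the APP-to-LLR step, the sketch below computes the LLR from a table of joint probabilities λ_k^{i,m}; the numbers are made up purely to show the shape of the computation.

```python
import math

def llr_from_joint(lam):
    """lam[i][m] = P{d_k = i, S_k = m | R_1^N}; the LLR is the log of the ratio
    of the state sums for i = 1 and i = 0."""
    return math.log(sum(lam[1]) / sum(lam[0]))

print(llr_from_joint([[0.05, 0.10, 0.02, 0.03],    # i = 0, states m = 0..3
                      [0.40, 0.25, 0.10, 0.05]]))  # i = 1 -> positive LLR, decide d_k = 1
```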
13
Feedback Decoder
MAP rule: d̂_k = 1 if L(d̂_k) > 0; d̂_k = 0 if L(d̂_k) < 0
Soft output: L(d̂_k) = Lc(x_k) + L(d_k) + Le(d̂_k)
Decoder 1: L1(d̂_k) = Lc(x_k) + Le1(d̂_k)
Decoder 2: L2(d̂_k) = f[ {L1(d̂_n)}_{n≠k} ] + Le2(d̂_k)
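A minimal sketch of the decision rule and the soft-output composition above; the function names are illustrative.

```python
def soft_output(Lc_x, L_apriori, L_extrinsic):
    """Component-decoder soft output: channel value + a priori + extrinsic."""
    return Lc_x + L_apriori + L_extrinsic

def map_decision(L):
    """MAP rule: decide 1 when L(d_k) > 0, otherwise 0."""
    return 1 if L > 0 else 0

print(map_decision(soft_output(1.5, 0.0, -0.1)))   # -> 1
```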
14
Feedback Decoder
[Figure: feedback decoder block diagram. The demultiplexed channel values x_k and y1_k feed DECODER 1; its output L1(d̂_k) is interleaved and passed, together with y2_k, to DECODER 2. The extrinsic Le2(d̂_k) from DECODER 2 is de-interleaved and fed back to DECODER 1, and the de-interleaved soft output L2(d̂_k) yields the decision d̂_k.]
15
Modified MAP vs. SOVA
SOVA: Viterbi algorithm acting on soft inputs over the forward path of the trellis for a block of bits; add the branch metric to the state metric, compare, and select the maximum-likelihood path.
Modified MAP: Viterbi-like algorithm acting on soft inputs over the forward and reverse paths of the trellis for a block of bits; multiply branch metrics and state metrics and sum over both directions to obtain the best overall statistic.
16
MAP Decoding Example
[Figure: NSC encoder (register d_k, d_{k−1}, d_{k−2}; input {d_k}; outputs {u_k}, {v_k}) and its four-state trellis with states a = 00, b = 10, c = 01, d = 11, branch labels giving the output bit pairs u_k v_k.]
17
MAP Decoding Example
d = { 1, 0, 0 }
u = { 1, 0, 0 }   x = { 1.0, 0.5, −0.6 }
v = { 1, 0, 1 }   y = { 0.8, 0.2, 1.2 }
A priori probabilities: π_k^1 = π_k^0 = 0.5
Branch metric:
δ_k^{i,m} = P{ d_k = i, S_k = m, R_k } = P{ R_k | d_k = i, S_k = m } · P{ S_k = m | d_k = i } · P{ d_k = i }
With P{ S_k = m | d_k = i } = 1/2^{L−1} = 1/4 (four equally likely states) and P{ d_k = i } = 1/2:
δ_k^{i,m} = P{ x_k | d_k = i, S_k = m } · P{ y_k | d_k = i, S_k = m } · ( π_k^i / 2^{L−1} )
18
MAP Decoding Example
For an AWGN channel:
δ_k^{i,m} = ( π_k^i / 2^{L−1} ) · (1/√(2πσ²)) exp{ −(x_k − u_k^i)² / (2σ²) } dx_k · (1/√(2πσ²)) exp{ −(y_k − v_k^{i,m})² / (2σ²) } dy_k
          = A_k · π_k^i · exp{ ( x_k·u_k^i + y_k·v_k^{i,m} ) / σ² }
where A_k collects the factors that are common to all branches and cancel in the likelihood ratio. Assuming A_k = 1 and σ² = 1:
δ_k^{i,m} = 0.5 · exp{ x_k·u_k^i + y_k·v_k^{i,m} }
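A one-function Python sketch of this simplified branch metric; the argument names and the bipolar (±1) representation of u and v are assumptions consistent with the slides.

```python
import math

def branch_metric(xk, yk, u, v, pi_i=0.5, sigma2=1.0, Ak=1.0):
    """delta_k^{i,m} = A_k * pi_k^i * exp{(x_k*u_k^i + y_k*v_k^{i,m}) / sigma^2};
    with A_k = 1, sigma^2 = 1 and equal priors this is 0.5 * exp{x*u + y*v}.
    u, v are the bipolar (+1/-1) encoder outputs for input i leaving state m."""
    return Ak * pi_i * math.exp((xk * u + yk * v) / sigma2)

print(branch_metric(1.0, 0.8, +1, +1))   # first step of the example: 0.5 * exp(1.8)
```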
19
Subsequent steps
Calculate the branch metric:
δ_k^{i,m} = 0.5 · exp{ x_k·u_k^i + y_k·v_k^{i,m} }
Calculate the forward state metric:
α_{k+1}^m = Σ_{j=0,1} α_k^{b(j,m)} · δ_k^{j,b(j,m)}
Calculate the reverse state metric:
β_k^m = Σ_{j=0,1} β_{k+1}^{f(j,m)} · δ_k^{j,m}
where f(j,m) is the next state reached from state m with input j, and b(j,m) is the previous state from which input j leads to state m.
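A generic Python sketch of the α/β recursions above over an arbitrary trellis; the b(·,·)/f(·,·) callbacks, the delta layout, and the assumption that the trellis starts and ends in state 0 are illustrative choices, not taken from the slides.

```python
def forward_backward(delta, b, f, n_states, N):
    """alpha/beta recursions over N trellis steps.
    delta[k][j][m] : branch metric at time k for input j leaving state m
    b(j, m), f(j, m): previous / next state for input j and state m
    Boundary conditions assume the trellis starts and ends in state 0."""
    alpha = [[0.0] * n_states for _ in range(N + 1)]
    beta = [[0.0] * n_states for _ in range(N + 1)]
    alpha[0][0] = 1.0
    beta[N][0] = 1.0
    for k in range(N):              # alpha_{k+1}^m = sum_j alpha_k^{b(j,m)} * delta_k^{j,b(j,m)}
        for m in range(n_states):
            alpha[k + 1][m] = sum(alpha[k][b(j, m)] * delta[k][j][b(j, m)] for j in (0, 1))
    for k in range(N - 1, -1, -1):  # beta_k^m = sum_j beta_{k+1}^{f(j,m)} * delta_k^{j,m}
        for m in range(n_states):
            beta[k][m] = sum(beta[k + 1][f(j, m)] * delta[k][j][m] for j in (0, 1))
    return alpha, beta
```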
20
Subsequent steps
Calculate the LLR for all times k:
L(d̂_k) = log [ Σ_m α_k^m · δ_k^{1,m} · β_{k+1}^{f(1,m)} / Σ_m α_k^m · δ_k^{0,m} · β_{k+1}^{f(0,m)} ]
Make a hard decision based on the LLR.
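Continuing the sketch, the LLR and hard decision can be computed from the α, β, and δ arrays produced above; this is an illustrative companion to forward_backward(), not code from the slides.

```python
import math

def map_llr(alpha, beta, delta, f, n_states, N):
    """L(d_k) = log[ sum_m alpha_k^m * delta_k^{1,m} * beta_{k+1}^{f(1,m)}
                    / sum_m alpha_k^m * delta_k^{0,m} * beta_{k+1}^{f(0,m)} ],
    followed by a hard decision on the sign."""
    llrs, bits = [], []
    for k in range(N):
        num = sum(alpha[k][m] * delta[k][1][m] * beta[k + 1][f(1, m)] for m in range(n_states))
        den = sum(alpha[k][m] * delta[k][0][m] * beta[k + 1][f(0, m)] for m in range(n_states))
        L = math.log(num / den)
        llrs.append(L)
        bits.append(1 if L > 0 else 0)
    return llrs, bits
```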
21
Iterative decoding steps
Likelihood ratio:
Λ(d_k) = [ Σ_m π_k^1 · α_k^m · exp{ x_k·u_k^1 + y_k·v_k^{1,m} } · β_{k+1}^{f(1,m)} ] / [ Σ_m π_k^0 · α_k^m · exp{ x_k·u_k^0 + y_k·v_k^{0,m} } · β_{k+1}^{f(0,m)} ]
With u_k^1 = +1 and u_k^0 = −1, the systematic term exp{ x_k·u_k^i } factors out of the sums:
Λ(d_k) = { π_k } · exp{ 2·x_k } · [ Σ_m α_k^m · exp{ y_k·v_k^{1,m} } · β_{k+1}^{f(1,m)} ] / [ Σ_m α_k^m · exp{ y_k·v_k^{0,m} } · β_{k+1}^{f(0,m)} ] = { π_k } · exp{ 2·x_k } · { π_k^e }
where π_k is the input a priori ratio and π_k^e is the extrinsic ratio carried by the parity terms.
LLR: L(d̂_k) = L(d_k) + 2·x_k + log[ π_k^e ]
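The decomposition above splits the soft output into an a priori part, a channel part (2·x_k for σ² = 1), and an extrinsic part; a small sketch of extracting the extrinsic term, with made-up numbers:

```python
def extrinsic_llr(L_total, L_apriori, xk, sigma2=1.0):
    """From L(d_k) = L_a(d_k) + 2*x_k/sigma^2 + log(pi_k^e), recover the extrinsic
    term by subtracting the a priori and channel parts from the soft output."""
    return L_total - L_apriori - 2.0 * xk / sigma2

print(extrinsic_llr(2.4, 0.0, 1.0))   # 0.4 = log(pi_k^e) in this illustration
```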
22
Iterative decoding
For the second iteration, the fixed a priori probability is replaced by the extrinsic probability fed back from the other decoder:
δ_k^{i,m} = π_k^{e,i} · exp{ x_k·u_k^i + y_k·v_k^{i,m} }
Calculate the LLR for all times k:
L(d̂_k) = log [ Σ_m α_k^m · δ_k^{1,m} · β_{k+1}^{f(1,m)} / Σ_m α_k^m · δ_k^{0,m} · β_{k+1}^{f(0,m)} ]
Make the hard decision based on the LLR after multiple iterations.
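The only change to the earlier branch-metric sketch is that the prior 0.5 becomes the extrinsic probability from the previous iteration; a minimal illustration (the value 0.7 is made up):

```python
import math

def branch_metric_iter(xk, yk, u, v, pi_e_i):
    """Second and later iterations: delta_k^{i,m} = pi_k^{e,i} * exp{x_k*u + y_k*v},
    with pi_k^{e,i} the extrinsic probability fed back from the other decoder."""
    return pi_e_i * math.exp(xk * u + yk * v)

print(branch_metric_iter(1.0, 0.8, +1, +1, pi_e_i=0.7))   # prior now favours d_k = 1
```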