1 Iterative decoding If the output of the outer decoder were reapplied to the inner decoder it would detect that some errors remained, since the columns would not be codewords of the inner code Iterative decoder: to reapply the decoded word not just to the inner code, but also to the outer, and repeat as many times as necessary. However, it is clear that this would be in danger of simply generating further errors. One further ingredient is required for the iterative decoder.

2 Soft-In, Soft-Out (SISO) decoding
The performance of a decoder is significantly enhanced if, in addition to the ‘hard decision’ made by the demodulator on the current symbol, some additional ‘soft information’ on the reliability of that decision is passed to the decoder. For example, if the received signal is close to a decision threshold (say, between 0 and 1) in the demodulator, then that decision has low reliability, and the decoder should be able to change it when searching for the most probable codeword. Making use of this information in a conventional decoder, known as soft-decision decoding, leads to a performance improvement of around 2 dB in most cases.
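
As a concrete illustration (not from the slides), the Python sketch below compares hard- and soft-decision decoding of a simple length-3 repetition code over an AWGN channel; the code, noise level, and mapping are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, sigma = 100_000, 0.8                             # illustrative block size and noise std

bits = rng.integers(0, 2, n_bits)
tx = np.repeat(2.0 * bits - 1.0, 3).reshape(n_bits, 3)   # BPSK, each bit sent 3 times
rx = tx + sigma * rng.normal(size=tx.shape)              # AWGN channel

# Hard decision: demodulate each copy to a bit first, then majority-vote.
hard = ((rx > 0).sum(axis=1) >= 2).astype(int)

# Soft decision: combine the raw channel values first, then decide once.
soft = (rx.sum(axis=1) > 0).astype(int)

print("hard-decision BER:", np.mean(hard != bits))
print("soft-decision BER:", np.mean(soft != bits))
```

With these settings the soft decoder's bit error rate comes out at roughly half the hard decoder's, illustrating the gain described above.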

3 SISO decoder
A SISO decoder is a component decoder that generates ‘soft information’ as well as making use of it. The soft information usually takes the form of a log-likelihood ratio (LLR) for each data bit. The likelihood ratio is the ratio of the probability that a given bit is ‘1’ to the probability that it is ‘0’. If we take the logarithm of this, its sign corresponds to the most probable hard decision on the bit (if positive, ‘1’ is most likely; if negative, ‘0’), while its absolute magnitude is a measure of our certainty about this decision.
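
A minimal Python sketch of this sign/magnitude reading of the LLR (the example values are illustrative; the probability mapping follows from the LLR definition above):

```python
import math

def hard_decision(llr: float) -> int:
    # Sign of the LLR gives the most probable bit: positive -> '1', negative -> '0'.
    return 1 if llr > 0 else 0

def reliability(llr: float) -> float:
    # Probability that the hard decision is correct: 1 / (1 + e^-|LLR|).
    return 1.0 / (1.0 + math.exp(-abs(llr)))

for llr in (4.2, -0.3):
    print(f"LLR {llr:+.1f} -> bit {hard_decision(llr)}, P(correct) = {reliability(llr):.3f}")
```

A large positive LLR like +4.2 decodes as ‘1’ with high confidence, while -0.3 decodes as ‘0’ with a probability of being correct only slightly above one half.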

4 Likelihood Functions
Bayes’ theorem:

$$P(d=i \mid x) = \frac{p(x \mid d=i)\,P(d=i)}{p(x)}, \qquad i = 1, \ldots, M$$

where $P(d=i \mid x)$ is the a posteriori probability (APP), $P(d=i)$ is the a priori probability, $p(x \mid d=i)$ is the conditional pdf of the received signal $x$, and $p(x)$ is the pdf of the received signal $x$.
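
As a numeric check with example values of my own, the sketch below evaluates each term of Bayes’ theorem for a binary symbol in Gaussian noise:

```python
import math

def gauss_pdf(x, mean, sigma):
    # p(x | d = mean) for a Gaussian channel, N(mean, sigma^2)
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

x, sigma = 0.4, 1.0                                      # illustrative received value, noise std
prior = {+1: 0.5, -1: 0.5}                               # P(d = i), a priori
lik = {i: gauss_pdf(x, i, sigma) for i in prior}         # p(x | d = i)
evidence = sum(lik[i] * prior[i] for i in prior)         # p(x), by total probability
app = {i: lik[i] * prior[i] / evidence for i in prior}   # P(d = i | x), the APP
print(app)                                               # the two posteriors sum to 1
```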

5 Maximum Likelihood
Let $d_k = +1, -1$ over an AWGN channel, with received statistic $x_k$. The likelihood functions are

$$\ell_1 = p(x_k \mid d_k = +1), \qquad \ell_2 = p(x_k \mid d_k = -1)$$

Maximum likelihood gives the hard-decision rule: choose $d_k = +1$ if $\ell_1 > \ell_2$; choose $d_k = -1$ if $\ell_2 > \ell_1$.
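
A minimal sketch of this rule under the same assumptions; for equal-variance Gaussian likelihoods centred at $\pm 1$ it reduces to taking the sign of $x_k$:

```python
import math

def gauss_pdf(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def ml_decide(x_k, sigma=1.0):
    l1 = gauss_pdf(x_k, +1.0, sigma)   # l1 = p(x_k | d_k = +1)
    l2 = gauss_pdf(x_k, -1.0, sigma)   # l2 = p(x_k | d_k = -1)
    return +1 if l1 > l2 else -1

assert ml_decide(0.2) == +1 and ml_decide(-0.7) == -1    # equivalent to sign(x_k)
```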

6 Maximum A Posteriori (MAP)
Let $d_k = +1, -1$ over an AWGN channel, with received statistic $x_k$ and hypotheses $H_1: d_k = +1$ and $H_2: d_k = -1$. The MAP rule is

$$P(d_k = +1 \mid x_k) \;\underset{H_2}{\overset{H_1}{\gtrless}}\; P(d_k = -1 \mid x_k)$$

7 MAP Likelihood Ratio Test
Expanding the MAP rule with Bayes’ theorem (the common factor $p(x_k)$ cancels):

$$p(x_k \mid d_k = +1)\,P(d_k = +1) \;\underset{H_2}{\overset{H_1}{\gtrless}}\; p(x_k \mid d_k = -1)\,P(d_k = -1)$$

or, equivalently, as a likelihood ratio test:

$$\frac{p(x_k \mid d_k = +1)}{p(x_k \mid d_k = -1)} \;\underset{H_2}{\overset{H_1}{\gtrless}}\; \frac{P(d_k = -1)}{P(d_k = +1)}$$
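
A sketch of the second form with an illustrative unequal prior, showing how MAP can overrule weak channel evidence where the ML rule (which assumes equal priors) would not:

```python
import math

def gauss_pdf(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def map_decide(x_k, p_plus=0.2, sigma=1.0):
    # Likelihood ratio compared against the prior ratio P(d=-1)/P(d=+1)
    ratio = gauss_pdf(x_k, +1.0, sigma) / gauss_pdf(x_k, -1.0, sigma)
    return +1 if ratio > (1 - p_plus) / p_plus else -1

# x_k = 0.3 favours +1 on channel evidence alone, but the prior tips it to -1:
print(map_decide(0.3))   # -1
```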

8 Log-Likelihood Ratio (LLR)
$$L(d \mid x) = \log\frac{P(d=+1 \mid x)}{P(d=-1 \mid x)} = \log\frac{p(x \mid d=+1)\,P(d=+1)}{p(x \mid d=-1)\,P(d=-1)} = \log\frac{p(x \mid d=+1)}{p(x \mid d=-1)} + \log\frac{P(d=+1)}{P(d=-1)} = L(x \mid d) + L(d)$$
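
For BPSK in AWGN the channel term works out to $L_c(x) = 2x/\sigma^2$ (a standard result, not derived on the slide); the sketch below checks the decomposition numerically with illustrative values:

```python
import math

def gauss_pdf(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

x, sigma, p_plus = 0.4, 0.8, 0.7                     # illustrative values
L_channel = math.log(gauss_pdf(x, +1.0, sigma) / gauss_pdf(x, -1.0, sigma))   # L(x|d)
L_prior = math.log(p_plus / (1 - p_plus))            # L(d)

assert math.isclose(L_channel, 2 * x / sigma ** 2)   # channel term is 2x/sigma^2 here
print("L(d|x) =", L_channel + L_prior)               # posterior LLR = channel + prior
```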

9 Log-Likelihood Ratio (LLR)
$$L(\hat d \mid x) = L(x \mid \hat d) + L(d), \qquad L'(\hat d) = L_c(x) + L(d)$$

where $L_c(x)$ is the LLR of the data at the demodulator output. The soft LLR output for a systematic code is

$$L(\hat d) = L'(\hat d) + L_e(\hat d) = L_c(x) + L(d) + L_e(\hat d)$$

where $L_e(\hat d)$ is the extrinsic LLR: the knowledge gained from the decoding process.

10 $L(\hat d) = L_c(x) + L(d) + L_e(\hat d)$
[Figure: SISO decoder block diagram. Inputs: the a priori value $L(d)$ and the channel value $L_c(x)$. Outputs: the extrinsic value $L_e(\hat d)$ and the detector a posteriori LLR value $L'(\hat d) = L_c(x) + L(d)$. The output LLR value is $L(\hat d) = L'(\hat d) + L_e(\hat d)$.]
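
A sketch of this bookkeeping in Python; `run_component_decoder` is a hypothetical stand-in for any real SISO component decoder:

```python
def siso_step(L_c, L_apriori, run_component_decoder):
    # run_component_decoder is a hypothetical SISO component decoder that
    # returns the full soft output L(d^) given channel and a priori LLRs.
    L_posterior = run_component_decoder(L_c, L_apriori)
    # Only the newly generated (extrinsic) information is passed on:
    L_extrinsic = L_posterior - L_c - L_apriori
    return L_posterior, L_extrinsic
```

Subtracting the inputs ensures that only information the decoder itself generated circulates between the component decoders, which keeps the iteration from simply reamplifying its own outputs.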

11 Iterative decoding algorithm for the product code
1. Set the a priori LLR $L(d) = 0$.
2. Decode horizontally and obtain $L_{eh}(\hat d) = L(\hat d) - L_c(x) - L(d)$.
3. Set $L(d) = L_{eh}(\hat d)$ for vertical decoding.
4. Decode vertically and obtain $L_{ev}(\hat d) = L(\hat d) - L_c(x) - L(d)$.
5. Set $L(d) = L_{ev}(\hat d)$ for horizontal decoding.
6. Repeat steps 2 to 5 until the decisions converge; the soft output is $L(\hat d) = L_c(x) + L_{eh}(\hat d) + L_{ev}(\hat d)$ (a runnable sketch of this loop follows below).
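
The following sketch runs these six steps on a toy product code built from (3,2) single-parity-check rows and columns (my choice of component code; the slides do not fix one). For a single parity check the MAP extrinsic LLR of each bit follows directly from the standard tanh rule, so steps 2 and 4 can compute $L_{eh}$ and $L_{ev}$ without forming $L(\hat d)$ explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.8                                          # illustrative noise std

def spc_extrinsic(L):
    # Extrinsic LLR of each bit of a single-parity-check codeword from the
    # other bits, via the tanh rule (inputs clipped for numerical safety).
    t = np.tanh(np.clip(L, -20, 20) / 2)
    return np.array([2 * np.arctanh(np.prod(np.delete(t, i))) for i in range(L.size)])

# Build one 3x3 product codeword: 2x2 data plus row and column parity bits.
data = rng.integers(0, 2, (2, 2))
cw = np.zeros((3, 3), dtype=int)
cw[:2, :2] = data
cw[:2, 2] = data.sum(axis=1) % 2                     # row parities
cw[2, :] = cw[:2, :].sum(axis=0) % 2                 # column parities (incl. check-on-check)

x = (2 * cw - 1) + sigma * rng.normal(size=(3, 3))   # BPSK over AWGN
Lc = 2 * x / sigma**2                                # channel LLRs
Leh = Lev = np.zeros((3, 3))                         # step 1: a priori L(d) = 0

for _ in range(4):                                   # steps 2-5, repeated
    Leh = np.array([spc_extrinsic(Lc[r] + Lev[r]) for r in range(3)])          # horizontal
    Lev = np.array([spc_extrinsic(Lc[:, c] + Leh[:, c]) for c in range(3)]).T  # vertical

L = Lc + Leh + Lev                                   # step 6: soft output
print("transmitted:", cw, "decoded:", (L > 0).astype(int), sep="\n")
```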

12 Iterative Decoder

13 Decoder Architectures
Decoders must operate much faster than the rate at which incoming data arrives, so that several iterations can be accommodated in the time between the arrivals of received data blocks. Alternatively, the architecture may be replaced by a pipeline structure, in which data and extrinsic information are passed to a new set of decoders while the first set processes the next data block. At some point the decoder may be deemed to have converged to the optimum decoded word, at which point the combination of extrinsic and intrinsic information can be used to find the decoded data. Usually a fixed number of iterations is used (between 4 and 10, depending on the type of code and its length), but it is also possible to detect convergence and terminate the iterations at that point, as in the sketch below.
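
A sketch of one common convergence test, reusing `Lc` and `spc_extrinsic` from the product-code sketch above: stop once the hard decisions are stable between iterations (checking the component-code parity equations would be an alternative):

```python
import numpy as np

Leh = Lev = np.zeros((3, 3))                         # restart with zero a priori LLRs
prev = None
for iteration in range(10):                          # hard cap of 10 iterations
    Leh = np.array([spc_extrinsic(Lc[r] + Lev[r]) for r in range(3)])
    Lev = np.array([spc_extrinsic(Lc[:, c] + Leh[:, c]) for c in range(3)]).T
    hard = (Lc + Leh + Lev) > 0
    if prev is not None and np.array_equal(hard, prev):
        break                                        # decisions stable: converged early
    prev = hard
```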

