Iterative Equalization


Iterative Equalization
Speaker: Michael Meyer (michi-meyer@gmx.de)
[Title slide shows the receiver block diagram: yk → Equalizer / Detector → s(xk) → Demapper → s(ck) → Deinterleaver → s(bk) → Decoder → âk, with the feedback path s'(bk) → Interleaver → s'(ck) → Mapper back to the equalizer.]

System Configuration and Receiver Structures
[Block diagram: ak → Encoder → bk → Interleaver → ck → Mapper → xk → Channel → yk, followed by one of three receiver structures.]
Receiver A: optimal detector
Receiver B: one-time equalization and detection
Receiver C: turbo equalization

Interleaver
Example for an interleaver: a 3-random interleaver for 18 code bits.
[Block diagram: ak → Encoder → bk → Interleaver → ck → Mapper → xk → Channel → yk.]
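The slide does not spell out the construction; one common reading of "3-random" is the S-random (spread) interleaver with S = 3, sketched here by greedy search. The function names and the retry strategy are our own choices, not from the talk.

```python
import random

def s_random_interleaver(n, s, seed=0, max_tries=2000):
    """Greedy search for an S-random (spread) permutation: any two
    positions less than s apart in the output map to input indices
    at least s apart.  Retries with a fresh shuffle on failure."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidates = list(range(n))
        rng.shuffle(candidates)
        perm = []
        while candidates:
            recent = perm[-(s - 1):] if s > 1 else []
            pick = next((c for c in candidates
                         if all(abs(c - p) >= s for p in recent)), None)
            if pick is None:
                break                      # dead end: reshuffle and retry
            perm.append(pick)
            candidates.remove(pick)
        if len(perm) == n:
            return perm
    raise RuntimeError("no S-random permutation found")

def interleave(seq, perm):
    """c_i = b_{perm[i]}"""
    return [seq[p] for p in perm]

def deinterleave(seq, perm):
    """Inverse permutation of interleave()."""
    out = [None] * len(seq)
    for i, p in enumerate(perm):
        out[p] = seq[i]
    return out
```

For the slide's parameters, `s_random_interleaver(18, 3)` yields one such permutation of the 18 code bits.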

Equalization
Multi-path propagation between transmitter and receiver can lead to intersymbol interference (ISI). Equalization comprises the methods used to compensate for these channel effects.

Channel Model
In the following, we will use an AWGN channel with known channel impulse response (CIR). The received signal is given by

    yk = Σl hl xk−l + nk    (channel coefficients hl, sent signal xk, noise nk)

or, in matrix form, y = Hx + n. As an example, we use a length-three channel with h0 = 0.407, h1 = 0.815, h2 = 0.407. The noise nk is Gaussian.
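A minimal simulation sketch of this channel model for the example CIR (the function name and the noise level are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.407, 0.815, 0.407])     # CIR from the example

def isi_channel(x, h, sigma):
    """y_k = sum_l h_l x_{k-l} + n_k with AWGN of std sigma
    (symbols before k = 0 are taken as zero)."""
    v = np.convolve(x, h)[: len(x)]     # noise-free channel output
    return v + sigma * rng.normal(size=len(x))

x = 1 - 2.0 * rng.integers(0, 2, size=8)   # BPSK symbols +-1
y = isi_channel(x, h, sigma=0.3)
```

With an all-ones input, the steady-state noise-free output is h0 + h1 + h2 = 1.629.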

The Forward / Backward Algorithm
For Receiver B, the forward/backward algorithm is often used for equalization and decoding. As this algorithm is a basic building block for our turbo equalization setup, we will discuss it in detail, first for equalization and then for decoding. We will continue our example to make things clear; it uses binary phase shift keying (BPSK).

Receiver B
[Receiver chain: yk → Equalizer / Detector → s(xk) → Demapper → Deinterleaver → Decoder → âk.]
The decision rule for the equalizer is ĉk = 1 if L(ck|y) ≥ 0 and ĉk = 0 otherwise, with the log-likelihood ratio

    L(ck|y) = ln [ P(ck = 1 | y) / P(ck = 0 | y) ]

So, we have to calculate L(c|y).

The Trellis Diagram (1)
[Trellis section: states ri ∈ {(−1,−1), (1,−1), (−1,1), (1,1)} at time k connect to states rj at time k+1; each branch is labeled with input xk = xi,j and output vk = vi,j.]
A branch of the trellis is a four-tuple (i, j, xi,j, vi,j).

The Trellis Diagram (2)
If the tapped delay line contains L elements and we use a binary alphabet {+1, −1}, the channel can be in one of 2^L states ri; for our example, 2^L = 4. The set of possible states is S = {r0, r1, …, r_{2^L−1}}. At each time instance k = 1, 2, …, N the state of the channel is a random variable sk ∈ S.

The Trellis Diagram (3)
Using a binary alphabet, a given state sk = ri can only develop into two different states sk+1 = rj, depending on the input symbol xk = xi,j ∈ {+1, −1}. The output symbol vk = vi,j in the noise-free case is easily calculated from the CIR, e.g.

    vi,j = h0·xk + h1·xk−1 + h2·xk−2 = 0.407·1 + 0.815·1 + 0.407·(−1) = 0.815
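The branch outputs vi,j of all eight valid branches can be tabulated in a few lines. The state encoding as the tuple (xk−1, xk−2) is our own choice for this sketch:

```python
from itertools import product

h = [0.407, 0.815, 0.407]
# state r = (x_{k-1}, x_{k-2}); branch input is x_k
states = list(product([+1, -1], repeat=2))
branches = {}
for (x1, x2) in states:
    for x0 in (+1, -1):
        next_state = (x0, x1)
        v = h[0] * x0 + h[1] * x1 + h[2] * x2   # noise-free output
        branches[((x1, x2), x0)] = (next_state, v)
```

The slide's example corresponds to state (1, −1) with input +1, giving v = 0.815.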

The Trellis Diagram (4)
xi,j and vi,j are uniquely identified by the index pair (i, j). The set of all index pairs (i, j) corresponding to valid branches is denoted B, e.g. B = {(0,0), (0,1), (1,2), (1,3), (2,0), (2,1), (3,3), (3,2)}.

The Joint Distribution p(sk, sk+1, y)
As we are separating the equalization from the decoding task, we assume that the random variables xk are statistically independent (i.i.d.), hence P(x) = Πk P(xk). We then have to calculate P(sk = ri, sk+1 = rj | y), the probability that the path of the transmitted sequence through the trellis contains the branch (i, j, xi,j, vi,j) at time instance k. This a posteriori probability (APP) can be computed efficiently with the forward/backward algorithm, based on a suitable decomposition of the joint distribution p(sk, sk+1, y) = p(y) · P(sk, sk+1 | y).

The Decomposition
[Trellis path: states s0, s1, …, sk, sk+1, …, sN−1, sN, with observation yk on the transition sk → sk+1.]
We can write the joint distribution as p(sk, sk+1, y) = p(sk, sk+1, (y1,…,yk−1), yk, (yk+1,…,yN)) and decompose it into three factors,

    p(sk, sk+1, y) = αk(sk) · γk(sk, sk+1) · βk+1(sk+1)

where αk(sk) contains all paths through the trellis leading to state sk, γk(sk, sk+1) is the probability of the transition from sk to sk+1 given the symbol yk, and βk+1(sk+1) contains all possible paths from state sk+1 to sN.

The Transition Probability γ
We can further decompose the transition probability into

    γk(sk, sk+1) = P(sk+1 | sk) · p(yk | sk, sk+1)

Using the index pair (i, j) and the set B, the first factor equals the a priori probability P(xk = xi,j) if (i, j) ∈ B and zero otherwise. From the channel law yk = vk + nk and the Gaussian noise distribution, p(yk | sk, sk+1) is a Gaussian density centered at vi,j. For the example:

    γk(r0, r3) = 0, as (0,3) ∉ B
    γk(r0, r0) = P(xk = +1) · p(yk | vk = 1.63)
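For the example channel, γk of a valid branch can be computed directly from the Gaussian noise density. A small sketch; the noise standard deviation here is an arbitrary illustrative choice:

```python
import math

h = [0.407, 0.815, 0.407]
sigma = 0.5                      # assumed noise std, for illustration

def gaussian_pdf(y, v, sigma):
    """p(y_k | v) for AWGN: Gaussian density centered at the branch output v."""
    return math.exp(-(y - v) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def gamma(yk, v, p_x):
    """gamma_k = P(x_k) * p(y_k | v) for a valid branch (0 for invalid ones)."""
    return p_x * gaussian_pdf(yk, v, sigma)

# Branch from r0 = (+1, +1) with input x_k = +1: v = h0 + h1 + h2 = 1.629
v = h[0] + h[1] + h[2]
g = gamma(1.6, v, 0.5)
```

An observation close to the branch output gives a large γ; distant observations are penalized exponentially.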

The Probability  (Forward) The term k(s) can be computed via the recursion with the initial value 0(s) = P(s0=s). 2(s2) 1(s1) 1(s1,s2) ri rj … so s1 s2 Note: k contains all possible paths leading to sk.

The Probability β (Backward)
Analogously, the term βk(s) can be computed via the recursion

    βk(s) = Σ_{s' ∈ S} γk(s, s') · βk+1(s')

with the initial value βN(s) = 1 for all s ∈ S.

The Formula For The LLR
Now we know the branch APPs P(sk, sk+1 | y). To obtain the symbol APP P(xk = x | y), we sum the branch APPs over all branches that correspond to the input symbol xk = x, and form the LLR

    L(xk|y) = ln [ P(xk = +1 | y) / P(xk = −1 | y) ]

To compute the APP P(xk = +1 | y), the branch APPs of the index pairs (0,0), (1,2), (2,0) and (3,2) have to be summed; the remaining pairs in B correspond to xk = −1.

The FBA in Matrix Form
For convenience, the forward/backward algorithm may also be expressed in matrix form. We need two matrices per time step:

    Pk ∈ R^{|S|×|S|} with {Pk}i,j = γk(ri, rj)
    A(x) ∈ R^{|S|×|S|} with {A(x)}i,j = 1 if (i, j) ∈ B and xi,j = x, and 0 otherwise

A third matrix is created by elementwise multiplication: Bk(x) = A(x) ⊙ Pk ∈ R^{|S|×|S|}.

The Algorithm
Input: matrices Pk and Bk(x), where Pk describes the transition sk−1 → sk carrying yk
We calculate vectors fk ∈ R^{|S|×1} and bk ∈ R^{|S|×1}:
Initialize with f0 = 1 and bN = 1
For k = 1 to N step 1 (forward): fk = Pk^T fk−1
For k = N to 1 step −1 (backward): bk−1 = Pk bk
Output the LLRs:

    L(xk|y) = ln [ fk−1^T Bk(+1) bk / fk−1^T Bk(−1) bk ]
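A sketch of the matrix-form algorithm for the example channel. The function names, the noise level, the per-step normalization (to avoid underflow) and the known-start-state assumption are our own choices; note that with {Pk}i,j = γk(ri, rj) the forward recursion multiplies by the transpose of Pk:

```python
import numpy as np
from itertools import product

h = np.array([0.407, 0.815, 0.407])
sigma = 0.3
STATES = list(product([+1, -1], repeat=2))   # r = (x_{k-1}, x_{k-2})
S = len(STATES)

def build_matrices(yk, p_plus=0.5):
    """P with {P}_{i,j} = gamma_k(r_i, r_j); A[x] marks branches with input x."""
    P = np.zeros((S, S))
    A = {+1: np.zeros((S, S)), -1: np.zeros((S, S))}
    for i, (x1, x2) in enumerate(STATES):
        for x0 in (+1, -1):
            j = STATES.index((x0, x1))
            v = h[0] * x0 + h[1] * x1 + h[2] * x2
            prior = p_plus if x0 == +1 else 1.0 - p_plus
            P[i, j] = prior * np.exp(-(yk - v) ** 2 / (2 * sigma ** 2))
            A[x0][i, j] = 1.0
    return P, A

def fba_llr(y, start=(+1, +1)):
    """Forward/backward algorithm in matrix form; returns L(x_k | y)."""
    N = len(y)
    mats = [build_matrices(yk) for yk in y]
    f = [np.zeros(S) for _ in range(N + 1)]
    b = [np.ones(S) for _ in range(N + 1)]
    f[0][STATES.index(start)] = 1.0          # known initial channel state
    for k in range(1, N + 1):                # forward: f_k = P_k^T f_{k-1}
        f[k] = mats[k - 1][0].T @ f[k - 1]
        f[k] /= f[k].sum()                   # normalize to avoid underflow
    for k in range(N - 1, -1, -1):           # backward: b_k = P_{k+1} b_{k+1}
        b[k] = mats[k][0] @ b[k + 1]
        b[k] /= b[k].sum()
    llr = []
    for k in range(N):
        P, A = mats[k]
        num = f[k] @ ((A[+1] * P) @ b[k + 1])   # B_k(+1) = A(+1) (elementwise) P_k
        den = f[k] @ ((A[-1] * P) @ b[k + 1])
        llr.append(np.log(num / den))
    return np.array(llr)
```

On a noise-free observation with matching initial state, the signs of the returned LLRs reproduce the transmitted BPSK symbols.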

Soft Processing
[Receiver chain: yk → Equalizer / Detector → s(xk) → Demapper → s(ck) → Deinterleaver → s(bk) → Decoder → âk.]
A natural choice for the soft information s(xk) are the APPs P(xk|y) or, equivalently, the LLRs L(ck|y), which are a "side product" of the maximum a posteriori (MAP) symbol detector. The Viterbi equalizer may also produce approximations of L(ck|y). For filter-based equalizers, extracting s(xk) is more difficult; a common approach is to assume that the estimation error ek is Gaussian distributed with PDF p(ek).

Decoding - Basics
Convert the LLRs L(ck|y) back to probabilities P(ck|y). Deinterleaving P(ck|y) yields P(bk|y), the input set of probabilities to the decoder. With the forward/backward algorithm we may again calculate the LLRs L(ak|p). For the example, we use the encoder of a convolutional code, where each incoming data bit ak yields two code bits (b2k−1, b2k) via

    b2k−1 = ak ⊕ ak−2
    b2k = ak ⊕ ak−1 ⊕ ak−2
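A minimal sketch of this encoder (the two generator polynomials above correspond to the standard nonrecursive (5,7) convolutional code). The two appended zero tail bits, our addition here, flush the memory so the trellis terminates in the zero state, as assumed later in the decoding algorithm:

```python
def conv_encode(a):
    """Rate-1/2 encoder: b_{2k-1} = a_k ^ a_{k-2}, b_{2k} = a_k ^ a_{k-1} ^ a_{k-2},
    with two zero tail bits appended to terminate in the all-zero state."""
    a = list(a) + [0, 0]
    b = []
    m1 = m2 = 0                  # memory: a_{k-1}, a_{k-2}
    for ak in a:
        b.append(ak ^ m2)        # b_{2k-1}
        b.append(ak ^ m1 ^ m2)   # b_{2k}
        m1, m2 = ak, m1
    return b
```

The impulse response of the encoder, `conv_encode([1, 0, 0])`, starts with the code bits 1,1, 0,1, 1,1, which reads off the generators (5, 7) in octal.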

Decoding - Trellis
[Trellis section: states ri ∈ {(0,0), (1,0), (0,1), (1,1)} connect to states rj; each branch is labeled with input ak = ai,j and output (b2k−1, b2k) = (b1,i,j, b2,i,j).]
The convolutional code leads to a new trellis with branches denoted by the tuple (i, j, ai,j, b1,i,j, b2,i,j). The set B remains {(0,0), (0,1), (1,2), (1,3), (2,0), (2,1), (3,3), (3,2)}.

Decoding - Formulas (1)
To apply the forward/backward algorithm, we have to adjust the way Pk and A(x) are formed. For Pk, we redefine the transition probability γ: instead of the channel likelihood, it is now built from the bitwise probabilities P(b2k−1|y) and P(b2k|y) provided by the equalizer, together with the a priori probability of the data bit. As before:

    {Pk}i,j = γk(ri, rj)
    Ba(x) = Aa(x) ⊙ Pk

Decoding - Formulas (2)
So, we calculate L(ak|p) using the forward/backward algorithm. By changing A(x), selecting branches according to b1,i,j for L(b2k−1|p) or according to b2,i,j for L(b2k|p), we can also calculate L(b2k−1|p) and L(b2k|p), which will later serve as a priori information for the equalizer.

Decoding - Example
[Numerical example: bar plots of the resulting LLRs L(ak|p), L(b2k−1|p) and L(b2k|p) for the example sequence.]

Decoding - Algorithm
For decoding, we may use the same forward/backward algorithm with a different initialization, as the encoder starts and terminates in the zero state (at k = 0 and k = K), and with Bk(x) changed to output L(b2k−1|p) or L(b2k|p).
Input: matrices Pk and Bk(x)
Initialize with f0 = [1 0 … 0]^T ∈ R^{|S|×1} and bN = [1 0 … 0]^T ∈ R^{|S|×1}
For k = 1 to N step 1 (forward): fk = Pk^T fk−1
For k = N to 1 step −1 (backward): bk−1 = Pk bk
Output the LLRs as before.

Bit Error Rate (BER)
With soft information, we gain about 2 dB, but it is still a long way to the ultimate limit of −1.6 dB.
[Figure: performance of separate equalization and decoding with hard estimates (dashed lines) or soft information (solid lines).]
The system transmits K = 512 data bits and uses a 16-random interleaver to scramble N = 1024 code bits.

Block Diagram - Separated Concept
[Block diagram of the f/b algorithm: the equalizer (forward/backward algorithm) maps the observations y and prior probabilities to a posteriori probabilities L(ck|y); after deinterleaving, L(bk|y) enters the decoder (forward/backward algorithm), which outputs L(ak|p) to the decision rule producing âk.]
Let's look again at the transition probability γ: it combines local evidence about which branch in the trellis was traversed with prior information. So far, the equalizer does not have any prior knowledge available, so the formation of the entries in Pk relies solely on the observation y. The decoder forms the corresponding entries in Pk without any local observations, entirely based on the bitwise probabilities P(bk|y) provided by the equalizer.

Block Diagram - Turbo Equalization
[Block diagram: as in the separated concept, but the equalizer output L(ck|y) has the prior Lext(ck|p) subtracted to form the extrinsic information Lext(ck|y), which is deinterleaved to Lext(bk|y) and fed to the decoder. The decoder output L(bk|p) has Lext(bk|y) subtracted to form Lext(bk|p), which is interleaved and fed back to the equalizer as prior information. The decoder additionally outputs L(ak|p) to the decision rule producing âk.]
Now the equalizer does have prior knowledge available: the entries in Pk combine the local evidence from the observation y with the interleaved extrinsic information fed back from the decoder.

Block Diagram - Comparison
[Side-by-side block diagrams of Receiver B (separated equalization and detection) and Receiver C (turbo equalization): both use the same forward/backward equalizer and decoder blocks, but Receiver C adds the two extrinsic-information subtractions and the interleaved feedback path from the decoder back to the equalizer.]

Turbo Equalization - Calculation
Caution: we have to split L(ck|y) = Lext(ck|y) + L(ck), as only extrinsic information is fed back; Lext(ck|y) does not depend on L(ck). Feeding back L(ck) itself would create direct positive feedback, usually converging far from the globally optimal solution. The interleavers are included in the iterative update loop to further disperse the direct feedback effect: the forward/backward algorithm creates locally highly correlated output, and these correlations between neighboring symbols are largely suppressed by the interleaver.

Turbo Equalization - Algorithm
Input:
  Observation sequence y
  Channel coefficients hl for l = 0, 1, …, L
Initialize:
  Predetermine the number of iterations T
  Initialize the sequence of LLRs Lext(c|p) to 0
Compute recursively for T iterations (Lext(b|y) is the deinterleaved Lext(c|y), and Lext(c|p) the interleaved Lext(b|p)):
  L(c|y) = ForwardBackward(Lext(c|p))
  Lext(c|y) = L(c|y) − Lext(c|p)
  L(b|p) = ForwardBackward(Lext(b|y))
  Lext(b|p) = L(b|p) − Lext(b|y)
Output: compute the data bit estimates âk from L(ak|p)
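The update loop can be sketched with hypothetical stand-in soft modules: the toy "equalizer" and "decoder" below simply add a fixed evidence vector to their prior, so only the extrinsic bookkeeping and the interleaving are meant to be realistic. All names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
perm = rng.permutation(N)              # interleaver: c_i = b_{perm[i]}
inv = np.argsort(perm)                 # deinterleaver

# Hypothetical stand-ins for the two soft modules: each returns
# posterior LLR = prior + fixed local evidence.
eq_evidence = rng.normal(size=N)       # what the equalizer learns from y
dec_evidence = rng.normal(size=N)      # what the decoder learns from the code

def equalizer(prior):                  # L(c|y)
    return prior + eq_evidence

def decoder(prior):                    # L(b|p)
    return prior + dec_evidence

L_ext_cp = np.zeros(N)                 # Lext(c|p), initialized to 0
for _ in range(4):                     # T iterations
    L_cy = equalizer(L_ext_cp)         # L(c|y)
    L_ext_cy = L_cy - L_ext_cp         # subtract the prior: extrinsic only
    L_ext_by = L_ext_cy[inv]           # deinterleave
    L_bp = decoder(L_ext_by)           # L(b|p)
    L_ext_bp = L_bp - L_ext_by         # extrinsic again
    L_ext_cp = L_ext_bp[perm]          # interleave and feed back
a_hat = (L_bp < 0).astype(int)         # illustrative hard decisions
                                       # (a real receiver decides on L(a_k|p))
```

Because the stand-in modules are additive, the subtractions recover exactly the local evidence, which is the point of the extrinsic split: nothing a module was told as prior is handed back to it.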

Turbo Equalization - BER
The system transmits K = 512 data bits and uses a 16-random interleaver to scramble N = 1024 code bits. Figure A uses separate equalization and detection. Figure B uses turbo MMSE equalization with 0, 1, 2 and 10 iterations. Figure C uses turbo MAP equalization after the same numbers of iterations. The line marked with "x" shows the performance with K = 25000 and 40-random interleaving after 20 iterations.

Turbo Equalization - EXIT Charts [2]
[Receiver EXIT charts at ES/N0 = 4 dB and at ES/N0 = 0.8 dB.]

Linear Equalization
The computational effort is so far determined by the number of trellis states: an 8-ary alphabet already gives 8^L states in the trellis. Linear filter-based approaches perform only simple operations on the received symbols, usually applied sequentially to a subset of M observed symbols yk, e.g. yk = (yk−5 yk−4 … yk+5)^T with M = 11. A channel of length L can be expressed as yk = H xk + nk with the convolution matrix H ∈ R^{M×(M+L)}. Any type of linear processing of yk to compute an estimate x̂k can then be expressed as x̂k = f^T yk. The channel law immediately suggests inverting H, the zero-forcing approach; with noise present, this suffers from "noise enhancement", which can be severe if H is ill-conditioned. This effect can be avoided using linear minimum mean square error (MMSE) estimation, minimizing E{|xk − x̂k|²}. It is also possible to nonlinearly process previous estimates besides the linear processing of yk (decision-feedback equalization, DFE).
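A sketch of the MMSE filter computation under the usual unit-symbol-power assumption; here the noise variance, the window length M = 11 and the mid-window delay are illustrative choices, and f = (H H^T + σ² I)^{-1} H[:, d] is the standard no-prior MMSE solution, not necessarily the exact formula from the talk:

```python
import numpy as np

h = np.array([0.407, 0.815, 0.407])    # example CIR
sigma2 = 0.01                          # assumed noise variance
M = 11                                 # observation window length
L = len(h) - 1                         # channel memory
# Windowed channel model y = H x + n with H of size M x (M+L):
# row r implements y_{k+r} = h2 x_{k+r-2} + h1 x_{k+r-1} + h0 x_{k+r}
H = np.zeros((M, M + L))
for r in range(M):
    H[r, r:r + L + 1] = h[::-1]

d = (M + L) // 2                       # estimate the middle symbol of the window
# MMSE filter for unit symbol power: minimizes E{|x_{k+d} - f^T y|^2}
f = np.linalg.solve(H @ H.T + sigma2 * np.eye(M), H[:, d])
g = f @ H                              # combined filter-plus-channel response
```

The combined response g peaks at the desired delay d; the regularizing σ² term is exactly what keeps the near spectral null of this channel from blowing up the filter, in contrast to zero forcing.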

Complexity [2]

Approach              | Real multiplications                 | Real additions
MAP equalizer         | 3·2^(mM) + 2^m·2^(m(M−1))            | 3·2^(mM) + 2^(m−1)·2^(m(M−1))
exact MMSE LE         | 16N² + 4M² + 10M − 4N − 4            | 8N² + 2M² − 10N + 2M + 4
approx. MMSE LE (I)   | 4N + 8M                              | 4N + 4M − 4
approx. MMSE LE (II)  | 10M                                  | 10M − 2
MMSE DFE              |                                      |

M: channel impulse response length; N: equalizer filter length; 2^m: alphabet size of the signal constellation; DFE: decision-feedback equalization

Comparison and Ideas
The MMSE approaches have reduced complexity, and they perform as well as the BER-optimal MAP approach while only requiring a few more iterations. However, the MAP equalizer may handle SNR ranges where all other approaches fail.
Ideas:
- Treat scenarios with unknown channel characteristics, e.g. combined channel estimation and equalization using a priori information
- Switch between the MAP and MMSE algorithms depending on the fed-back soft information

Thank you for your attention! Questions & Comments ?

References
[1] Koetter, R.; Singer, A. C.; Tüchler, M.: "Turbo Equalization". IEEE Signal Processing Magazine, vol. 21, no. 1, pp. 67-80, Jan. 2004.
[2] Tüchler, M.; Koetter, R.; Singer, A. C.: "Turbo Equalization: Principles and New Results". IEEE Trans. Commun., vol. 50, pp. 754-767, May 2002.
[3] Tüchler, M.; Singer, A. C.; Koetter, R.: "Minimum Mean Squared Error Equalization Using A Priori Information". IEEE Trans. Signal Processing, vol. 50, pp. 673-683, March 2002.