Introduction To Equalization
Presented by: Guy Wolf, Roy Ron, Guy Shwartz (Advanced DSP, Dr. Uri Mahlab), HAIT

Basic Digital Communication System (block diagram): Information source → Pulse generator → Transmit filter H_T(f) → Channel H_c(f), with additive channel noise n(t) → Receiver filter H_R(f) → A/D → Digital processing. X(t) denotes the transmitted waveform and Y(t) the received waveform.

Explanation of ISI (figure): a pulse confined to one symbol period T_b has a wide spectrum; the band-limited channel truncates that spectrum, and the inverse Fourier transform of the truncated spectrum spreads in time across several symbol periods (T_b, 2T_b, ..., 6T_b), so neighbouring symbols interfere.

Reasons for ISI:
– The channel is band-limited in nature (physics, e.g. parasitic capacitance in twisted pairs): a limited frequency response => an unlimited time response.
– The Tx filter might add ISI when channel spacing is crucial.
– The channel has multipath reflections.

Channel Model
The channel is unknown, and is usually modeled as a tapped-delay line (an FIR filter).
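As a sketch, this tapped-delay-line model is just a convolution; the Python/NumPy snippet below simulates it, where the tap values and noise level are illustrative assumptions (the real channel is unknown):

    # Minimal tapped-delay-line (FIR) channel sketch; taps and noise level
    # are illustrative assumptions, since the real channel is unknown.
    import numpy as np

    rng = np.random.default_rng(0)
    h = np.array([1.0, 0.5, 0.2])            # assumed channel taps
    symbols = rng.choice([-1.0, 1.0], 100)   # BPSK symbols
    noise = 0.05 * rng.standard_normal(100 + len(h) - 1)

    # Received signal = convolution of the symbols with the taps, plus noise.
    received = np.convolve(symbols, h) + noise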

Example of Measured Channels
The amplitude of each channel tap varies randomly (changing multipath) and is usually modeled with a Rayleigh distribution in typical urban areas.

Equalizer: equalizes the channel, so that the received signal looks as if it had passed through a delta response.

Need For Equalization:
– Overcome ISI degradation.
Need For Adaptive Equalization:
– The channel changes in time.
=> Objective: find the inverse of the channel response, to present a 'delta channel' to the Rx.
* Applications (or standards) specify the channel types the receiver must cope with.

Zero-Forcing Equalizers (according to the Peak Distortion Criterion)
Cascade Tx → Channel → Equalizer, and force the combined response q to be a delta: no ISI. The equalizer taps are described by a matrix equation. Example: a 5-tap equalizer with 2/T sample rate, forcing q = (0, 0, 1, 0, 0).

Writing the equalizer taps as a vector C and the desired combined response as a vector q, the constraint is X C = q (X built from samples of the channel response), so C_opt = X^(-1) q.
Disadvantage: ignores the presence of additive noise (noise enhancement).
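A hedged sketch of that computation, with an assumed channel and a 5-tap equalizer (the channel samples and the centring of the constraints are illustrative, not from the slides):

    # Zero-forcing sketch: build the channel convolution matrix, constrain the
    # combined response around its main tap to a delta, and solve X c = q.
    import numpy as np

    x = np.array([0.3, 1.0, 0.4])       # assumed sampled channel response
    N = 5                               # number of equalizer taps
    L = len(x)

    # Full convolution matrix: q = A @ c is the combined channel+equalizer response.
    A = np.zeros((L + N - 1, N))
    for n in range(L + N - 1):
        for j in range(N):
            if 0 <= n - j < L:
                A[n, j] = x[n - j]

    d = (L + N - 2) // 2                # centre of the combined response
    rows = slice(d - N // 2, d - N // 2 + N)
    X = A[rows, :]                      # the square matrix from the slide: X c = q
    q = np.zeros(N); q[N // 2] = 1.0    # desired combined response: a delta
    c_opt = np.linalg.solve(X, q)       # C_opt = X^(-1) q

    print(np.round(A @ c_opt, 3))       # ~delta; residual ISI remains at the edges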

MSE Criterion
Minimize the mean square error between the desired signal and the received signal filtered by the equalizer; the equalizer filter response is the unknown parameter. Two solution families: the LS algorithm and the LMS algorithm.

LS – Least Squares Method:
– Unbiased estimator
– Exhibits minimum variance (optimal)
– No probabilistic assumptions (only a signal model)
– Presented by Gauss (1795) in studies of planetary motions

LS - Theory
The LS cost is J(c) = Σ_n |e(n)|^2 = (d − Xc)^H (d − Xc), where d is the desired signal vector and X the received-data matrix. Taking the derivative with respect to c and setting it to zero, ∂J/∂c = −2 X^H (d − Xc) = 0, gives the LS solution c_LS = (X^H X)^(−1) X^H d.

Substituting the LS solution back into the cost gives the minimum LS error:
J_min = d^H d − d^H X (X^H X)^(−1) X^H d,
i.e. the energy of the original signal minus the energy of the fitted signal. If the noise is small enough (SNR large enough), J_min ≈ 0.

LS : Pros & Cons Advantages: Optimal approximation for the Channel- once calculated it could feed the Equalizer taps. Disadvantages: heavy Processing (due to matrix inversion which by It self is a challenge) Not adaptive (calculated every once in a while and is not good for fast varying channels Adaptive Equalizer is required when the Channel is time variant (changes in time) in order to adjust the equalizer filter tap Weights according to the instantaneous channel properties.

LEAST-MEAN-SQUARE ALGORITHM
Contents:
– Introduction: approximating the steepest-descent algorithm
– Steepest-descent method
– Least-mean-square algorithm
– LMS algorithm convergence and stability
– Numerical example of channel equalization using LMS
– Summary

INTRODUCTION
Introduced by Widrow & Hoff in 1959. Simple: no matrix calculations are involved in the adaptation. In the family of stochastic gradient algorithms; an approximation of the steepest-descent method. Based on the MMSE (Minimum Mean Square Error) criterion. The adaptive process has two input signals: 1) the filtering process, producing the output signal; 2) the desired signal (training sequence). Adaptive process: recursive adjustment of the filter tap weights.

NOTATION
Input signal (vector): u(n)
Autocorrelation matrix of the input signal: R_uu = E[u(n) u^H(n)]
Desired response: d(n)
Cross-correlation vector between u(n) and d(n): p_ud = E[u(n) d*(n)]
Filter tap weights: w(n)
Filter output: y(n) = w^H(n) u(n)
Estimation error: e(n) = d(n) − y(n)
Mean square error: J = E[|e(n)|^2] = E[e(n) e*(n)]

SYSTEM BLOCK USING THE LMS
u[n] = input signal from the channel; d[n] = desired response; H[n] = training sequence generator; e[n] = error fed back between a) the desired response and b) the equalizer FIR filter output; W = FIR filter using the tap-weight vector.

STEEPEST DESCENT METHOD
The steepest-descent algorithm is a gradient-based method which builds a recursive solution to a problem (cost function). If the current equalizer tap vector is W(n), the next tap vector W(n+1) can be estimated by the approximation
W(n+1) = W(n) − μ ∇J(n).
The gradient is a vector pointing in the direction of the change in filter coefficients that would cause the greatest increase in the error signal. Because the goal is to minimize the error, the filter coefficients are updated in the direction opposite to the gradient; that is why the gradient term is negated. The constant μ is the step size. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge.

STEEPEST DESCENT EXAMPLE
Given a simple quadratic cost function of two coefficients c_1 and c_2, we need to obtain the vector that gives the absolute minimum; for such a function the minimizer is obvious by inspection. Now let us find the same solution by the steepest-descent method.

STEEPEST DESCENT EXAMPLE
We start by assuming (c_1 = 5, c_2 = 7). We then select the step-size constant μ: if it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. Here μ = 0.1 is selected. Computing the gradient vector of the cost at the current point, the iterative equation is c(n+1) = c(n) − μ ∇J(c(n)).

STEEPEST DESCENT EXAMPLE
As we can see, the vector [c_1, c_2] converges to the value that yields the function minimum, and the speed of this convergence depends on μ. (Figure: contour plot of the cost, showing the iterates moving from the initial guess to the minimum.)
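A sketch of this iteration; since the slide's exact cost function did not survive, a simple quadratic J(c) = c_1^2 + c_2^2 with its minimum at the origin is assumed, while the start point (5, 7) and μ = 0.1 are taken from the slide:

    # Steepest descent on an assumed quadratic cost J(c) = c1^2 + c2^2.
    import numpy as np

    mu = 0.1
    c = np.array([5.0, 7.0])         # initial guess from the slide
    for n in range(30):
        grad = 2 * c                 # gradient of the assumed quadratic
        c = c - mu * grad            # step opposite to the gradient
    print(c)                         # converges toward the minimum at (0, 0)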

MMSE CRITERION FOR THE LMS
MMSE: minimum mean square error. The MSE is
J(w) = E[|e(n)|^2] = E[|d(n) − w^H u(n)|^2] = σ_d^2 − p_ud^H w − w^H p_ud + w^H R_uu w.
To obtain the MMSE solution we differentiate the MSE with respect to w and compare it to 0:

MMSE CRITERION FOR THE LMS
∇J(w) = 2 R_uu w − 2 p_ud, and comparing the derivative to zero gives the MMSE solution:
w_opt = R_uu^(−1) p_ud.
This calculation is complicated for the DSP (it requires computing an inverse matrix) and can make the system unstable: if there are nulls in the spectrum, the inverse matrix can contain very large values. Moreover, the autocorrelation matrix of the input and the cross-correlation vector are not always known, so we would like to approximate them.
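A hedged sketch of this Wiener solution with the expectations replaced by sample averages (the channel, block length, and decision delay are illustrative assumptions):

    # Wiener solution w_opt = Ruu^{-1} pud, statistics estimated by averaging.
    import numpy as np

    rng = np.random.default_rng(2)
    d = rng.choice([-1.0, 1.0], 2000)
    u = np.convolve(d, [0.3, 1.0, 0.4])[:2000] + 0.05 * rng.standard_normal(2000)

    N, delay = 5, 2
    U = np.array([u[n - N + 1:n + 1][::-1] for n in range(N - 1, 2000)])
    target = d[N - 1 - delay:2000 - delay]           # desired response, delayed
    Ruu = U.T @ U / len(U)                           # estimate of E[u(n) u^H(n)]
    pud = U.T @ target / len(U)                      # estimate of E[u(n) d*(n)]
    w_opt = np.linalg.solve(Ruu, pud)                # the matrix inversion LMS avoids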

LMS – APPROXIMATION OF THE STEEPEST DESCENT METHOD
According to the MMSE criterion, the steepest-descent update is
W(n+1) = W(n) + 2μ [p_ud − R_uu W(n)].
We assume the following:
– The input vectors u(n), u(n−1), ..., u(1) are statistically independent.
– The input vector u(n) and desired response d(n) are statistically independent of d(n−1), ..., d(1).
– The input vector u(n) and desired response d(n) are Gaussian-distributed random variables.
– The environment is wide-sense stationary.
In LMS, the following instantaneous estimates are used:
R̂_uu = u(n) u^H(n) – autocorrelation matrix of the input signal;
p̂_ud = u(n) d*(n) – cross-correlation vector between u(n) and d(n).
Equivalently, we take the gradient of |e(n)|^2 instead of E[|e(n)|^2].

LMS ALGORITHM
Substituting the instantaneous estimates into the steepest-descent update, we get the final result:
W(n+1) = W(n) + 2μ e*(n) u(n), where e(n) = d(n) − W^H(n) u(n).
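A minimal LMS sketch of this update for real-valued signals (so e* = e); the channel, training data, and step size are illustrative assumptions, and note there are no matrix operations in the loop:

    # Minimal LMS loop: w(n+1) = w(n) + 2*mu*e(n)*u(n), real signals assumed.
    import numpy as np

    rng = np.random.default_rng(3)
    d = rng.choice([-1.0, 1.0], 2000)                # training sequence
    u = np.convolve(d, [0.3, 1.0, 0.4])[:2000] + 0.05 * rng.standard_normal(2000)

    N, delay = 5, 2
    mu = 1.0 / (5 * N * np.mean(u ** 2))             # conservative step size
    w = np.zeros(N)
    mse = []                                         # learning curve
    for n in range(N - 1, 2000):
        un = u[n - N + 1:n + 1][::-1]                # u(n), ..., u(n-N+1)
        e = d[n - delay] - w @ un                    # error vs. delayed training
        w = w + 2 * mu * e * un                      # LMS tap update
        mse.append(e ** 2)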

LMS STABILITY
The step size determines the algorithm's convergence rate. Too small a step size makes the algorithm take many iterations; too big a step size and the weight taps do not converge. A common rule of thumb is
0 < μ < 1 / (N · P_r),
where N is the equalizer length and P_r is the received power (signal + noise), which can be estimated in the receiver.

LMS – CONVERGENCE GRAPH
This graph illustrates the LMS algorithm. First we guess the tap weights; then we repeatedly step opposite to the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning an MSE of 0 or a value very close to it. (In practice we cannot get an error of exactly 0, because the noise is a random process; we can only decrease the error below a desired minimum.) The example is for an unknown channel of 2nd order; the iterates converge to the desired combination of taps.

LMS Convergence vs. μ (figure)

LMS – EQUALIZER EXAMPLE
Channel equalization example: the average squared error as a function of the number of iterations, for different channel transfer functions (i.e. a change of the required tap vector W).
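A sketch of such an experiment, reusing the LMS loop above and averaging the squared error over many runs for two assumed channels (tap values, step size, and run counts are illustrative):

    # Averaged LMS learning curves for two assumed channels.
    import numpy as np

    def lms_mse(h, runs=50, n_sym=500, N=5, delay=2, mu=0.01, seed=0):
        rng = np.random.default_rng(seed)
        acc = np.zeros(n_sym - N + 1)
        for _ in range(runs):
            d = rng.choice([-1.0, 1.0], n_sym)
            u = np.convolve(d, h)[:n_sym] + 0.05 * rng.standard_normal(n_sym)
            w = np.zeros(N)
            for i, n in enumerate(range(N - 1, n_sym)):
                un = u[n - N + 1:n + 1][::-1]
                e = d[n - delay] - w @ un
                w = w + 2 * mu * e * un
                acc[i] += e ** 2
        return acc / runs

    mse_mild = lms_mse(np.array([0.1, 1.0, 0.1]))    # mild ISI: fast convergence
    mse_severe = lms_mse(np.array([0.4, 1.0, 0.4]))  # severe ISI: slower, higher floor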

LMS: Pros & Cons
Advantages: simplicity of implementation; does not neglect the noise like the zero-forcing equalizer; bypasses the need to calculate an inverse matrix.
Disadvantages: slow convergence; demands the use of a training sequence as a reference, thus decreasing the communication BW.

Non-linear equalization
Linear equalization (reminder): tap-delayed equalization; the output is a linear combination of the equalizer input => behaves as an FIR filter.

Non-linear equalization – DFE (Decision-Feedback Equalization)
Block diagram: input → feedforward filter A(z) → summing node → receiver detector → output; the detected symbols are fed back through B(z) and subtracted at the summing node.
The nonlinearity is due to the detector characteristic that is fed back (the mapper). The decision feedback introduces poles in the z-domain.
Advantage: copes with larger ISI => behaves as an IIR filter.
Disadvantage: danger of instability.

Non-linear equalization – DFE (figure)
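A hedged DFE sketch matching the block diagram, with fixed toy filters A(z) and B(z) set to the assumed channel's postcursor taps (in practice both filters are adapted):

    # Decision-feedback equalizer sketch: slicer decisions are filtered by B(z)
    # and subtracted to cancel postcursor ISI; A(z) is a trivial single tap here.
    import numpy as np

    rng = np.random.default_rng(4)
    d = rng.choice([-1.0, 1.0], 1000)
    u = np.convolve(d, [1.0, 0.5, 0.3])[:1000] + 0.05 * rng.standard_normal(1000)

    a = np.array([1.0])                # feedforward taps A(z) (toy assumption)
    b = np.array([0.5, 0.3])           # feedback taps B(z): the postcursor ISI
    past = np.zeros(len(b))            # previous decisions
    decisions = np.empty(1000)
    for n in range(1000):
        ff = a[0] * u[n]               # feedforward part (single tap here)
        z = ff - b @ past              # subtract ISI rebuilt from past decisions
        dec = 1.0 if z >= 0 else -1.0  # nonlinear detector (slicer / mapper)
        decisions[n] = dec
        past = np.roll(past, 1)
        past[0] = dec                  # newest decision enters the feedback line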

Blind Equalization
ZF and MSE equalizers assume that a training sequence is available for learning the channel. What happens when there is none? => Blind equalization.
Block diagram: input → adaptive equalizer → decision device → output; the error signal is the difference between the decision and the equalizer output.
Blind equalization usually also employs: interleaving/deinterleaving; advanced coding; the ML criterion. Why? Blind equalization is hard and complicated enough! So if you are going to implement it, use the best blocks for decision (detection), and equalize with LMS.
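A sketch of that decision-directed loop: the detector's decision stands in for the missing training symbol, and its difference from the equalizer output drives an LMS update. The channel, step size, and "open-eye" initialization are illustrative assumptions; this only converges once decisions are mostly correct, which is why practical blind start-up uses more robust cost functions:

    # Decision-directed LMS: no training sequence, the slicer output is the
    # stand-in desired signal.
    import numpy as np

    rng = np.random.default_rng(5)
    d = rng.choice([-1.0, 1.0], 5000)
    u = np.convolve(d, [1.0, 0.3])[:5000] + 0.05 * rng.standard_normal(5000)

    N, mu = 5, 0.01
    w = np.zeros(N)
    w[0] = 1.0                              # "open eye" initialization (assumed)
    for n in range(N - 1, 5000):
        un = u[n - N + 1:n + 1][::-1]
        y = w @ un                          # equalizer output
        dec = 1.0 if y >= 0 else -1.0       # decision device
        e = dec - y                         # error signal from the diagram
        w = w + 2 * mu * e * un             # LMS update, no training needed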

Turbo Equalization
Iterative: estimate → equalize → decode → re-encode. A MAP equalizer (fed by the received signal r and a channel estimator) and a MAP decoder exchange extrinsic log-likelihood ratios: L_E^e(c′) from the equalizer, L_D^e(c) and L_D^e(c′) from the decoder, with L_D(c) and L_D(d) the decoder outputs on the code and data bits.
Turbo equalization usually also employs: interleaving/deinterleaving; turbo coding (an advanced iterative code); MAP detection (based on the ML criterion). Why? It is complicated enough! So if you are going to implement it, use the best blocks. Each iteration relies on a better estimate and therefore yields a more precise equalization.

Performance of Turbo Equalization vs. Iterations (figure)

ML criterion
MSE optimizes detection only up to 1st/2nd-order statistics. In Uri's class:
– Optimum detection: strongest survivor; correlation (matched filter) allows optimal performance for a delta channel and additive noise.
– Optimized detection maximizes the probability of detection (minimizes the error, or the Euclidean distance in signal space).
Let us find the optimal detection criterion in the presence of a channel with memory (ISI).

ML criterion – Cont.
Maximum likelihood: maximize the decision probability for the received trellis. Example, BPSK (NRZI): two possible transmitted signals, s = ±√E_b, where E_b is the energy per bit. The received signal is r_1 = s_1 + n with AWGN of variance N_0/2, so the conditional pdf (the probability of a correct decision on r_1 given that s_1 was transmitted) is
p(r_1 | s_1) = (1/√(π N_0)) · exp(−(r_1 − s_1)^2 / N_0).
For a sequence of symbols, the probability of a correct decision on the whole transmitted sequence is the product of the per-symbol conditional pdfs, and the optimal detector maximizes it.

ML – Cont.
With a logarithm operation, it can be shown that this is equivalent to minimizing the Euclidean distance metric of the sequence:
D(r, s) = Σ_k (r_k − s_k)^2.
How can this be used? It looks similar to MSE, but: while MSE minimizes the error (maximizes the probability) for a decision on a certain symbol, MLSE minimizes the error (maximizes the probability) for a decision on a certain trellis of symbols (called a metric).

Viterbi Equalizer (on the tip of the tongue)
Example for NRZI. Transmitted symbols: 0 = no change in the transmitted symbol, 1 = alter the symbol. Two states, S0 and S1; the path metric is the sum of Euclidean distances.

At every step we disqualify one of the two metrics entering each of the possible states S0 and S1, keeping only the survivors. Finally we are left with two options for the possible trellis, and we decide on the correct trellis using the accumulated Euclidean metric of each (or with a-posteriori data).
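A compact Viterbi sketch for this NRZI example, keeping one survivor per state and tracing back the winning trellis; the ±1 signal levels, noise level, and known start state are illustrative assumptions:

    # Viterbi for NRZI: two states (last level -1 or +1); bit 0 keeps the
    # level, bit 1 flips it. Branch metric = squared Euclidean distance.
    import numpy as np

    rng = np.random.default_rng(6)
    bits = rng.integers(0, 2, 20)

    # NRZI encoding: start at level +1, flip on a 1.
    levels = np.empty(20)
    s = 1.0
    for i, b in enumerate(bits):
        if b == 1:
            s = -s
        levels[i] = s
    r = levels + 0.4 * rng.standard_normal(20)      # AWGN channel (assumed)

    states = np.array([-1.0, 1.0])
    metric = np.array([np.inf, 0.0])                # known start state: +1
    back = np.zeros((20, 2), dtype=int)             # survivor pointers
    for n in range(20):
        new = np.empty(2)
        for j, sj in enumerate(states):
            # Both states reach state j (bit 0 from j, bit 1 from the other);
            # keep the cheaper predecessor, disqualify the other metric.
            cand = metric + (r[n] - sj) ** 2
            back[n, j] = int(np.argmin(cand))
            new[j] = cand[back[n, j]]
        metric = new

    # Trace back the surviving path and recover the bits (0 = same state).
    path = np.empty(20, dtype=int)
    j = int(np.argmin(metric))
    for n in range(19, -1, -1):
        path[n] = j
        j = back[n, j]
    hat_bits = []
    prev = 1                                        # index of start state (+1)
    for p in path:
        hat_bits.append(0 if p == prev else 1)
        prev = p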