Techniques to Mitigate Fading Effects
Lecture 8, 5/21/2013, Omar Abu-Ella
Introduction
Wireless communications require signal processing techniques that improve the link performance. Equalization, diversity, and channel coding are the principal techniques for mitigating channel impairments.
Equalization
Equalization compensates for the intersymbol interference (ISI) created by multipath propagation. An equalizer is a filter at the receiver whose impulse response approximates the inverse of the channel impulse response. Equalizers find their use in frequency-selective fading channels.
Diversity
Diversity is another technique used to compensate for fast and slow fading, and it is usually implemented using two or more receiving dimensions. Macro-diversity mitigates large-scale fading; micro-diversity mitigates small-scale fading. Common forms are space diversity, time diversity, frequency diversity, angular diversity, and polarization diversity.
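A small Monte-Carlo sketch of the diversity idea (the branch count and mean SNR below are illustrative assumptions, not values from the lecture): with selection combining over independent Rayleigh-faded branches, the receiver picks the strongest branch, and the average output SNR grows with the number of branches.

```python
import numpy as np

rng = np.random.default_rng(5)

# Under Rayleigh fading, the instantaneous per-branch SNR is exponentially distributed.
trials, mean_snr = 100_000, 1.0
avg = {}
for M in (1, 2, 4):
    snr = rng.exponential(mean_snr, size=(trials, M))  # per-branch instantaneous SNR
    avg[M] = snr.max(axis=1).mean()                    # selection combining: best branch wins
    print(M, round(avg[M], 2))
```

For exponential branches the expected selection-combining output SNR is the harmonic number H_M times the mean, so the printed averages grow as roughly 1, 1.5, 2.08.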
Channel Coding
Channel coding improves wireless communication link performance by adding redundant bits to the transmitted message. In the baseband portion of the transmitter, a channel coder maps the digital message sequence into a code sequence containing a greater number of bits than the original message. Channel coding is used to combat deep fades and spectral nulls.
General Framework
Each impairment is matched to a mitigation technique: equalization addresses frequency-selective fading, diversity addresses fast and slow fading, and channel coding addresses deep fading.
Equalization
ISI has been identified as one of the major obstacles to high-speed data transmission over mobile radio channels. If the modulation bandwidth exceeds the coherence bandwidth of the radio channel (i.e., the fading is frequency selective), the modulation pulses are spread in time, causing ISI.
Classification: a time-varying wireless channel requires adaptive equalization. Adaptive equalizers fall into two major categories: non-blind and blind. A non-blind adaptive equalizer has two phases of operation: training and tracking.
Linear vs. Nonlinear Equalizers
Adaptive Equalizers
Classification of Equalizers
Non-blind vs. blind equalizers: non-blind adaptive equalization algorithms rely on statistical knowledge about the transmitted signal in order to converge to a solution, i.e., the optimum filter coefficients ("weights"). This is typically accomplished by sending a pilot training sequence over the channel so that the receiver can identify the desired signal.
Blind adaptive equalization algorithms do not require prior training, and hence are referred to as "blind" algorithms. They attempt to extract significant characteristics of the transmitted signal in order to separate it from other signals in the surrounding environment.
Training Sequence
Initially, a known, fixed-length training sequence is sent by the transmitter so that the receiver's equalizer can adapt to a proper setting. The training sequence is typically a pseudo-random binary signal or a fixed, prescribed bit pattern. It is designed to permit the equalizer at the receiver to acquire the proper filter coefficients under the worst possible channel conditions. An adaptive filter at the receiver thus uses a recursive algorithm to evaluate the channel and estimate the filter coefficients that compensate for it.
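As an illustration of a pseudo-random training pattern, the sketch below generates an m-sequence from a 5-stage linear feedback shift register; the register length, feedback taps, and seed are assumptions for illustration, not values from the lecture.

```python
def pn_bits(n, state=0b10101):
    """Generate n bits from a 5-stage Fibonacci LFSR (maximal period 31)."""
    bits = []
    for _ in range(n):
        bits.append(state & 1)                  # output the least significant bit
        fb = ((state >> 0) ^ (state >> 2)) & 1  # feedback from two taps (illustrative choice)
        state = (state >> 1) | (fb << 4)        # shift right, insert feedback at the top
    return bits

train = pn_bits(31)
print(sum(train))  # one full period of a length-31 m-sequence contains 16 ones
```

The near-balance of ones and zeros and the sharp autocorrelation peak of such sequences are what let the equalizer excite all channel frequencies during training.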
A Mathematical Framework
The signal received by the equalizer can be written as

y(t) = d(t) ⊛ h(t) + n_b(t)

where d(t) is the transmitted signal, h(t) is the combined impulse response of the transmitter, the channel, and the RF/IF section of the receiver, and n_b(t) denotes the baseband noise. The main goal of any equalization process is to satisfy, as nearly as possible,

h(t) ⊛ h_eq(t) = δ(t)

where h_eq(t) is the equalizer impulse response. In the frequency domain this can be written as

H_ch(f) H_eq(f) = 1

which indicates that an equalizer is actually an inverse filter of the channel.
Zero Forcing Equalization
A zero-forcing equalizer sets H_eq(f) = 1 / H_ch(f).
Disadvantage: since H_eq(f) is the inverse of H_ch(f), the inverse filter may excessively amplify the noise at frequencies where the channel spectrum has high attenuation. Zero forcing is therefore rarely used for wireless links, except for static channels with high SNR.
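As a quick numerical check of the inverse-filter idea, the sketch below builds a frequency-domain zero-forcing equalizer for a hypothetical 3-tap channel; the tap values and FFT size are illustrative assumptions, not values from the lecture.

```python
import numpy as np

h_ch = np.array([1.0, 0.4, 0.2])        # assumed mild multipath channel

# Zero forcing in the frequency domain: H_eq(f) = 1 / H_ch(f).
N = 64
H_ch = np.fft.fft(h_ch, N)
H_eq = 1.0 / H_ch

# The combined channel + equalizer response should be a unit impulse.
combined = np.fft.ifft(H_ch * H_eq).real
print(np.round(combined[:4], 6))

# The noise gain at each frequency is |H_eq(f)|: it blows up wherever
# the channel spectrum has a deep attenuation (the ZF disadvantage).
print(round(float(np.max(np.abs(H_eq))), 2))
```

For this mild channel the noise gain stays modest, but a channel with a spectral null would drive `max |H_eq|` toward infinity, which is exactly why ZF is avoided on fading links.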
A Generic Adaptive Equalizer
Adaptive Equalizer
Denote the input to the equalizer by the vector x_k = [x_k, x_{k-1}, ..., x_{k-N}]^T and the tap-coefficient vector by w_k = [w_0, w_1, ..., w_N]^T. The output sequence of the equalizer, y_k, is the inner product of x_k and w_k:

y_k = x_k^T w_k

The error signal is defined as e_k = d_k - y_k, where d_k is the desired response.
Mean Square Error
Assuming d_k and x_k to be jointly stationary, the mean square error (MSE) is E[e_k^2] = E[(d_k - y_k)^2]. Expanding, the MSE can be expressed as

MSE = σ_d^2 + w_k^T R w_k - 2 p^T w_k

where the signal variance is σ_d^2 = E[d_k^2], the cross-correlation vector between the desired response and the input signal is p = E[d_k x_k], and the input correlation matrix R = E[x_k x_k^T] is an (N + 1) × (N + 1) square matrix.
The Wiener Solution
Clearly, the MSE is a function of w_k. Setting the gradient of the MSE with respect to w_k to zero gives the condition for minimum MSE (MMSE), which is known as the Wiener solution:

w_opt = R^{-1} p

Hence, the MMSE is given by the equation

MMSE = σ_d^2 - p^T R^{-1} p
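The Wiener solution can be checked numerically with sample estimates of R and p. In this sketch the BPSK source, the 2-tap channel, and the noise level are illustrative assumptions, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d = rng.choice([-1.0, 1.0], size=5000)       # desired (transmitted) BPSK symbols
x = np.convolve(d, [1.0, 0.5])[:len(d)]      # assumed 2-tap channel
x += 0.1 * rng.standard_normal(len(d))       # additive noise

N = 4                                        # equalizer has N + 1 = 5 taps
# Build the tap-input vectors x_k = [x_k, x_{k-1}, ..., x_{k-N}]^T.
X = np.array([[x[k - i] if k - i >= 0 else 0.0 for i in range(N + 1)]
              for k in range(len(d))])

R = X.T @ X / len(d)            # sample input correlation matrix
p = X.T @ d / len(d)            # sample cross-correlation vector
w_opt = np.linalg.solve(R, p)   # Wiener solution: w = R^{-1} p

mmse = np.mean(d**2) - p @ w_opt  # MMSE = sigma_d^2 - p^T R^{-1} p
print(mmse)
```

The residual MMSE is small but strictly positive, dominated here by the additive noise and the truncation of the (infinite) inverse filter to five taps.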
Choice of Algorithms for Adaptive Equalization
Factors that determine an algorithm's performance:
Rate of convergence: the number of iterations required for the algorithm to converge close enough to the optimal solution.
Computational complexity: the number of operations required for one complete iteration of the algorithm.
Numerical properties: robustness against computation errors, which influences the stability of the algorithm.
Classic Equalizer Algorithms
Four classic equalizer algorithms underlie most of today's wireless standards:
Zero Forcing Algorithm (ZF)
Least Mean Square Algorithm (LMS)
Recursive Least Squares Algorithm (RLS)
Constant Modulus Algorithm (CMA)
MSE Criterion
The unknown parameter is the equalizer filter response. The cost is the mean square error between the desired signal and the received signal after filtering by the equalizer. This criterion underlies both the LS and LMS algorithms.
Least Mean Square (LMS) Algorithm
Introduced by Widrow and Hoff in 1960. It is simple: no matrix calculations are involved in the adaptation. It belongs to the family of stochastic gradient algorithms and approximates the steepest-descent method. It is based on the minimum mean square error (MMSE) criterion, and its adaptive process is a recursive adjustment of the filter tap weights.
Least Mean Square (LMS) Algorithm
In practice, the minimization of the MSE is carried out recursively by means of the stochastic gradient algorithm. LMS is the simplest equalization algorithm and requires only 2N + 1 operations per iteration. The LMS weights are computed iteratively as

w_k(n+1) = w_k(n) + μ e(n) x(n - k)

where the subscript k denotes the kth delay stage in the equalizer and μ is the step size, which controls the convergence rate and stability of the algorithm.
Notation
Input signal (vector): u(n)
Autocorrelation matrix of the input signal: R_uu = E[u(n) u^H(n)]
Desired response: d(n)
Cross-correlation vector between u(n) and d(n): P_ud = E[u(n) d*(n)]
Filter tap weights: w(n)
Filter output: y(n) = w^H(n) u(n)
Estimation error: e(n) = d(n) - y(n)
Mean square error: J = E[|e(n)|^2] = E[e(n) e*(n)]
System Block Diagram Using LMS
u[n]: input signal from the channel
d[n]: desired response
H[n]: training sequence generator
e[n]: error between the desired response and the equalizer (FIR filter) output
W: FIR filter tap-weight vector
Steepest Descent Method
The steepest descent algorithm is a gradient-based method that minimizes the cost function recursively. If the current equalizer tap vector is w(n), the next tap vector w(n+1) is estimated by the approximation

w(n+1) = w(n) - μ ∇J(n)

where ∇J(n) is the gradient of the cost function. The gradient is a vector pointing in the direction of the change in filter coefficients that would cause the greatest increase in the error signal. Because the goal is to minimize the error, the filter coefficients are updated in the direction opposite to the gradient; that is why the gradient term is negated. The constant μ is the step size. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge.
Steepest Descent Example
Given a cost function of two coefficients (the specific function appears as an equation on the original slide), we need to obtain the coefficient vector that gives its absolute minimum. Although the minimizing vector is obvious by inspection, let us find the same solution by the steepest descent method.
Steepest Descent Example
We start from the initial guess (c1 = 5, c2 = 7) and select the constant μ. If μ is too big, we overshoot the minimum; if it is too small, it takes many iterations to reach the minimum. Here we select μ = 0.1. The gradient vector, and hence the iterative update equation, follow directly from the cost function.
Steepest Descent Example
Starting from the initial guess, the vector [c1, c2] converges to the value that yields the function's minimum, and the speed of this convergence depends on μ.
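The slide's exact cost function is not reproduced in this transcript; as a stand-in, the sketch below runs the same iteration on the quadratic bowl J(c1, c2) = c1^2 + c2^2 (an assumed example), starting from the slide's initial guess (5, 7) with μ = 0.1.

```python
import numpy as np

c = np.array([5.0, 7.0])   # initial guess from the slide
mu = 0.1                   # step size from the slide

for _ in range(100):
    grad = 2 * c           # gradient of J(c1, c2) = c1^2 + c2^2
    c = c - mu * grad      # move opposite the gradient

print(np.round(c, 6))      # converges toward the minimum at (0, 0)
```

Each step multiplies the coefficients by (1 - 2μ) = 0.8, so the error decays geometrically; a larger μ converges faster until 2μ exceeds 1, after which the iteration oscillates or diverges.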
MMSE Criterion for LMS
MMSE stands for minimum mean square error. To obtain the MMSE we differentiate the MSE, J = E[|e(n)|^2], with respect to the weight vector and set the result to zero:

∇J = 2 (R w - P) = 0
MMSE Criterion for LMS
Equating the derivative to zero, we finally get the MMSE solution:

w_opt = R^{-1} P

This calculation is complicated for a DSP (it requires computing an inverse matrix) and can make the system unstable: if the channel spectrum contains nulls, the inverse matrix can contain very large values. Moreover, we do not always know the autocorrelation matrix of the input or the cross-correlation vector, so we would like to approximate them.
LMS: Approximation of the Steepest Descent Method
According to the MMSE criterion, the steepest descent update is

w(n+1) = w(n) + μ [P - R w(n)]

We make the following assumptions:
The input vectors u(n), u(n-1), ..., u(1) are statistically independent.
The input vector u(n) is statistically independent of the previous desired responses d(n-1), ..., d(1).
The input vector u(n) and the desired response d(n) are Gaussian-distributed random variables.
The environment is wide-sense stationary.
In LMS, the following instantaneous estimates are used:
R^ = u(n) u^H(n), estimating the autocorrelation matrix of the input signal.
P^ = u(n) d*(n), estimating the cross-correlation vector between u(n) and d(n).
Equivalently, we compute the gradient of |e(n)|^2 instead of E[|e(n)|^2].
LMS Algorithm
Substituting these instantaneous estimates into the steepest descent update, we get the final result:

w(n+1) = w(n) + μ u(n) e*(n)
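A minimal real-valued LMS training loop following the update above; the channel taps, tap count, step size, and noise level are assumptions for illustration, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)

d = rng.choice([-1.0, 1.0], size=3000)        # BPSK training symbols
u = np.convolve(d, [1.0, 0.4])[:len(d)]       # assumed 2-tap channel
u += 0.05 * rng.standard_normal(len(d))       # additive noise

N = 7                                         # number of equalizer taps
w = np.zeros(N)
mu = 0.02                                     # step size

for n in range(N, len(d)):
    x = u[n - N + 1:n + 1][::-1]              # tap-input vector [u(n), ..., u(n-N+1)]
    y = w @ x                                 # equalizer output
    e = d[n] - y                              # error signal
    w = w + mu * e * x                        # LMS update (real-valued case)

# After convergence the residual error power should be small.
err = np.mean([(d[n] - w @ u[n - N + 1:n + 1][::-1])**2
               for n in range(len(d) - 500, len(d))])
print(err)
```

Note that during the training phase d(n) is the known training sequence; in tracking mode it would be replaced by symbol decisions.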
LMS Step Size
The convergence rate of the LMS algorithm is slow due to the fact that there is only one parameter, the step size μ, that controls the adaptation rate. To prevent the adaptation from becoming unstable, the value of μ is chosen such that

0 < μ < 2 / Σ_i λ_i

where λ_i is the ith eigenvalue of the autocorrelation (covariance) matrix R.
LMS Stability
The size of the step size determines the algorithm's convergence rate: too small a step size makes the algorithm take many iterations; too large a step size prevents the tap weights from converging. Rule of thumb:

μ < 2 / (N P_r)

where N is the equalizer length and P_r is the received power (signal plus noise), which can be estimated in the receiver.
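The eigenvalue bound and the received-power rule of thumb can be compared numerically: since the eigenvalues of R sum to its trace, which is approximately N times the received power, the two bounds nearly coincide. The signal, equalizer length, and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(10_000)              # stand-in received signal, unit power

N = 8                                        # assumed equalizer length
U = np.array([u[k:k + N] for k in range(len(u) - N)])
R = U.T @ U / len(U)                         # sample autocorrelation matrix of the tap input

lam = np.linalg.eigvalsh(R)
mu_max = 2 / lam.sum()                       # bound from the eigenvalue sum
mu_rule = 2 / (N * np.mean(u**2))            # rule of thumb using received power P_r
print(round(mu_max, 3), round(mu_rule, 3))
```

The advantage of the rule of thumb is that P_r can be estimated with a simple power measurement, with no eigen-decomposition needed in the receiver.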
LMS Convergence Using Different μ
LMS: Pros and Cons
Advantages: simplicity of implementation; it does not neglect the noise (unlike the zero-forcing equalizer); it avoids the need to calculate an inverse matrix.
Disadvantages: slow convergence; it demands the use of a training sequence as a reference, which decreases the usable communication bandwidth.
Recursive Least Squares (RLS)
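The RLS derivation on this slide is not reproduced in the transcript. For orientation, the sketch below shows the standard exponentially weighted RLS update loop; the channel, forgetting factor, initialization, and sizes are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(3)

d = rng.choice([-1.0, 1.0], size=1000)        # training symbols
u = np.convolve(d, [1.0, 0.3])[:len(d)]       # assumed 2-tap channel
u += 0.05 * rng.standard_normal(len(d))       # additive noise

N, lam, delta = 5, 0.99, 100.0                # taps, forgetting factor, init constant
w = np.zeros(N)
P = delta * np.eye(N)                         # inverse-correlation estimate, P(0) = delta*I

for n in range(N, len(d)):
    x = u[n - N + 1:n + 1][::-1]              # tap-input vector
    k = P @ x / (lam + x @ P @ x)             # gain vector
    e = d[n] - w @ x                          # a priori error
    w = w + k * e                             # weight update
    P = (P - np.outer(k, x @ P)) / lam        # recursive update of the inverse correlation

err = np.mean([(d[n] - w @ u[n - N + 1:n + 1][::-1])**2
               for n in range(800, len(d))])
print(err)
```

Compared with LMS, RLS converges in far fewer samples at the price of O(N^2) operations per iteration instead of O(N).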
Blind Algorithms
"Blind" adaptive algorithms are defined as those algorithms that do not need a reference or training sequence to determine the required complex weight vector. Instead, they try to restore some known property of the received input data vector. A general property of the complex envelope of many digital signals is the constant modulus of the received signal.
Constant Modulus Algorithm (CMA)
CMA is used for constant-envelope modulations: it adapts the equalizer weights so as to drive the modulus of the equalizer output toward a constant value, without any training sequence.
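A minimal CMA sketch for a unit-modulus QPSK signal; the channel taps, step size, equalizer length, and initialization below are illustrative assumptions. The weights follow the stochastic gradient of the dispersion cost J = E[(|y|^2 - R2)^2], with R2 = 1 for unit-modulus symbols, and no training sequence is used.

```python
import numpy as np

rng = np.random.default_rng(4)

n_sym = 8000
s = (rng.choice([-1.0, 1.0], n_sym) + 1j * rng.choice([-1.0, 1.0], n_sym)) / np.sqrt(2)
u = np.convolve(s, [1.0, 0.3 + 0.2j])[:n_sym]          # assumed mild multipath channel
u += 0.03 * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

N, mu, R2 = 7, 0.01, 1.0          # taps, step size, dispersion constant (|s| = 1)
w = np.zeros(N, dtype=complex)
w[0] = 1.0                        # single-spike initialization

for n in range(N, n_sym):
    x = u[n - N + 1:n + 1][::-1]  # tap-input vector
    y = w @ x                     # equalizer output
    e = y * (np.abs(y)**2 - R2)   # CMA error term
    w = w - mu * e * np.conj(x)   # stochastic-gradient update of the dispersion cost

# After convergence, the output modulus should hover near 1.
mods = np.array([abs(w @ u[n - N + 1:n + 1][::-1]) for n in range(n_sym - 1000, n_sym)])
print(round(float(np.var(mods)), 3))
```

Because CMA restores only the modulus, the converged output retains an arbitrary phase rotation; a separate carrier-phase recovery stage resolves it in practice.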