OPTIMUM FILTERING
WIENER FILTER

The Wiener filter is an optimum filter based on minimizing the mean square error between the filter output $y(n)$ and a desired signal $s_d(n)$. The filter input is $x(n) = s(n) + v(n)$: the true signal $s(n)$ plus interference due to noise $v(n)$. Here $h(n)$ is the filter impulse response and $e(n) = s_d(n) - y(n)$ is the error signal; ideally $y(n)$ should be equal to $s_d(n)$.

Assuming that $h(n)$ is of length $N$, the filter output is

$$y(n) = \sum_{k=0}^{N-1} h(k)\, x(n-k)$$

and the mean square error is defined as

$$\mathcal{E} = E\big[|e(n)|^2\big] = E\Big[\big|s_d(n) - \sum_{k=0}^{N-1} h(k)\, x(n-k)\big|^2\Big].$$

By taking the derivative of the mean square error with respect to each coefficient and setting it equal to zero, $h(n)$ is solved from the normal equations

$$\sum_{k=0}^{N-1} h(k)\, E[x(n-k)\, x(n-l)] = E[s_d(n)\, x(n-l)], \quad l = 0, 1, \dots, N-1.$$
WIENER FILTER

When expressed in terms of the autocorrelation function $R_{xx}$ and the cross-correlation function $R_{s_d x}$, the normal equations become

$$\sum_{k=0}^{N-1} h(k)\, R_{xx}(l-k) = R_{s_d x}(l), \quad l = 0, 1, \dots, N-1.$$

Since $x(n)$ is the sum of the true signal $s(n)$ and the noise $v(n)$, and $s_d(n)$ is not correlated with the noise $v(n)$,

$$R_{xx}(l) = R_{ss}(l) + R_{vv}(l), \qquad R_{s_d x}(l) = R_{s_d s}(l).$$
WIENER FILTER

The matrix representation is

$$\begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(N-1) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_{xx}(N-1) & R_{xx}(N-2) & \cdots & R_{xx}(0) \end{bmatrix} \begin{bmatrix} h(0) \\ h(1) \\ \vdots \\ h(N-1) \end{bmatrix} = \begin{bmatrix} R_{s_d x}(0) \\ R_{s_d x}(1) \\ \vdots \\ R_{s_d x}(N-1) \end{bmatrix}$$

The coefficients $h(n)$ can be obtained by taking the inverse of the autocorrelation matrix and multiplying it with the cross-correlation vector. The minimum mean square error is

$$\mathcal{E}_{\min} = R_{s_d s_d}(0) - \sum_{k=0}^{N-1} h(k)\, R_{s_d x}(k).$$
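A minimal numerical sketch of solving these normal equations; the correlation values below are placeholders, not numbers from the slides:

```python
import numpy as np
from scipy.linalg import toeplitz

# Assumed autocorrelation lags Rxx(0..3) and cross-correlation Rsdx(0..3);
# in practice these come from the signal model or from time averages.
rxx = np.array([1.0, 0.5, 0.25, 0.125])
rdx = np.array([0.9, 0.45, 0.225, 0.1125])

Rxx = toeplitz(rxx)            # N x N symmetric autocorrelation matrix
h = np.linalg.solve(Rxx, rdx)  # Wiener coefficients: h = Rxx^{-1} rdx

rdd0 = 1.0                     # assumed Rsdsd(0)
mmse = rdd0 - h @ rdx          # minimum mean square error
print(h, mmse)
```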
WIENER FILTER EXAMPLE

A signal is defined as

$$x(n) = s(n) + v(n),$$

where $v(n)$ is additive white Gaussian noise with zero mean and variance 0.1. The Wiener filter length is $N = 4$.
WIENER FILTER EXAMPLE

The matrix representation is the $4 \times 4$ normal-equation system $\mathbf{R}_{xx}\,\mathbf{h} = \mathbf{r}_{s_d x}$ built from the autocorrelation and cross-correlation lags of this signal. By taking the inverse of the matrix, the filter coefficients are obtained, and the minimum mean square error follows from $\mathcal{E}_{\min} = R_{s_d s_d}(0) - \sum_k h(k)\, R_{s_d x}(k)$.
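A sketch of how such an example could be reproduced numerically; the AR(1) model for $s(n)$ below is an assumption made for illustration, not the slide's actual signal definition:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n = 100_000

# Assumed signal model (illustrative only): s(n) = 0.6 s(n-1) + w(n)
w = rng.normal(0.0, 1.0, n)
s = np.zeros(n)
for i in range(1, n):
    s[i] = 0.6 * s[i - 1] + w[i]

v = rng.normal(0.0, np.sqrt(0.1), n)   # white Gaussian noise, variance 0.1
x = s + v                              # observed signal

N = 4                                  # Wiener filter length from the example
rxx = np.array([np.mean(x[l:] * x[:n - l]) for l in range(N)])  # Rxx(l)
rsx = np.array([np.mean(s[l:] * x[:n - l]) for l in range(N)])  # Rsdx(l)

h = np.linalg.solve(toeplitz(rxx), rsx)
mmse = np.mean(s * s) - h @ rsx
print("h =", h, "MMSE ≈", mmse)
```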
WIENER FILTER CONFIGURATIONS

The various configurations of the Wiener filter are often referred to as the linear estimation problem:

- $s_d(n) = s(n)$: filtering
- $s_d(n) = s(n+D)$, $D > 0$: signal prediction
- $s_d(n) = s(n-D)$, $D > 0$: signal smoothing

The material presented here focuses only on filtering and prediction; a short sketch of constructing each desired sequence follows.
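A small sketch, using a hypothetical clean signal, showing that the three configurations differ only in the choice of offset $D$ for the desired sequence:

```python
import numpy as np

s = np.arange(10.0)   # hypothetical clean signal s(n)
D = 2                 # prediction/smoothing offset

sd_filter  = s.copy()        # sd(n) = s(n)
sd_predict = np.roll(s, -D)  # sd(n) = s(n+D); last D samples wrap, trim in practice
sd_smooth  = np.roll(s, D)   # sd(n) = s(n-D); first D samples wrap, trim in practice
```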
WOLD REPRESENTATION

White noise $v(n)$ driving a shaping filter $H(z)$ produces the random process $x(n)$; conversely, passing $x(n)$ through the inverse filter $1/H(z)$ recovers the white noise $v(n)$. Depending on the form of $H(z)$:

- all pole: AR process
- all zero: MA process
- pole-zero: ARMA process
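A minimal sketch of generating each process type from white noise; the filter coefficients are assumed values chosen only for illustration:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, 5000)          # white noise v(n)

# AR (all pole):    H(z) = 1 / (1 - 0.9 z^-1)
x_ar = lfilter([1.0], [1.0, -0.9], v)

# MA (all zero):    H(z) = 1 + 0.5 z^-1
x_ma = lfilter([1.0, 0.5], [1.0], v)

# ARMA (pole-zero): H(z) = (1 + 0.5 z^-1) / (1 - 0.9 z^-1)
x_arma = lfilter([1.0, 0.5], [1.0, -0.9], v)

# Whitening: 1/H(z) applied to the AR process recovers v(n)
v_rec = lfilter([1.0, -0.9], [1.0], x_ar)
print(np.allclose(v_rec, v))            # True
```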
AUTOREGRESSIVE PROCESS

The difference equation for an $M$th-order AR process is

$$x(n) = -\sum_{k=1}^{M} a(k)\, x(n-k) + v(n),$$

where $v(n)$ is white noise with zero mean and variance $\sigma_v^2$. Multiplying by $x(n-m)$ and taking expectations, the autocorrelation function satisfies

$$R_{xx}(m) = -\sum_{k=1}^{M} a(k)\, R_{xx}(m-k) + R_{vx}(m).$$

Since $R_{vv}(m) = \sigma_v^2\, \delta(m)$, the autocorrelation function for the AR process reduces to

$$R_{xx}(m) = \begin{cases} -\sum_{k=1}^{M} a(k)\, R_{xx}(m-k), & m > 0 \\ -\sum_{k=1}^{M} a(k)\, R_{xx}(-k) + \sigma_v^2, & m = 0. \end{cases}$$
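A quick empirical check of this recursion, with assumed AR(2) coefficients:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, 200_000)        # white noise, sigma_v^2 = 1

# Assumed AR(2): x(n) = -a(1) x(n-1) - a(2) x(n-2) + v(n)
a = [1.0, -0.75, 0.5]                    # [a(0), a(1), a(2)] with a(0) = 1
x = lfilter([1.0], a, v)

def rxx(x, m):                           # time-averaged autocorrelation Rxx(m)
    return np.mean(x[m:] * x[:len(x) - m])

# For m > 0: Rxx(m) ≈ -a(1) Rxx(m-1) - a(2) Rxx(m-2)
m = 3
print(rxx(x, m), -a[1] * rxx(x, m - 1) - a[2] * rxx(x, m - 2))
```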
AUTOREGRESSIVE PROCESS

Expanding the autocorrelation function for $m = 1, \dots, M$ results in a set of $M$ linear equations in the coefficients $a(k)$. The matrix representation is

$$\begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(M-1) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(M-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_{xx}(M-1) & R_{xx}(M-2) & \cdots & R_{xx}(0) \end{bmatrix} \begin{bmatrix} a(1) \\ a(2) \\ \vdots \\ a(M) \end{bmatrix} = -\begin{bmatrix} R_{xx}(1) \\ R_{xx}(2) \\ \vdots \\ R_{xx}(M) \end{bmatrix}$$

This result is known as the Yule-Walker equations.
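A sketch of solving the Yule-Walker equations with assumed autocorrelation values; `solve_toeplitz` exploits the Toeplitz structure of the matrix:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Assumed autocorrelation lags Rxx(0..M) (illustrative values)
r = np.array([1.0, 0.5, 0.1])
M = len(r) - 1

# Yule-Walker: Toeplitz[Rxx(0..M-1)] a = -[Rxx(1..M)]
a = solve_toeplitz((r[:M], r[:M]), -r[1:])

# Noise variance from the m = 0 equation: sigma_v^2 = Rxx(0) + sum a(k) Rxx(k)
sigma_v2 = r[0] + a @ r[1:]
print("a =", a, "sigma_v^2 =", sigma_v2)
```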
LINEAR PREDICTION

The linear predictive filter is used to estimate the model of the underlying random process. The predictor forms an output $\hat{x}(n)$ from past samples of $x(n)$; ideally $\hat{x}(n)$ should be equal to the desired sequence $x_d(n) = x(n)$, and $e(n)$ is the error signal. Assuming a predictor of order $P$, the one-step forward predictor filter is defined as

$$\hat{x}(n) = -\sum_{k=1}^{P} a_P(k)\, x(n-k).$$
LINEAR PREDICTION

The forward prediction error is

$$e(n) = x(n) - \hat{x}(n) = x(n) + \sum_{k=1}^{P} a_P(k)\, x(n-k).$$

The mean square prediction error is

$$\mathcal{E} = E\big[|e(n)|^2\big].$$

By taking the derivative of the mean square error with respect to each coefficient and setting it equal to zero, $a_P(n)$ is solved from

$$\sum_{k=1}^{P} a_P(k)\, E[x(n-k)\, x(n-l)] = -E[x(n)\, x(n-l)], \quad l = 1, \dots, P.$$
LINEAR PREDICTION

When expressed in terms of the autocorrelation function, the normal equations are

$$\sum_{k=1}^{P} a_P(k)\, R_{xx}(l-k) = -R_{xx}(l), \quad l = 1, \dots, P.$$

The minimum mean square prediction error is

$$\mathcal{E}_{\min} = R_{xx}(0) + \sum_{k=1}^{P} a_P(k)\, R_{xx}(k).$$

Combining the above two equations results in the augmented normal equations

$$\sum_{k=0}^{P} a_P(k)\, R_{xx}(l-k) = \mathcal{E}_{\min}\, \delta(l), \quad l = 0, 1, \dots, P,$$

from which the solution for the coefficients is derived.
LINEAR PREDICTION

The matrix representation of the augmented normal equations is

$$\begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(P) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(P-1) \\ \vdots & \vdots & \ddots & \vdots \\ R_{xx}(P) & R_{xx}(P-1) & \cdots & R_{xx}(0) \end{bmatrix} \begin{bmatrix} 1 \\ a_P(1) \\ \vdots \\ a_P(P) \end{bmatrix} = \begin{bmatrix} \mathcal{E}_{\min} \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

where $a_P(0) = 1$. For sample functions, the time-averaged autocorrelation function

$$R_{xx}(l) = \frac{1}{N} \sum_{n=l}^{N-1} x(n)\, x(n-l)$$

is used. The solution is calculated by taking the inverse of the matrix.
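A sketch of the full procedure for a sample function, using the biased time-averaged autocorrelation defined above:

```python
import numpy as np
from scipy.linalg import toeplitz

def biased_autocorr(x, maxlag):
    """Biased time average: Rxx(l) = (1/N) * sum_{n=l}^{N-1} x(n) x(n-l)."""
    N = len(x)
    return np.array([x[l:] @ x[:N - l] / N for l in range(maxlag + 1)])

def forward_predictor(x, P):
    """Order-P one-step forward predictor from the normal equations."""
    r = biased_autocorr(x, P)
    a = np.linalg.solve(toeplitz(r[:P]), -r[1:])  # aP(1..P)
    e_min = r[0] + a @ r[1:]                      # minimum MSE
    return np.concatenate(([1.0], a)), e_min      # prepend aP(0) = 1

# Usage with a hypothetical sample function
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
a_p, e_min = forward_predictor(x, P=4)
print("aP =", a_p, "Emin =", e_min)
```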
LINEAR PREDICTION EXAMPLE

A sample function $x(n)$ is defined as follows:

x(n) = [7.0718, …, …, …, …, 2.011, …, 1.001, …, …]

The biased time-averaged autocorrelation function is

Rxx(l) = [24.39, …, …, 1.662, …, …, …, 1.617, …, …]

For a 4th-order predictor, the augmented normal equations take the $5 \times 5$ matrix form given above, built from the lags $R_{xx}(0), \dots, R_{xx}(4)$.
LINEAR PREDICTION EXAMPLE

The solution for the predictor filter is

aP(n) = [0.1104, …, …, …]

The normalized solution, scaled so that $a_P(0) = 1$, is

aP(l) = [1, …, …, …]
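One way such an unnormalized solution arises: solve the augmented system against the unit vector, then rescale so that the leading coefficient is 1. A sketch with placeholder autocorrelation values standing in for the elided lags:

```python
import numpy as np
from scipy.linalg import toeplitz

# Rxx(0..4): only lags 0 and 3 are given in the example; the rest are
# placeholders, so the printed output is illustrative only.
r = np.array([24.39, 0.0, 0.0, 1.662, 0.0])

# Solve Rxx a' = e1; then aP = a'/a'(0) satisfies the augmented normal
# equations with aP(0) = 1, and Emin = 1/a'(0).
R = toeplitz(r)
a_raw = np.linalg.solve(R, np.eye(len(r))[0])
a_norm = a_raw / a_raw[0]
e_min = 1.0 / a_raw[0]
print(a_raw, a_norm, e_min)
```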