OPTIMUM FILTERING
WIENER FILTER

The Wiener filter is an optimum filter based on minimizing the mean square error between the filter output and a desired signal.

[Block diagram: the input x(n) = s(n) + v(n), where v(n) is interference due to noise, passes through the filter impulse response h(n) to produce the output y(n); the error signal is e(n) = s_d(n) - y(n), where s_d(n) is the desired sequence. Ideally y(n) equals s_d(n).]

Assuming that h(n) is of length N, the mean square error is defined as

$\mathcal{E} = E\left[e^2(n)\right] = E\left[\left(s_d(n) - \sum_{k=0}^{N-1} h(k)\, x(n-k)\right)^2\right]$

Taking the derivative of the mean square error with respect to each h(m) and setting it equal to zero, h(n) is solved from the normal equations

$\sum_{k=0}^{N-1} h(k)\, E\left[x(n-k)\, x(n-m)\right] = E\left[s_d(n)\, x(n-m)\right], \qquad m = 0, 1, \ldots, N-1$
WIENER FILTER

Expressed in terms of the autocorrelation function and the crosscorrelation function, the normal equations are

$\sum_{k=0}^{N-1} h(k)\, R_{xx}(m-k) = R_{s_d x}(m), \qquad m = 0, 1, \ldots, N-1$

Since x(n) is the sum of the true signal s(n) and the noise v(n), and s_d(n) is not correlated with the noise v(n),

$R_{xx}(m) = R_{ss}(m) + R_{vv}(m), \qquad R_{s_d x}(m) = R_{s_d s}(m)$
WIENER FILTER

The matrix representation is

$\mathbf{R}_{xx}\, \mathbf{h} = \mathbf{r}_{s_d x}$

so h(n) is obtained by taking the inverse of the autocorrelation matrix and multiplying it by the crosscorrelation vector, $\mathbf{h} = \mathbf{R}_{xx}^{-1}\, \mathbf{r}_{s_d x}$. The minimum mean square error is

$\mathcal{E}_{\min} = R_{s_d s_d}(0) - \sum_{k=0}^{N-1} h(k)\, R_{s_d x}(k)$
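A minimal sketch of this solve step, assuming the autocorrelation and crosscorrelation values are already known (the function name and the NumPy/SciPy choice are illustrative, not from the slides):

```python
import numpy as np
from scipy.linalg import toeplitz

def wiener_filter(r_xx, r_dx, r_dd0):
    """Solve R_xx h = r_dx for a length-N Wiener filter.

    r_xx  : autocorrelation values R_xx(0), ..., R_xx(N-1)
    r_dx  : crosscorrelation values R_{s_d x}(0), ..., R_{s_d x}(N-1)
    r_dd0 : desired-signal power R_{s_d s_d}(0), used for the MMSE
    """
    R = toeplitz(r_xx)            # symmetric Toeplitz autocorrelation matrix
    h = np.linalg.solve(R, r_dx)  # equivalent to R^{-1} r_dx, numerically safer
    mmse = r_dd0 - r_dx @ h       # minimum mean square error
    return h, mmse
```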
WIENER FILTER EXAMPLE

A signal x(n) = s(n) + v(n) is given, where v(n) is additive white Gaussian noise with zero mean and variance 0.1. The Wiener filter length is 4.
WIENER FILTER EXAMPLE

The matrix representation uses the 4 × 4 autocorrelation matrix $\mathbf{R}_{xx}$ and the length-4 crosscorrelation vector $\mathbf{r}_{s_d x}$. Taking the inverse of the matrix gives the filter coefficients, and the minimum mean square error then follows from $\mathcal{E}_{\min} = R_{s_d s_d}(0) - \mathbf{r}_{s_d x}^{T}\, \mathbf{h}$.
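The slide's signal definition did not survive extraction, so the sketch below substitutes a hypothetical first-order AR signal s(n) = 0.6 s(n-1) + w(n); only the noise variance (0.1) and the filter length (4) are taken from the example:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, N = 100_000, 4                    # sample count; filter length from the example

# Hypothetical signal model (the slide's definition is not recoverable):
s = np.zeros(n)
w = rng.normal(0.0, 1.0, n)
for i in range(1, n):
    s[i] = 0.6 * s[i - 1] + w[i]     # assumed AR(1) signal
x = s + rng.normal(0.0, np.sqrt(0.1), n)   # observation, noise variance 0.1

# Biased time-average correlations, with s_d(n) = s(n) (pure filtering):
r_xx = np.array([x[: n - l] @ x[l:] / n for l in range(N)])
r_dx = np.array([s[l:] @ x[: n - l] / n for l in range(N)])

h = np.linalg.solve(toeplitz(r_xx), r_dx)  # Wiener coefficients
mmse = s @ s / n - r_dx @ h                # estimated minimum MSE
print(h, mmse)
```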
WIENER FILTER CONFIGURATION

The various configurations of the Wiener filter are often referred to as the linear estimation problem:

- s_d(n) = s(n): filtering
- s_d(n) = s(n+D), D > 0: signal prediction
- s_d(n) = s(n-D), D > 0: signal smoothing

The material presented will focus only on filtering and prediction.
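All three configurations fit the same normal equations; only the desired sequence, and hence the crosscorrelation vector, changes with the delay D. A hypothetical helper (not from the slides) makes this explicit:

```python
import numpy as np

def desired_sequence(s, D):
    """Return s_d(n) = s(n + D): D = 0 gives filtering, D > 0 prediction,
    D < 0 smoothing. np.roll wraps at the edges, so the first/last |D|
    samples should be discarded before estimating correlations."""
    return np.roll(s, -D)
```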
WOLD REPRESENTATION

[Block diagrams: white noise v(n) drives the shaping filter H(z) to produce the random process x(n); conversely, passing x(n) through the whitening filter 1/H(z) recovers the white noise v(n).]

- H(z) all-pole: AR process
- H(z) all-zero: MA process
- H(z) pole-zero: ARMA process
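A brief sketch of the two directions of the Wold representation using scipy.signal.lfilter; the specific pole and zero locations are arbitrary choices for illustration:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
v = rng.normal(size=50_000)                    # white-noise innovations v(n)

# H(z) shapes white noise into the random process x(n):
x_ar   = lfilter([1.0],      [1.0, -0.9], v)   # all-pole  H(z) -> AR process
x_ma   = lfilter([1.0, 0.5], [1.0],       v)   # all-zero  H(z) -> MA process
x_arma = lfilter([1.0, 0.5], [1.0, -0.9], v)   # pole-zero H(z) -> ARMA process

# The whitening filter 1/H(z) recovers the innovations from x(n):
v_rec = lfilter([1.0, -0.9], [1.0], x_ar)
assert np.allclose(v_rec, v)
```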
AUTOREGRESSIVE PROCESS

The difference equation for an M-th order AR process is

$x(n) = -\sum_{k=1}^{M} a_k\, x(n-k) + v(n)$

where v(n) is white noise with zero mean and variance $\sigma_v^2$. Multiplying by x(n-m) and taking expectations gives the autocorrelation function

$R_{xx}(m) = -\sum_{k=1}^{M} a_k\, R_{xx}(m-k) + R_{vx}(m), \qquad m \geq 0$

Since $R_{vv}(m) = \sigma_v^2\, \delta(m)$, the autocorrelation function for the AR process is

$R_{xx}(m) = \begin{cases} -\sum_{k=1}^{M} a_k\, R_{xx}(m-k), & m \geq 1 \\[4pt] -\sum_{k=1}^{M} a_k\, R_{xx}(k) + \sigma_v^2, & m = 0 \end{cases}$
AUTOREGRESSIVE PROCESS

Expanding the autocorrelation function for m = 1, 2, ..., M results in M linear equations in the coefficients $a_1, \ldots, a_M$. The matrix representation is

$\begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(M-1) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(M-2) \\ \vdots & & \ddots & \vdots \\ R_{xx}(M-1) & R_{xx}(M-2) & \cdots & R_{xx}(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_M \end{bmatrix} = -\begin{bmatrix} R_{xx}(1) \\ R_{xx}(2) \\ \vdots \\ R_{xx}(M) \end{bmatrix}$

The result is known as the Yule-Walker equations.
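A sketch of solving the Yule-Walker equations with SciPy's Levinson-recursion Toeplitz solver (the function name is illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(r, M):
    """Solve the Yule-Walker equations for an AR(M) model.

    r : autocorrelation values R_xx(0), ..., R_xx(M)
    Returns (a, sigma2): coefficients a_1..a_M in the convention
    x(n) = -sum_k a_k x(n-k) + v(n), and the noise variance sigma_v^2.
    """
    a = solve_toeplitz(r[:M], -r[1 : M + 1])   # symmetric Toeplitz solve
    sigma2 = r[0] + a @ r[1 : M + 1]           # from the m = 0 equation
    return a, sigma2
```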
LINEAR PREDICTION

The linear predictive filter predicts the current sample from past samples and thereby identifies the model of the underlying random process.

[Block diagram: past samples x(n-1), x(n-2), ... enter the predictor filter, whose output $\hat{x}(n)$ should ideally equal the desired sequence x(n); the error signal is $e(n) = x(n) - \hat{x}(n)$.]

Assuming the predictor filter is of order P, the one-step forward predictor is defined as

$\hat{x}(n) = -\sum_{k=1}^{P} a_P(k)\, x(n-k)$
LINEAR PREDICTION

The forward prediction error is

$f_P(n) = x(n) - \hat{x}(n) = x(n) + \sum_{k=1}^{P} a_P(k)\, x(n-k)$

The mean square prediction error is

$\mathcal{E}_P^f = E\left[\, f_P^2(n)\, \right]$

Taking the derivative of the mean square error with respect to each coefficient and setting it equal to zero, $a_P(k)$ is solved from the normal equations.
LINEAR PREDICTION

Expressed in terms of the autocorrelation function, the normal equations are

$\sum_{k=1}^{P} a_P(k)\, R_{xx}(l-k) = -R_{xx}(l), \qquad l = 1, 2, \ldots, P$

The minimum mean square prediction error is

$E_P^f = R_{xx}(0) + \sum_{k=1}^{P} a_P(k)\, R_{xx}(k)$

Combining the two equations above results in the augmented normal equations, from which the coefficients are solved:

$\sum_{k=0}^{P} a_P(k)\, R_{xx}(l-k) = \begin{cases} E_P^f, & l = 0 \\ 0, & l = 1, 2, \ldots, P \end{cases}$
LINEAR PREDICTION

The matrix representation of the augmented normal equations is

$\begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(P) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(P-1) \\ \vdots & & \ddots & \vdots \\ R_{xx}(P) & R_{xx}(P-1) & \cdots & R_{xx}(0) \end{bmatrix} \begin{bmatrix} 1 \\ a_P(1) \\ \vdots \\ a_P(P) \end{bmatrix} = \begin{bmatrix} E_P^f \\ 0 \\ \vdots \\ 0 \end{bmatrix}$

where $a_P(0) = 1$. For sample functions, the time-averaged autocorrelation function is used in place of the ensemble average. The solution is calculated by taking the inverse of the matrix.
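A sketch of this inverse-matrix solve, assuming the autocorrelation values are given; the scaling step enforces $a_P(0) = 1$:

```python
import numpy as np
from scipy.linalg import toeplitz

def forward_predictor(r):
    """Solve the augmented normal equations for a P-th order predictor.

    r : autocorrelation values R_xx(0), ..., R_xx(P)
    Returns (a, Ep): coefficients a_p(0..P) with a_p(0) = 1 and the
    minimum prediction error E_p^f.
    """
    R = toeplitz(r)                     # (P+1) x (P+1) Toeplitz matrix
    e1 = np.zeros(len(r)); e1[0] = 1.0
    y = np.linalg.solve(R, e1)          # first column of R^{-1}: y = a_p / E_p
    Ep = 1.0 / y[0]                     # scale so that a_p(0) = 1
    return Ep * y, Ep
```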
LINEAR PREDICTION EXAMPLE

A sample function x(n) is defined as follows:

x(n) = [7.0718, 0.3251, -6.5641, 1.3673, 7.1554, 2.011, -6.775, 1.001, 6.7555, -1.050]

The biased time-average autocorrelation function is

Rxx(l) = [24.39, -0.6904, -19.30, 1.662, 14.624, -2.127, -9.336, 1.617, 4.7433, -0.7425]

For a 4th-order predictor, the augmented normal equations are

$\begin{bmatrix} 24.39 & -0.6904 & -19.30 & 1.662 & 14.624 \\ -0.6904 & 24.39 & -0.6904 & -19.30 & 1.662 \\ -19.30 & -0.6904 & 24.39 & -0.6904 & -19.30 \\ 1.662 & -19.30 & -0.6904 & 24.39 & -0.6904 \\ 14.624 & 1.662 & -19.30 & -0.6904 & 24.39 \end{bmatrix} \begin{bmatrix} 1 \\ a_4(1) \\ a_4(2) \\ a_4(3) \\ a_4(4) \end{bmatrix} = \begin{bmatrix} E_4^f \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$
LINEAR PREDICTION EXAMPLE

The solution for the predictor filter is

ap(n) = [0.1104, 0.0429, 0.0875, -0.0016]

The normalized solution, scaled so that ap(0) = 1, is

ap(l) = [1, 0.0388, 0.7924, -0.095]
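The example can be checked numerically from the slide's autocorrelation values; rounding in the printed values may cause small differences:

```python
import numpy as np
from scipy.linalg import toeplitz

# R_xx(0), ..., R_xx(4) from the example
r = np.array([24.39, -0.6904, -19.30, 1.662, 14.624])

R = toeplitz(r)                          # 5 x 5 augmented normal equations
y = np.linalg.solve(R, np.eye(5)[:, 0])  # first column of R^{-1}
Ep = 1.0 / y[0]                          # minimum prediction error E_4^f
a = Ep * y                               # normalized predictor, a_p(0) = 1
print("unnormalized:", y)
print("normalized:  ", a, "E_4^f =", Ep)
```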