Recursive Least-Squares (RLS) Adaptive Filters


1 Recursive Least-Squares (RLS) Adaptive Filters

2 Definition (n) is defined.
With the arrival of new data samples estimates are updated recursively. Introduce a weighting factor to the sum-of-error-squares definition two time-indices n: outer, i: inner Weighting factor Forgetting factor : real, positive, <1, →1 =1 → ordinary LS 1/(1- ): memory of the algorithm (ordinary LS has infinite memory) w(n) is kept fixed during the observation interval 1≤i ≤n for which the cost function (n) is defined.
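As a concrete reference, here is a minimal NumPy sketch that evaluates this exponentially weighted cost for a fixed tap-weight vector. Real-valued signals are assumed throughout the code in this transcript, and all function and variable names are illustrative, not from the slides:

    import numpy as np

    def weighted_cost(w, U, d, lam):
        # E(n) = sum_{i=1..n} lam^(n-i) |e(i)|^2 for a fixed w,
        # where e(i) = d(i) - w^T u(i) (real-valued case).
        # U: (n, M) array, row i-1 holds the tap-input vector u(i);
        # d: (n,) desired responses; lam: forgetting factor, 0 < lam <= 1.
        n = len(d)
        e = d - U @ w
        beta = lam ** (n - 1 - np.arange(n))   # beta(n, i) = lam^(n-i)
        return np.sum(beta * np.abs(e) ** 2)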

3 Definition

4 Regularisation
The LS cost function can be ill-posed: there may be insufficient information in the input data to reconstruct the input-output mapping uniquely, and there is uncertainty in the mapping due to measurement noise. To overcome the problem, take 'prior information' into account by adding a regularisation term:

$$\mathcal{E}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,|e(i)|^2 + \delta\,\lambda^{n}\,\|\mathbf{w}(n)\|^2$$

Prewindowing is assumed (not the covariance method). The regularisation term $\delta\lambda^{n}\|\mathbf{w}(n)\|^2$ smooths and stabilises the solution; $\delta$ is the regularisation parameter.
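A sketch of the regularised cost, building on weighted_cost above (again real-valued, names illustrative):

    def regularised_cost(w, U, d, lam, delta):
        # Adds the regularisation term delta * lam^n * ||w||^2 to E(n).
        n = len(d)
        return weighted_cost(w, U, d, lam) + delta * lam ** n * np.dot(w, w)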

5 Normal Equations
From the method of least-squares we know that the optimum weights satisfy the normal equations $\Phi(n)\,\hat{\mathbf{w}}(n) = \mathbf{z}(n)$. With exponential weighting and regularisation, the time-average autocorrelation matrix of the input u(n) becomes

$$\Phi(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{u}(i)\,\mathbf{u}^{H}(i) + \delta\,\lambda^{n}\,\mathbf{I}$$

Similarly, the time-average cross-correlation vector between the tap inputs and the desired response (unaffected by regularisation) is

$$\mathbf{z}(n) = \sum_{i=1}^{n} \lambda^{n-i}\,\mathbf{u}(i)\,d^{*}(i)$$

Hence the optimum (in the LS sense) filter coefficients satisfy $\hat{\mathbf{w}}(n) = \Phi^{-1}(n)\,\mathbf{z}(n)$. The autocorrelation matrix is always non-singular due to the $\delta\lambda^{n}\mathbf{I}$ term, so $\Phi^{-1}(n)$ always exists.
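The normal equations can of course be solved directly at each n; the following sketch builds $\Phi(n)$ and $\mathbf{z}(n)$ from scratch and solves once, which is exactly the cost the recursion below avoids:

    def direct_ls_solution(U, d, lam, delta):
        # Solves Phi(n) w = z(n) non-recursively (real-valued case).
        n, M = U.shape
        beta = lam ** (n - 1 - np.arange(n))
        # Phi(n) = sum_i lam^(n-i) u(i) u(i)^T + delta * lam^n * I
        Phi = (U * beta[:, None]).T @ U + delta * lam ** n * np.eye(M)
        # z(n) = sum_i lam^(n-i) u(i) d(i)   (no regularisation term)
        z = (U * beta[:, None]).T @ d
        return np.linalg.solve(Phi, z)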

6 Recursive Computation
Isolate the last term (i = n) of the sum:

$$\Phi(n) = \lambda\,\Phi(n-1) + \mathbf{u}(n)\,\mathbf{u}^{H}(n)$$

Similarly,

$$\mathbf{z}(n) = \lambda\,\mathbf{z}(n-1) + \mathbf{u}(n)\,d^{*}(n)$$

We need $\Phi^{-1}(n)$ to find $\hat{\mathbf{w}}(n)$, and inverting $\Phi(n)$ directly at every step can be costly. Use the Matrix Inversion Lemma (MIL) instead.
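One time step of these two recursions, as a sketch (real-valued, so the conjugates drop):

    def update_correlations(Phi, z, u, d_n, lam):
        # Phi(n) = lam * Phi(n-1) + u(n) u(n)^T
        # z(n)   = lam * z(n-1)   + u(n) d(n)
        return lam * Phi + np.outer(u, u), lam * z + u * d_n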

7 Recursive Least-Squares Algorithm
Let $\mathbf{A} = \Phi(n)$, $\mathbf{B}^{-1} = \lambda\,\Phi(n-1)$, $\mathbf{C} = \mathbf{u}(n)$ and $\mathbf{D} = 1$. Then, using the MIL,

$$\Phi^{-1}(n) = \lambda^{-1}\,\Phi^{-1}(n-1) - \frac{\lambda^{-2}\,\Phi^{-1}(n-1)\,\mathbf{u}(n)\,\mathbf{u}^{H}(n)\,\Phi^{-1}(n-1)}{1 + \lambda^{-1}\,\mathbf{u}^{H}(n)\,\Phi^{-1}(n-1)\,\mathbf{u}(n)}$$

Now, letting

$$\mathbf{P}(n) = \Phi^{-1}(n) \quad \text{(inverse correlation matrix)}$$

$$\mathbf{k}(n) = \frac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{u}(n)}{1 + \lambda^{-1}\,\mathbf{u}^{H}(n)\,\mathbf{P}(n-1)\,\mathbf{u}(n)} \quad \text{(gain vector)}$$

we obtain the Riccati equation:

$$\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{u}^{H}(n)\,\mathbf{P}(n-1)$$
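In code, the gain vector and the Riccati update look as follows (real-valued sketch; note that only a scalar division appears):

    def update_P_and_gain(P, u, lam):
        # k(n) = lam^-1 P(n-1) u(n) / (1 + lam^-1 u(n)^T P(n-1) u(n))
        # P(n) = lam^-1 [P(n-1) - k(n) u(n)^T P(n-1)]
        Pu = P @ u
        k = Pu / (lam + u @ Pu)          # gain vector; scalar denominator
        P = (P - np.outer(k, Pu)) / lam  # Riccati eqn.; P symmetric, so u^T P = (P u)^T
        return P, k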

8 Recursive Least-Squares Algorithm
Rearranging the gain-vector definition gives

$$\mathbf{k}(n) = \left[\lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{u}^{H}(n)\,\mathbf{P}(n-1)\right]\mathbf{u}(n) = \mathbf{P}(n)\,\mathbf{u}(n)$$

How can $\hat{\mathbf{w}}(n)$ be calculated recursively? Let

$$\hat{\mathbf{w}}(n) = \Phi^{-1}(n)\,\mathbf{z}(n) = \mathbf{P}(n)\,\mathbf{z}(n) = \lambda\,\mathbf{P}(n)\,\mathbf{z}(n-1) + \mathbf{P}(n)\,\mathbf{u}(n)\,d^{*}(n)$$

After substituting the recursion for $\mathbf{P}(n)$ into the first term we obtain

$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) - \mathbf{k}(n)\,\mathbf{u}^{H}(n)\,\hat{\mathbf{w}}(n-1) + \mathbf{P}(n)\,\mathbf{u}(n)\,d^{*}(n)$$

But $\mathbf{P}(n)\,\mathbf{u}(n) = \mathbf{k}(n)$, hence

$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\left[d^{*}(n) - \mathbf{u}^{H}(n)\,\hat{\mathbf{w}}(n-1)\right]$$
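Putting the gain and weight recursions together gives one full RLS iteration (sketch, building on update_P_and_gain above):

    def rls_step(w, P, u, d_n, lam):
        # One RLS iteration: gain/Riccati update, then weight update.
        P, k = update_P_and_gain(P, u, lam)
        xi = d_n - w @ u        # a priori estimation error xi(n)
        w = w + k * xi          # w(n) = w(n-1) + k(n) xi(n)
        return w, P, xi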

9 Recursive Least-Squares Algorithm
The term

$$\xi(n) = d(n) - \hat{\mathbf{w}}^{H}(n-1)\,\mathbf{u}(n)$$

is called the a priori estimation error, whereas the term

$$e(n) = d(n) - \hat{\mathbf{w}}^{H}(n)\,\mathbf{u}(n)$$

is called the a posteriori estimation error. In summary, the update equation is

$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi^{*}(n)$$

where $\mathbf{k}(n)$ is the gain vector and $\xi(n)$ the a priori error. $\Phi^{-1}(n)$ is calculated recursively, at the cost of only a scalar division per update. Initialisation (n = 0), if no a priori information exists:

$$\mathbf{P}(0) = \delta^{-1}\,\mathbf{I}, \qquad \hat{\mathbf{w}}(0) = \mathbf{0}$$

with $\delta$ the regularisation parameter.
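A complete run with this initialisation, as a sketch (the default values of delta and lam are illustrative, not from the slides):

    def rls(U, d, lam=0.99, delta=1e-2):
        # P(0) = delta^-1 I, w(0) = 0 (no a priori information).
        n, M = U.shape
        P = np.eye(M) / delta
        w = np.zeros(M)
        xi = np.empty(n)
        for i in range(n):
            w, P, xi[i] = rls_step(w, P, U[i], d[i], lam)
        return w, xi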

10 Recursive Least-Squares Algorithm

11 Recursive Least-Squares Algorithm

12 Ensemble-Average Learning Curve
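The ensemble-average learning curve $J(n) = E[|\xi(n)|^2]$ can be estimated by averaging the squared a priori error over many independent trials. A toy system-identification sketch using the rls routine above (all parameter values illustrative):

    rng = np.random.default_rng(0)
    M, n, runs = 4, 500, 100
    w_true = rng.standard_normal(M)         # unknown system to identify
    J = np.zeros(n)
    for _ in range(runs):
        U = rng.standard_normal((n, M))     # white tap inputs
        d = U @ w_true + 0.01 * rng.standard_normal(n)
        _, xi = rls(U, d, lam=0.99, delta=1e-2)
        J += xi ** 2
    J /= runs                               # estimate of E[|xi(n)|^2]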

