Lecture 11: Recursive Parameter Estimation

1 Lecture 11: Recursive Parameter Estimation
[Block diagram: the input u(t) drives both the plant (true parameters θ) and the model (estimate θ̂(t-1)); the plant output y(t) and the model prediction ŷ(t) are differenced to form the error e(t).]
Dr Martin Brown, Room: E1k, Telephone:

2 Lecture 11: Outline
In this lecture, we're going to look at how to perform on-line, recursive parameter estimation.
Recursive parameter estimation means that the training data (x(t), y(t)) is received sequentially. The aim is to use the best estimate of the parameter vector, θ̂(t), at each prediction step.
The data (x(t), y(t)) are used to "improve" the parameter vector estimate, θ̂(t-1) → θ̂(t), after the prediction is made.
The parameters trace out trajectories (with respect to time) and converge to their least squares estimates as time increases, although care needs to be taken to ensure the inputs are persistently exciting (see next lecture).

3 Resources
Core text: Ljung, Chapter 11
On-line notes, Chapter 5
Norton, Chapter 7.3
Introduction
In this lecture, we're looking at how the model's parameters can be sequentially updated, based on information about the model's performance. This could be because the model was only partially identified prior to operation (unlikely), or because the plant is slowly time-varying or weakly non-linear.

4 Introduction to Recursive Parameter Estimation
Recursive parameter estimation is a technique that aims to alter the model's parameters as data is sequentially received from the plant.
The Hessian matrix, H(t) = X^T(t)X(t), can be recalculated at each stage and directly inverted, but this is computationally costly.
Recursive parameter estimation is an iterative technique that aims to calculate the new parameter estimate efficiently, based on the previous one.
[Block diagram: as on slide 1, the plant output y(t) and the model prediction ŷ(t) are compared to form the error e(t), which drives the update of θ̂(t-1).]
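As a point of reference, here is a minimal MATLAB sketch of the batch (non-recursive) solve that the recursion avoids repeating at every time step; the data and sizes are illustrative assumptions, not values from the slides:

% Batch least squares: refit from scratch using all data up to time t.
% Repeating this every step costs O(n^3) for the solve (plus the cost
% of re-forming the Hessian), which is what the recursion avoids.
X = randn(100, 3);                       % illustrative regressors, t=100, n=3
y = X*[1; -2; 0.5] + 0.1*randn(100, 1);  % illustrative noisy outputs
H = X'*X;                                % Hessian H(t)
b = X'*y;                                % cross-correlation vector b(t)
theta_hat = H \ b;                       % direct solve, O(n^3)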

5 Rationale for Recursive Parameter Estimation
There are many reasons for considering on-line adaptive modelling:
The real plant parameters switch or drift over time, possibly due to mechanical wear or changing environmental conditions.
The plant is non-linear, a local linear model is used, and adaptation accounts for the changing operating point.
In both cases, however, the basic theory we're looking at must be adapted, as it assumes that the optimal model is time invariant and is described by θ.

6 Preliminary Notation
The least squares parameter vector estimate at time t is θ̂(t) = P(t)b(t), where:
P(t) = (X^T(t)X(t))^-1 is the (scaled) parameter covariance matrix (the inverse Hessian)
b(t) = X^T(t)y(t) is the cross-correlation vector
At time t, before the parameter vector is updated, we have θ̂(t-1), P(t-1) and b(t-1). The information used to update θ̂(t-1) → θ̂(t) is contained in the new data pair (x(t), y(t)).

7 Next Instant Data
The scaled parameter covariance matrices at successive time instants are related by:
P^-1(t) = P^-1(t-1) + x(t)x^T(t)
The cross-correlation vectors at successive time instants are related by:
b(t) = b(t-1) + x(t)y(t)
Therefore:
θ̂(t) = P(t)b(t) = P(t)(b(t-1) + x(t)y(t))
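Both recursions are easy to check numerically against the batch definitions; a small MATLAB sketch with made-up data:

% Check P^-1(t) = P^-1(t-1) + x(t)x'(t) and b(t) = b(t-1) + x(t)y(t).
X = randn(50, 3);  y = randn(50, 1);       % illustrative data up to time t
Xprev = X(1:end-1, :);  yprev = y(1:end-1);
xt = X(end, :)';  yt = y(end);             % the new data pair
Pinv_prev = Xprev'*Xprev;  b_prev = Xprev'*yprev;
norm(X'*X - (Pinv_prev + xt*xt'))          % ~0 (machine precision)
norm(X'*y - (b_prev + xt*yt))              % ~0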

8 Small Matrix Inversion Lemma
The small matrix inversion lemma states that:
(A + BCD)^-1 = A^-1 - A^-1 B (C^-1 + D A^-1 B)^-1 D A^-1
The "small" here does not mean that the result fails when the matrix BCD is large in rank; rather, it refers to the fact that the lemma is efficient when BCD has small rank. The identity can be verified by pre/post-multiplication by (A + BCD).
Now let P^-1(t) = A + BCD, with A = P^-1(t-1), B = x(t), C = 1, D = x^T(t):
P(t) = P(t-1) - P(t-1)x(t)x^T(t)P(t-1) / (1 + x^T(t)P(t-1)x(t))
While this looks complex, it means that to update P(t-1) to P(t) we do not need to invert a matrix (O(n^3)); all we need to perform are matrix multiplications and a division by the scalar term (O(n^2)).
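A quick numerical check of the lemma in the rank-one RLS case (illustrative sizes; any symmetric positive definite P(t-1) will do):

% Verify the lemma for B = x, C = 1, D = x'.
n = 4;
A0 = randn(n);
Pprev = inv(A0'*A0 + eye(n));        % an SPD "P(t-1)"
x = randn(n, 1);
P_direct = inv(inv(Pprev) + x*x');   % invert P^-1(t) directly, O(n^3)
P_lemma  = Pprev - (Pprev*(x*x')*Pprev) / (1 + x'*Pprev*x);   % O(n^2)
norm(P_direct - P_lemma)             % ~0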

9 Iterative, Error-based Parameter Update
There are many ways to combine the updates for P and b. The most common way is to define the error variable:
e(t) = y(t) - x^T(t)θ̂(t-1)
Then substituting for y(t) in the b(t) expression gives:
b(t) = b(t-1) + x(t)(x^T(t)θ̂(t-1) + e(t))
Multiplying by P(t) and substituting gives:
θ̂(t) = θ̂(t-1) + P(t)x(t)e(t)
This can be interpreted as adding a correction to the current least squares parameter estimate that depends on:
The size of the (scalar) output error e(t)
The product (a vector) of the covariance matrix and the input, P(t)x(t)
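Collecting the P(t) update and the error-based correction into a single step gives a MATLAB function like the following (a sketch; the function name rls_step is my own, and it would be saved as rls_step.m):

function [theta, P, e] = rls_step(theta, P, x, y)
% One RLS update: predict with theta(t-1), then correct using e(t).
e = y - x'*theta;                     % e(t) = y(t) - x'(t)*theta(t-1)
P = P - (P*(x*x')*P) / (1 + x'*P*x);  % P(t) via the inversion lemma
theta = theta + P*x*e;                % theta(t) = theta(t-1) + P(t)x(t)e(t)
end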

10 Recursive Parameter Estimation Algorithm
1. Form the new input vector x(t) using the new data
2. Form e(t) from the model using e(t) = y(t) - x^T(t)θ̂(t-1)
3. Form P(t) using P(t) = P(t-1) - P(t-1)x(t)x^T(t)P(t-1) / (1 + x^T(t)P(t-1)x(t))
4. Update the least squares estimate: θ̂(t) = θ̂(t-1) + P(t)x(t)e(t)
5. Proceed with the next time step (see the sketch below)
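Putting the steps together, a minimal MATLAB sketch of the full loop, using the rls_step function sketched on slide 9; the plant and sizes here are illustrative assumptions, not the slide 12 circuit:

% Run RLS on synthetic data from a known linear plant.
n = 3;  T = 200;
theta_true = [1; -2; 0.5];            % assumed "true" plant parameters
theta = zeros(n, 1);                  % guessed theta_hat(0)
P = 1000*eye(n);                      % P(0) = p0*I (see next slide)
for t = 1:T
    x = randn(n, 1);                  % step 1: new input vector x(t)
    y = x'*theta_true + 0.1*randn;    % new (noisy) output y(t)
    [theta, P] = rls_step(theta, P, x, y);   % steps 2-4
end
disp(theta')                          % close to theta_true'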

11 Initialising the Iterative Algorithm
To start the algorithm, we need to know θ̂(t-1) and P(t-1). This is typically achieved in two ways:
1. Wait for t-1 time steps and use the data to directly calculate P(t-1) and θ̂(t-1), then use the recursive algorithm from step t.
2. Guess the values of θ̂(0) and set P(0) = p0*I, where p0 = 1000 is usual.
p0 determines the speed of initial adaptation, as it represents the initial covariance matrix. A large value means the initial estimates are uncertain.
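The first option, computing P and θ̂ directly from an initial batch of data before switching to the recursion, might look like this in MATLAB (a sketch; t0 and the data are illustrative):

% Option 1: collect an initial batch of t0 >= n samples, solve directly,
% then continue with rls_step from time t0+1 onwards.
t0 = 10;  n = 3;
X0 = randn(t0, n);  y0 = randn(t0, 1);    % illustrative initial data
P = inv(X0'*X0);                          % P(t0) = (X0'*X0)^-1
theta = P*(X0'*y0);                       % theta_hat(t0) = P(t0)*b(t0)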

12 Example: Electrical Circuit
Initial RLS parameters:
There is a step at zero and 1000 learning iterations where u = 1.
Note that the noise is: randn('state', ); y = y + 0.1*randn(1,1);
There is reasonably fast initial convergence to θ ≈ [ ]^T, and good settling down to the final values, filtering out the effect of the additive measurement noise.
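The circuit model and its true parameter values are not recoverable from this transcript, so the following MATLAB sketch only reproduces the shape of the experiment, assuming a hypothetical first-order ARX model y(t) = a*y(t-1) + b*u(t) + noise with a unit step input and the p0 = 1000 initialisation from slide 11:

% Hypothetical reproduction of the slide 12 experiment (assumed model).
a = 0.8;  b = 0.5;                    % assumed true circuit parameters
T = 1000;  u = ones(T, 1);            % step at zero, 1000 iterations
theta = zeros(2, 1);  P = 1000*eye(2);
yprev = 0;  traj = zeros(T, 2);
for t = 1:T
    y = a*yprev + b*u(t) + 0.1*randn; % noisy plant output
    x = [yprev; u(t)];                % regressor x(t)
    [theta, P] = rls_step(theta, P, x, y);
    traj(t, :) = theta';
    yprev = y;
end
plot(traj)                            % parameter trajectories settle near [a b]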

13 Summary 11: Recursive Least Squares
Recursive Least Squares is an on-line parameter estimation algorithm that directly calculates the optimal, least squares parameter estimates at each time step.
It uses the small matrix inversion lemma to efficiently calculate the new inverse Hessian / parameter covariance matrix P(t), given the previous matrix P(t-1).
The iterative parameter estimate has a familiar form: θ̂(t) = θ̂(t-1) + P(t)x(t)e(t).
To start the algorithm, we need to initialise θ̂(0) and P(0).
In the next lecture we'll look at the concept of persistent excitation and RLS algorithms with forgetting.
RLS is an important part of Kalman filtering, where we simultaneously estimate the states.

14 Laboratory 11: Recursive Least Squares
Theory
1. Verify the small matrix inversion lemma (S8).
2. Verify the derivation of the Recursive Least Squares algorithm (S7-10).
Matlab
1. Implement the RLS algorithm on the electrical circuit (S12) with 10 time steps. Plot the parameter trajectories (S12). Try changing p0 and θ̂(0).
2. Increase the number of time steps to 1000. What are the optimal estimated parameter values (S12)?

