1
Kalman filtering techniques for parameter estimation Jared Barber Department of Mathematics, University of Pittsburgh Work with Ivan Yotov and Mark Tronzo March 17, 2011
2
Outline
Motivation for Kalman filter
Details for Kalman filter
Practical example with linear Kalman filter
Discussion of other filters
– Extended Kalman filter
– Stochastic Collocation Kalman filter
– Karhunen-Loeve SC Kalman filter
Results for simplified NEC model
3
Necrotizing Enterocolitis (NEC) Model
Ten nonlinear PDEs
Four layers
Time-consuming simulations at normal computational refinements
Approximately fifty free parameters
– Diffusion rates
– Growth and death rates
– Interaction rates
4
Maximum Likelihood Estimate
Recall the formula (for normal distributions, maximizing the likelihood is equivalent to this weighted least-squares problem):
θ̂ = argmin_θ Σ_k (d_k − g(θ, t_k))^T Σ^{-1} (d_k − g(θ, t_k))
Disadvantages:
– Considers all times simultaneously: a larger optimization problem, generally slower
– Does not take measurement error and model error into account separately
– To be more efficient, you want derivative information
– Gives only one answer, not a distribution (which would tell you how good your answer is)
– May be hard to parallelize
5
Kalman Filter
Various versions: Linear KF, Extended KF, Ensemble KF, Stochastic Collocation/Unscented KF, Karhunen-Loeve Kalman Filter
Advantages of some of these methods (to a greater or lesser extent):
– Consider each time separately
– Keep track of best estimates for your parameters (means) and your uncertainties (covariances)
– Treat measurement error and model error separately
– Don’t need derivative information
– Easy to parallelize
6
Kalman Filter Picture: Initial State
[Schematic: the model estimate and a measurement]
7
Kalman Filter Picture: Analysis/Adjustment Step
[Schematic: model estimate, measurement, and true solution]
8
Kalman Filter Picture: Prediction/Forecast Step
[Schematic: model estimate, measurement, and true solution]
9
Kalman Filter Picture: Measurement Step
[Schematic: model estimate, measurement, and true solution]
10
Kalman Filter: Analysis/Adjustment Step
The filter advances two things: the mean and the covariance.
Adjusted state vector = model state vector + K_k (measured data − model data)
K_k = f(model covariance, data covariance)
11
Kalman Filter: General Algorithm, Quantities of interest
Measured data = true data plus measurement noise: d_k = H x_k^true + ε_k, with ε_k ~ N(0, R)
Measurement function: H, which maps a state vector to the data it predicts
Optimal “blending factor,” the Kalman gain: K_k
Model/forecast and adjusted state vectors: x_k^f and x_k^a
Forecast/model function: x_{k+1}^f = f(x_k^a)
Best/analyzed model estimate: x_k^a = x_k^f + K_k (d_k − H x_k^f)
12
Kalman Filter: General Algorithm, Uncertainties/Covariances of interest
Prescribed: the measured-data covariance R and the model-error covariance Q
Forecast state covariance: P_{k+1}^f, obtained by pushing P_k^a through the forecast function and adding Q (exact forms below)
Adjusted state covariance: P_k^a = (I − K_k H) P_k^f
13
Kalman Filter: General Algorithm, Kalman Gain
Recall how the model’s state vector is adjusted: x_k^a = x_k^f + K_k (d_k − H x_k^f)
Minimize the sum of the uncertainties associated with the adjusted state (the trace of P_k^a) to find the right blending factor:
K_k = P_k^f H^T (H P_k^f H^T + R)^{-1}
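For reference, a standard derivation of this gain (not reproduced on the slide): the Joseph form of the adjusted covariance holds for any gain, and setting the derivative of its trace to zero picks out the optimal one.

P_k^a = (I − K H) P_k^f (I − K H)^T + K R K^T   (valid for any gain K)
d/dK tr(P_k^a) = −2 (I − K H) P_k^f H^T + 2 K R = 0
⇒ K_k = P_k^f H^T (H P_k^f H^T + R)^{-1}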
14
Linear Kalman Filter
Special case: the forecast function is linear, x_{k+1}^f = A x_k^a.
Covariances are then given by:
P_{k+1}^f = A P_k^a A^T + Q
P_k^a = (I − K_k H) P_k^f
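A minimal sketch of one full linear-KF cycle in Python, assuming only the standard forms above; the arguments A, Q, H, R, and d are placeholders to be supplied by the application.

import numpy as np

def kf_step(x_a, P_a, A, Q, H, R, d):
    # Forecast: propagate the mean and covariance through the linear model.
    x_f = A @ x_a
    P_f = A @ P_a @ A.T + Q
    # Kalman gain: the optimal blending factor between model and data.
    S = H @ P_f @ H.T + R                  # data/innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)
    # Analysis: adjust the forecast toward the measured data.
    x_a_new = x_f + K @ (d - H @ x_f)
    P_a_new = (np.eye(len(x_f)) - K @ H) @ P_f
    return x_a_new, P_a_new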
15
Example: Exact system
The physical system is modeled exactly by:
The exact solution for this physical system is then:
16
Example: Model
Pretend we have a model that we think might work for this system:
We have three unknown variables, y1, y2, and a. We wish to find the value of a that makes the model fit the data.
17
Example: State vector
Define the state vector as: x = (y1, y2, a)^T
Note: we must make a reasonable starting guess for the unknown parameter. Here we guessed 1.5, which is close to the actual value of 1.
18
Example: Data vector
Assume we can actually measure y1 and y2 in our system.
Our measurement function then becomes: d = H x, with H = [1 0 0; 0 1 0]
19
Example: Model/forecast function
Since the parameter a is a quantity that should not change with time, we append da/dt = 0 and rewrite the system of equations as an augmented system in x = (y1, y2, a)^T.
Forward Euler for our system then gives: x_{k+1} = x_k + Δt f(x_k)
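A sketch of this forecast function; the talk’s actual right-hand side is not reproduced here, so the oscillator below (dy1/dt = a·y2, dy2/dt = −a·y1) is an assumed stand-in to be replaced by the real model.

import numpy as np

def forecast(x, dt=0.1):
    # One forward Euler step for the augmented state x = [y1, y2, a].
    y1, y2, a = x
    # The parameter a should not change with time, so da/dt = 0.
    dxdt = np.array([a * y2, -a * y1, 0.0])   # assumed stand-in dynamics
    return x + dt * dxdt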
20
Example: Algorithm step-through: Initialization
Start with the initial state vector:
Start with the initial data vector:
21
Example: Algorithm step-through: Initialization
Take the initial forecast covariance to be the initial uncertainty in the model state vector:
22
Example: Algorithm step-through: Obtaining other covariances
Use the formulas above to find the other state vector covariances:
24
Example: Algorithm step-through: Obtaining the Kalman gain
Find the Kalman gain from the covariances just computed:
25
Example: Algorithm step-through: Obtaining the adjusted state vector
Note that the best estimate lands halfway between the data and the model: this is because the data uncertainties and the model uncertainties are the same size.
26
Example: Algorithm step-through: Finding the adjusted state’s covariance
Note that these uncertainties are smaller than the other covariances: we used both the data and the model to get an estimate with less uncertainty than before.
27
Example: Algorithm step-through
Predict step, assume dt = 0.1:
28
Example: Algorithm step-through
New state covariance matrix:
29
Example: Algorithm step-through
“Measure” the data:
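To tie the step-through together, a self-contained toy run of the full cycle (forecast, measure, adjust); the model matrix, covariances, noise level, and initial guesses below are illustrative stand-ins, not numbers from the talk.

import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
A = np.array([[1.0, dt], [-dt, 1.0]])       # stand-in linear forecast matrix
H = np.eye(2)                               # we measure both components
Q = 0.01 * np.eye(2)                        # model error covariance
R = 0.01 * np.eye(2)                        # measurement error covariance

truth = np.array([1.0, 0.0])                # synthetic “physical system”
x, P = np.array([0.8, 0.2]), np.eye(2)      # initial guess and its uncertainty

for k in range(50):
    truth = A @ truth                                         # advance the true system
    d = H @ truth + rng.multivariate_normal(np.zeros(2), R)   # “measure” data
    x, P = A @ x, A @ P @ A.T + Q                             # prediction/forecast step
    S = H @ P @ H.T + R                                       # data/innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                            # Kalman gain
    x = x + K @ (d - H @ x)                                   # analysis/adjustment step
    P = (np.eye(2) - K @ H) @ P                               # adjusted covariance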
30
Extended Kalman Filter: Nonlinear equations
Special case: the forecast function f is nonlinear, x_{k+1}^f = f(x_k^a).
Covariances are given by linearizing f about the current estimate:
P_{k+1}^f = F_k P_k^a F_k^T + Q, where F_k = ∂f/∂x evaluated at x_k^a
P_k^a = (I − K_k H) P_k^f
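A hedged sketch of the extended-KF forecast step: propagate the mean through the nonlinear f and the covariance through a finite-difference Jacobian (an analytic Jacobian would serve equally well).

import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    # Approximate F = df/dx one column at a time.
    fx = f(x)
    F = np.zeros((len(fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        F[:, j] = (f(xp) - fx) / eps
    return F

def ekf_forecast(f, x_a, P_a, Q):
    # Nonlinear mean propagation, linearized covariance propagation.
    F = numerical_jacobian(f, x_a)
    return f(x_a), F @ P_a @ F.T + Q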
31
Problems with KF/Ext KF?
KF only works for linear problems.
Ext KF:
– Works for mildly nonlinear problems
– Must find and store Jacobians
State vector size >> data vector size:
– Reminder: the gain only needs the covariances involving the data, K_k = P_xd (P_dd)^{-1}
– Don’t need any P_xx’s
– P_xx’s can be big and hard to calculate/keep track of
32
Ensemble Kalman Filter: The Ensemble
Create an ensemble of q state vectors by sampling around the current estimate:
x^(i) = x̄ + η^(i), η^(i) ~ N(0, P), i = 1, …, q
33
Ensemble Kalman Filter: Ensemble Properties
The ensemble carries the mean and covariance information:
x̄ ≈ (1/q) Σ_i x^(i)
P ≈ (1/(q − 1)) Σ_i (x^(i) − x̄)(x^(i) − x̄)^T
34
Ensemble Kalman Filter: New algorithm
Same forecast/adjust cycle as the linear KF, but the exact covariances are replaced by sample covariances computed from the propagated ensemble, and every member is adjusted.
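A minimal sketch of the ensemble analysis step under those assumptions (the “perturbed observations” variant; the names here are illustrative): sample statistics stand in for the exact covariances.

import numpy as np

def enkf_update(X, H, R, d, rng):
    # X is n x q: one column per ensemble member.
    n, q = X.shape
    Xp = X - X.mean(axis=1, keepdims=True)          # state anomalies
    D = H @ X                                       # model data per member
    Dp = D - D.mean(axis=1, keepdims=True)          # data anomalies
    P_xd = Xp @ Dp.T / (q - 1)                      # sample cross-covariance
    P_dd = Dp @ Dp.T / (q - 1) + R                  # sample data covariance
    K = P_xd @ np.linalg.inv(P_dd)                  # Kalman gain
    # Perturbing the observations keeps the analysis spread consistent.
    obs = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, q).T
    return X + K @ (obs - D)

A starting ensemble matching the previous slide can be drawn as X = rng.multivariate_normal(x0, P0, q).T.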
35
Ensemble KF Advantages
Handles nonlinearity (better than Ext KF)
Don’t need Jacobians
Don’t need P_xx’s
36
Ensemble KF Disadvantages
Need many ensemble members to be accurate:
– The means and variances used in the algorithm are within O(1/√q) of the actual values (Monte Carlo integration)
– How accurately they are represented affects the convergence rate and the final error of the parameter estimate
Using many ensemble members requires many model/forecast function evaluations (slow).
Can we use fewer points and obtain more accuracy? Note: we don’t need a lot of accuracy, just enough to get the job done.
37
Stochastic Collocation Kalman Filter: Stochastic Collocation
Consider the expected value of a function g(z) over stochastic space:
E[g(z)] ≈ Σ_i c_i g(z_i)
c_i and z_i are the collocation weights and locations, respectively.
The collocation rule is exact for functions that are linear in the components of z and for normal pdfs.
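A one-dimensional illustration of the rule, using Gauss-Hermite nodes and weights (a common choice of c_i and z_i for normal densities; the talk’s exact rule may differ).

import numpy as np

def expectation(g, N=5):
    # Gauss-Hermite rule: integrates g against exp(-z^2) exactly for
    # polynomials up to degree 2N - 1.
    z, w = np.polynomial.hermite.hermgauss(N)
    # Change of variables so the weight matches a standard normal pdf.
    return (w / np.sqrt(np.pi)) @ g(np.sqrt(2.0) * z)

print(expectation(lambda z: z**2))   # ~1.0, the variance of Z ~ N(0, 1)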
38
Ensemble and Stochastic Collocation Comparison
[Plots: ensemble mean with q ≈ 1000 points vs. stochastic collocation mean with N ≈ 200 points]
39
Stochastic Collocation: Kalman Filter Algorithm
40
Stochastic Collocation Advantages and Disadvantages
Faster than En KF for small numbers of unknowns
Slower than En KF for large numbers
Usually more accurate than En KF
Can handle nonlinearities
Curse of dimensionality for PDEs:
– A 20×20×20 grid has 8000 unknowns, so it needs 2(8000) + 1 = 16001 collocation points
Is there any way to get around this?
41
Stochastic Collocation Kalman Filter with Karhunen-Loeve Expansion
On 3-d grids, the above methods assume the error is independent of location in the computational grid.
Instead, assume the error is spatially correlated, and expand it in the eigenfunctions of its covariance (a Karhunen-Loeve expansion).
Hope: most of the error is captured by the most dominant eigenfunctions.
Idea: keep only the first twenty-five.
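A sketch of that truncation on a 1-d grid, assuming (for illustration only) an exponential covariance kernel; the talk’s kernel and grid are not specified here.

import numpy as np

x = np.linspace(0.0, 1.0, 200)                        # illustrative 1-d grid
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)    # assumed covariance kernel
lam, phi = np.linalg.eigh(C)                          # eigenpairs, ascending order
lam, phi = lam[::-1], phi[:, ::-1]                    # most dominant modes first
m = 25                                                # keep the first 25 modes
xi = np.random.default_rng(0).standard_normal(m)      # KL coefficients ~ N(0, 1)
e = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)              # one truncated error realization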
42
Karhunen-Loeve SC: Kalman Filter Algorithm
43
Simplified Necrotizing Enterocolitis Model: The experiment
[Diagram: an epithelial layer in which a wound ≈150 µm across is created with a pipette]
44
Simplified Necrotizing Enterocolitis Model: The Model and Equation
45
Simplified Necrotizing Enterocolitis: Perfect Simulated Data
[Plots of parameter estimates: En KF; SC KF; KL SC KF]
46
Simplified Necrotizing Enterocolitis: Imperfect Simulated Data
[Plots of parameter estimates: En KF; SC KF; KL SC KF]
47
Simplified Necrotizing Enterocolitis: Real Data
[Plots of parameter estimates: En KF; SC KF; KL SC KF]
48
Are the parameter estimates good? They produce qualitatively correct results.
49
Comparisons
With perfect measurements and a pretty good model, SC does best, then KL, then En.
With imperfect measurements, all are comparable.
With real data, KL fails. Why? Guess: too much error associated with D.
Additional real-data information:
– Gives temporal information about the parameters
– Gives uncertainty estimates
All run significantly faster than the direct optimization method used.