
1 Sherman's Theorem: Fundamental Technology for ODTK. Jim Wright, AGI (www.agi.com)

2 Why? Satisfaction of Sherman's Theorem guarantees that the mean-squared state estimate error in each state estimate component is minimized.

3 Sherman Probability Density

4 Sherman Probability Distribution

5 Notational Convention. Bold symbols denote known quantities (e.g., the optimal state estimate ΔX_{k+1|k+1}, obtained after processing the measurement residual Δy_{k+1|k}). Non-bold symbols denote true, unknown quantities (e.g., the error ΔX_{k+1|k} in the propagated state estimate X_{k+1|k}).

6 Admissible Loss Function L
- L = L(ΔX_{k+1|k}), a scalar-valued function of the state error
- L(ΔX_{k+1|k}) ≥ 0, and L(0) = 0
- L(ΔX_{k+1|k}) is a non-decreasing function of distance from the origin, with lim_{ΔX→0} L(ΔX) = 0
- L(−ΔX_{k+1|k}) = L(ΔX_{k+1|k}) (symmetric)
- Example of interest (squared state error): L(ΔX_{k+1|k}) = (ΔX_{k+1|k})^T (ΔX_{k+1|k})
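As a sketch, the squared-error example above can be checked against the admissibility conditions in a few lines of Python (the function name is mine for illustration, not an ODTK identifier):

```python
import numpy as np

def squared_error_loss(dx):
    """Admissible loss L(dx) = dx^T dx on a state-error vector."""
    dx = np.asarray(dx, dtype=float)
    return float(dx @ dx)

# Check the admissibility properties on a sample error vector.
dx = np.array([0.3, -1.2, 0.5])
assert squared_error_loss(np.zeros(3)) == 0.0             # L(0) = 0
assert squared_error_loss(dx) >= 0.0                      # non-negative
assert squared_error_loss(-dx) == squared_error_loss(dx)  # symmetric
```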

7 Performance Function J
- J(ΔX_{k+1|k}) = E{L(ΔX_{k+1|k})}
- Goal: minimize J(ΔX_{k+1|k}), the mean value of the loss on the unknown state error ΔX_{k+1|k} in the propagated state estimate X_{k+1|k}
- Example (mean-squared state error): J(ΔX_{k+1|k}) = E{(ΔX_{k+1|k})^T (ΔX_{k+1|k})}
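The expectation J can be approximated by Monte Carlo sampling; a minimal sketch (the unit-variance Gaussian error model here is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples of an (unknown) 3-component state error and
# estimate J = E{dx^T dx} as a sample mean of the loss.
samples = rng.normal(0.0, 1.0, size=(100_000, 3))
J = np.mean(np.sum(samples**2, axis=1))

# For a zero-mean Gaussian with unit variance per component,
# E{dx^T dx} = trace(covariance) = 3.
assert abs(J - 3.0) < 0.1
```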

8 Aurora Response to CME

9 Minimize Mean-Squared State Error

10 Sherman's Theorem. Given any admissible loss function L(ΔX_{k+1|k}) and any Sherman conditional probability distribution function F(ξ | Δy_{k+1|k}), the optimal estimate ΔX_{k+1|k+1} of ΔX_{k+1|k} is the conditional mean: ΔX_{k+1|k+1} = E{ΔX_{k+1|k} | Δy_{k+1|k}}
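For the mean-squared-error case the theorem can be illustrated numerically: with a scalar Gaussian state error and residual, the conditional mean reduces to a gain on the residual, and any other gain gives a larger mean-squared error. A sketch under those assumed statistics (all numbers are mine, not ODTK's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma_x, sigma_n = 2.0, 1.0

x = rng.normal(0.0, sigma_x, n)        # true state error
y = x + rng.normal(0.0, sigma_n, n)    # measurement residual

# For jointly Gaussian x, y the conditional mean is a linear gain on y.
k_opt = sigma_x**2 / (sigma_x**2 + sigma_n**2)

def mse(k):
    """Sample mean-squared state error using estimate k * y."""
    return np.mean((x - k * y) ** 2)

# The conditional-mean gain beats any perturbed gain.
assert mse(k_opt) < mse(0.8 * k_opt)
assert mse(k_opt) < mse(1.2 * k_opt)
```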

11 Doob's First Theorem (Mean-Squared State Error). If L(ΔX_{k+1|k}) = (ΔX_{k+1|k})^T (ΔX_{k+1|k}), then the optimal estimate ΔX_{k+1|k+1} of ΔX_{k+1|k} is the conditional mean: ΔX_{k+1|k+1} = E{ΔX_{k+1|k} | Δy_{k+1|k}}. Here the conditional distribution function need not be Sherman; i.e., it need be neither symmetric nor convex.

12 Doob's Second Theorem (Gaussian ΔX_{k+1|k} and Δy_{k+1|k}). If ΔX_{k+1|k} and Δy_{k+1|k} have Gaussian probability distribution functions, then the optimal estimate ΔX_{k+1|k+1} of ΔX_{k+1|k} is the conditional mean: ΔX_{k+1|k+1} = E{ΔX_{k+1|k} | Δy_{k+1|k}}
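In the jointly Gaussian vector case the conditional mean has the closed form E{x | y} = P_xy P_yy^{-1} y (zero means). A minimal sketch verifying this gain by simulation, with an assumed linear measurement model y = Hx + v of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)

# Jointly Gaussian zero-mean x (2-vector) and y (2-vector): y = H x + v.
H = np.array([[1.0, 0.5], [0.0, 1.0]])
P = np.diag([4.0, 1.0])        # covariance of x
R = 0.25 * np.eye(2)           # covariance of v

Pxy = P @ H.T                  # cross covariance of x and y
Pyy = H @ P @ H.T + R          # covariance of y
K = Pxy @ np.linalg.inv(Pyy)   # conditional-mean gain

# Monte Carlo check: K @ y approximates E{x | y}; the resulting
# estimate error is zero mean with covariance P - K Pyy K^T.
x = rng.multivariate_normal(np.zeros(2), P, size=100_000)
v = rng.multivariate_normal(np.zeros(2), R, size=100_000)
y = x @ H.T + v
err = x - y @ K.T
assert np.allclose(err.mean(axis=0), 0.0, atol=0.05)
assert np.allclose(np.cov(err.T), P - K @ Pyy @ K.T, atol=0.1)
```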

13 Sherman's Papers. Sherman proved Sherman's Theorem in his 1955 paper. In his 1958 paper, he demonstrated the equivalence of the optimal performance obtained with the conditional mean in all three cases.

14 Kalman. Kalman's filter measurement update algorithm is derived from the Gaussian probability distribution function. An explicit filter measurement update algorithm is not possible from a Sherman probability distribution function.
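A textbook sketch of Kalman's measurement update, the linear map from measurement residual to state correction (this is the standard formulation, not ODTK's implementation; all variable names are mine):

```python
import numpy as np

def kalman_update(x_prop, P_prop, resid, H, R):
    """Kalman measurement update: map a residual to a state correction.

    x_prop, P_prop : propagated state estimate and covariance (k+1|k)
    resid          : measurement residual Δy_{k+1|k}
    H, R           : measurement partials and measurement-noise covariance
    """
    S = H @ P_prop @ H.T + R               # residual covariance
    K = P_prop @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_upd = x_prop + K @ resid             # updated state (k+1|k+1)
    P_upd = (np.eye(len(x_prop)) - K @ H) @ P_prop
    return x_upd, P_upd

# One scalar range measurement updating a two-component state.
x_upd, P_upd = kalman_update(
    np.zeros(2), np.eye(2), np.array([2.0]),
    np.array([[1.0, 0.0]]), np.array([[1.0]]))
```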

15 Gaussian Hypothesis is Correct. Don't waste your time looking for a Sherman measurement update algorithm: post-filtered measurement residuals are zero-mean Gaussian white noise, and post-filtered state estimate errors are zero-mean Gaussian white noise (due to Kalman's linear map).

16 Measurement System Calibration. Definition from the Gaussian probability density function; radar range spacecraft tracking system example.

17 Gaussian Probability Density: N(μ, R²) = N(0, 1/4)

18 Gaussian Probability Distribution: N(μ, R²) = N(0, 1/4)

19 Calibration (1)
- N(μ, R²) = N(0, [σ/σ_input]²); N(μ, R²) = N(0, 1) ↔ σ_input = σ
- σ_input > σ: histogram peaked relative to N(0, 1)
- Filter gain too large; estimate correction too large
- Mean-squared state error not minimized

20 Calibration (2)
- σ_input < σ: histogram flattened relative to N(0, 1)
- Filter gain too small; estimate correction too small
- Residual editor discards good measurements: information lost
- Mean-squared state error not minimized
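The two miscalibration cases above can be reproduced with a quick simulation: normalize simulated residuals by the filter's input sigma and compare the sample spread with N(0, 1). The residual model and numbers here are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_true = 2.0                  # actual radar-range residual sigma
residuals = rng.normal(0.0, sigma_true, 50_000)

def normalized_std(sigma_input):
    """Std of residuals after normalizing by the filter's input sigma."""
    return float(np.std(residuals / sigma_input))

# Correct calibration: normalized residuals look like N(0, 1).
assert abs(normalized_std(sigma_true) - 1.0) < 0.02
# sigma_input > sigma: histogram peaked (std < 1), filter gain too large.
assert normalized_std(2.0 * sigma_true) < 1.0
# sigma_input < sigma: histogram flattened (std > 1), filter gain too small.
assert normalized_std(0.5 * sigma_true) > 1.0
```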

21 Before Calibration

22 After Calibration

23 Nonlinear Real-Time Multidimensional Estimation: Requirements and Validation; Conclusions and Operations

24 Requirements (1 of 2)
- Adopt Kalman's linear map from measurement residuals to state estimate errors
- Calibrate measurement residuals: identify and model constant mean biases and variances
- Estimate and remove time-varying measurement residual biases in real time
- Process measurements sequentially in time
- Apply Sherman's Theorem anew at each measurement time
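Sequential processing with a conditional-mean update at each measurement time can be sketched for the simplest case, a scalar constant bias observed through noisy measurements (a toy model of my own, not ODTK's filter):

```python
import numpy as np

rng = np.random.default_rng(4)

# Sequential scalar filter: estimate a constant bias from noisy
# measurements, applying the minimum-variance update at each time.
bias_true, sigma = 1.5, 0.5
x, P = 0.0, 10.0                             # initial estimate, variance
for _ in range(200):
    y = bias_true + rng.normal(0.0, sigma)   # new measurement
    K = P / (P + sigma**2)                   # gain
    x = x + K * (y - x)                      # update with the residual
    P = (1.0 - K) * P                        # updated variance

assert abs(x - bias_true) < 0.2              # estimate converges
assert P < 0.01                              # variance shrinks
```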

25 Requirements (2 of 2)
- Specify a complete state estimate structure
- Propagate the state estimate with a rigorous nonlinear propagator
- Apply all known physics appropriately to state estimate propagation and to the associated forcing-function modeling error covariance
- Apply all sensor-dependent random stochastic measurement sequence components to the measurement covariance model

26 Necessary and Sufficient Validation Requirements
- Satisfy rigorous necessary conditions for real-data validation
- Satisfy rigorous sufficient conditions for realistic simulated-data validation

27 Conclusions (1 of 2)
- Measurement residuals produced by optimal estimators are zero-mean Gaussian white residuals
- Zero-mean Gaussian white residuals imply zero-mean Gaussian white state estimate errors (due to the linear map)
- Sherman's Theorem is satisfied with unbiased Gaussian white residuals and Gaussian white state estimate errors

28 Conclusions (2 of 2)
- Sherman's Theorem maps measurement residuals to optimal state estimate error corrections via Kalman's linear measurement update operation
- Sherman's Theorem guarantees that the mean-squared state estimate error in each state estimate component is minimized
- Sherman's Theorem applies to all real-time estimation problems with nonlinear measurement representations and nonlinear state estimate propagations

29 Operational Capabilities
- Calculate realistic state estimate error covariance functions (real-time filter and all smoothers)
- Calculate realistic state estimate accuracy performance assessments (real-time filter and all smoothers)
- Perform autonomous data editing (real-time filter, near-real-time fixed-lag smoother)

