
Slide 1: Sherman's Theorem, Fundamental Technology for ODTK (Jim Wright, AGI)

Slide 2: Why? Satisfaction of Sherman's Theorem guarantees that the mean-squared state estimate error on each state estimate component is minimized.

Slide 3: Sherman Probability Density (figure)

Slide 4: Sherman Probability Distribution (figure)

Slide 5: Notational Convention
Bold symbols denote known quantities (e.g., the optimal state estimate ΔX_{k+1|k+1}, obtained after processing the measurement residual Δy_{k+1|k}).
Non-bold symbols denote true, unknown quantities (e.g., the error ΔX_{k+1|k} in the propagated state estimate X_{k+1|k}).

Slide 6: Admissible Loss Function L
L = L(ΔX_{k+1|k}), a scalar-valued function of the state error
L(ΔX_{k+1|k}) ≥ 0; L(0) = 0
L(ΔX_{k+1|k}) is a non-decreasing function of distance from the origin, with lim_{ΔX→0} L(ΔX) = 0
L(-ΔX_{k+1|k}) = L(ΔX_{k+1|k})
Example of interest (squared state error): L(ΔX_{k+1|k}) = (ΔX_{k+1|k})^T (ΔX_{k+1|k})

Slide 7: Performance Function J(ΔX_{k+1|k})
J(ΔX_{k+1|k}) = E{L(ΔX_{k+1|k})}
Goal: minimize J(ΔX_{k+1|k}), the mean value of the loss on the unknown state error ΔX_{k+1|k} in the propagated state estimate X_{k+1|k}.
Example (mean-squared state error): J(ΔX_{k+1|k}) = E{(ΔX_{k+1|k})^T (ΔX_{k+1|k})}
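For the squared-error example, J can be checked numerically: for a zero-mean error with covariance P, E{ΔX^T ΔX} = trace(P). The Python sketch below is illustrative only; the state dimension, covariance, and sample count are assumptions, not from the slides.

```python
import numpy as np

def loss(dx):
    """Admissible squared-error loss L(dx) = dx^T dx: scalar, nonnegative, symmetric."""
    dx = np.asarray(dx, dtype=float)
    return float(dx @ dx)

# Illustrative assumption: 3-component zero-mean state error with covariance P.
rng = np.random.default_rng(0)
P = np.diag([4.0, 1.0, 0.25])
samples = rng.multivariate_normal(mean=np.zeros(3), cov=P, size=100_000)

# Monte Carlo estimate of the performance function J = E{L(dX)}.
J = np.mean([loss(dx) for dx in samples])
print(J, np.trace(P))  # the two values should nearly agree
```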

Slide 8: Aurora Response to CME (photo)

Slide 9: Minimize Mean-Squared State Error

Slide 10: Sherman's Theorem
Given any admissible loss function L(ΔX_{k+1|k}) and any Sherman conditional probability distribution function F(ξ | Δy_{k+1|k}), the optimal estimate ΔX_{k+1|k+1} of ΔX_{k+1|k} is the conditional mean:
ΔX_{k+1|k+1} = E{ΔX_{k+1|k} | Δy_{k+1|k}}
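The theorem's content can be illustrated numerically: among candidate corrections, the expected loss is smallest at the conditional mean. Below is a one-dimensional Python sketch under assumed jointly Gaussian statistics (all numbers illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumption: zero-mean jointly Gaussian state error x and residual y.
var_x, var_y, cov_xy = 2.0, 1.5, 1.2
x, y = rng.multivariate_normal([0.0, 0.0],
                               [[var_x, cov_xy], [cov_xy, var_y]], 200_000).T

gain = cov_xy / var_y  # conditional mean for this model: E{x | y} = gain * y
for scale in (0.5, 0.9, 1.0, 1.1, 1.5):
    mse = np.mean((x - scale * gain * y) ** 2)
    print(f"scale {scale}: mean squared error {mse:.4f}")
# The mean-squared error is smallest at scale 1.0, i.e., at the conditional mean.
```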

Slide 11: Doob's First Theorem: Mean-Squared State Error
If L(ΔX_{k+1|k}) = (ΔX_{k+1|k})^T (ΔX_{k+1|k}), then the optimal estimate ΔX_{k+1|k+1} of ΔX_{k+1|k} is the conditional mean:
ΔX_{k+1|k+1} = E{ΔX_{k+1|k} | Δy_{k+1|k}}
The conditional distribution function need not be Sherman; i.e., it need be neither symmetric nor convex.

Slide 12: Doob's Second Theorem: Gaussian ΔX_{k+1|k} and Δy_{k+1|k}
If ΔX_{k+1|k} and Δy_{k+1|k} have Gaussian probability distribution functions, then the optimal estimate ΔX_{k+1|k+1} of ΔX_{k+1|k} is the conditional mean:
ΔX_{k+1|k+1} = E{ΔX_{k+1|k} | Δy_{k+1|k}}
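In the jointly Gaussian case the conditional mean has the standard closed form E{x | y} = μ_x + P_xy P_yy^{-1} (y - μ_y), which is exactly the structure the Kalman measurement update exploits. A minimal sketch (generic textbook formula; the symbol names are assumptions, not slide notation):

```python
import numpy as np

def gaussian_conditional_mean(mu_x, mu_y, P_xy, P_yy, y):
    """E{x | y} for jointly Gaussian x and y: mu_x + P_xy P_yy^{-1} (y - mu_y)."""
    return mu_x + P_xy @ np.linalg.solve(P_yy, y - mu_y)
```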

Slide 13: Sherman's Papers
Sherman proved Sherman's Theorem in his 1955 paper. He demonstrated the equivalence of optimal performance using the conditional mean in all three cases in his 1958 paper.

Slide 14: Kalman
Kalman's filter measurement update algorithm is derived from the Gaussian probability distribution function.
An explicit filter measurement update algorithm is not possible from the Sherman probability distribution function.
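For reference, the explicit Gaussian-derived measurement update takes the standard textbook form below. This is a generic sketch, not ODTK's implementation; the matrices H and R and the state shapes are assumptions.

```python
import numpy as np

def kalman_measurement_update(x, P, z, H, R):
    """Standard Kalman measurement update (generic textbook form).

    x, P : propagated state estimate and its covariance
    z    : measurement; H : measurement matrix; R : measurement noise covariance
    """
    dy = z - H @ x                     # measurement residual
    S = H @ P @ H.T + R                # residual covariance
    K = P @ H.T @ np.linalg.inv(S)     # gain: linear map from residual to state correction
    x_new = x + K @ dy                 # conditional-mean state estimate correction
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```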

Slide 15: The Gaussian Hypothesis Is Correct
Don't waste your time looking for a Sherman measurement update algorithm.
Post-filtered measurement residuals are zero-mean Gaussian white noise.
Post-filtered state estimate errors are zero-mean Gaussian white noise (due to Kalman's linear map).

Slide 16: Measurement System Calibration
Definition derived from the Gaussian probability density function
Example: radar range measurements from a spacecraft tracking system

Slide 17: Gaussian Probability Density, N(μ, R²) = N(0, 1/4) (figure)

Slide 18: Gaussian Probability Distribution, N(μ, R²) = N(0, 1/4) (figure)

Slide 19: Calibration (1)
N(μ, R²) = N(0, [σ/σ_input]²)
N(μ, R²) = N(0, 1) ↔ σ_input = σ
If σ_input > σ:
Histogram peaked relative to N(0,1)
Filter gain too large
Estimate correction too large
Mean-squared state error not minimized

Slide 20: Calibration (2)
If σ_input < σ:
Histogram flattened relative to N(0,1)
Filter gain too small
Estimate correction too small
Residual editor discards good measurements, so information is lost
Mean-squared state error not minimized
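A calibration check along these lines normalizes each residual by its input sigma and compares the sample spread with N(0,1). The sketch below is illustrative; the residual array and sigma are assumed inputs, not ODTK quantities, and the comments restate only the peaked/flattened diagnosis from the slides.

```python
import numpy as np

def calibration_ratio(residuals, sigma_input):
    """Sample standard deviation of residuals normalized by the input sigma.

    ratio ~ 1 : calibrated; normalized residuals consistent with N(0,1)
    ratio < 1 : histogram peaked relative to N(0,1)    (sigma_input > sigma)
    ratio > 1 : histogram flattened relative to N(0,1) (sigma_input < sigma)
    """
    normalized = np.asarray(residuals, dtype=float) / sigma_input
    return normalized.std(ddof=1)
```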

Slide 21: Before Calibration (figure)

Slide 22: After Calibration (figure)

Slide 23: Nonlinear Real-Time Multidimensional Estimation
Topics: Requirements, Validation, Conclusions, Operations

Slide 24: Requirements (1 of 2)
Adopt Kalman's linear map from measurement residuals to state estimate errors
Measurement residuals must be calibrated: identify and model constant mean biases and variances; estimate and remove time-varying measurement residual biases in real time
Process measurements sequentially with time
Apply Sherman's Theorem anew at each measurement time
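Taken together, these requirements describe a sequential filter loop: propagate, form the calibrated residual, update, repeat. Below is a self-contained toy sketch in Python, with one-dimensional linear models standing in for ODTK's nonlinear propagation and measurement models (all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-dimensional stand-ins (illustrative assumptions, not ODTK models).
F, Q = 1.0, 0.01         # state transition and process noise variance
H, R = 1.0, 0.25         # measurement map and calibrated measurement variance
x, P = 0.0, 1.0          # initial state estimate and variance
truth = 1.0

for _ in range(50):                          # process measurements sequentially with time
    x, P = F * x, F * P * F + Q              # propagate state estimate and covariance
    z = truth + rng.normal(0.0, np.sqrt(R))  # simulated calibrated measurement
    dy = z - H * x                           # measurement residual
    K = P * H / (H * P * H + R)              # gain: linear map residual -> state correction
    x, P = x + K * dy, (1.0 - K * H) * P     # conditional-mean update, applied anew
print(x, P)
```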

Slide 25: Requirements (2 of 2)
Specify a complete state estimate structure
Propagate the state estimate with a rigorous nonlinear propagator
Apply all known physics appropriately to state estimate propagation and to the associated forcing-function modeling error covariance
Apply all sensor-dependent random stochastic measurement sequence components to the measurement covariance model

Slide 26: Necessary & Sufficient Validation Requirements
Satisfy rigorous necessary conditions for real data validation
Satisfy rigorous sufficient conditions for realistic simulated data validation

Slide 27: Conclusions (1 of 2)
Measurement residuals produced by optimal estimators are zero-mean Gaussian white residuals
Zero-mean Gaussian white residuals imply zero-mean Gaussian white state estimate errors (due to the linear map)
Sherman's Theorem is satisfied with unbiased Gaussian white residuals and Gaussian white state estimate errors

Slide 28: Conclusions (2 of 2)
Sherman's Theorem maps measurement residuals to optimal state estimate error corrections via Kalman's linear measurement update operation
Sherman's Theorem guarantees that the mean-squared state estimate error on each state estimate component is minimized
Sherman's Theorem applies to all real-time estimation problems that have nonlinear measurement representations and nonlinear state estimate propagations

Slide 29: Operational Capabilities
Calculate realistic state estimate error covariance functions (real-time filter and all smoothers)
Calculate realistic state estimate accuracy performance assessments (real-time filter and all smoothers)
Perform autonomous data editing (real-time filter and near-real-time fixed-lag smoother)