Filtering and State Estimation: Basic Concepts

Massimiliano Vasile, Aerospace Centre of Excellence, Department of Mechanical & Aerospace Engineering, University of Strathclyde

UQ – Basic Ingredients
The overall UQ process is made of three fundamental elements: an uncertainty model, a propagation method, and an inference process.

UQ and State Propagation

Problem
Dynamic process: $\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{p}, t)$. The state vector x and the parameter vector p are not completely known, and the initial condition vector is affected by uncertainty: $\mathbf{x}(t_0) = \mathbf{x}_0$, with $\mathbf{x}_0$ known only through a probability distribution.

Hidden Markov Process
Process transition from the state at stage k-1 to the state at stage k: $\mathbf{x}_k = g(\mathbf{x}_{k-1}, \mathbf{p})$. What is the function g? If the state vector and the parameter vector are stochastic variables, what is the transition probability $p(\mathbf{x}_k \mid \mathbf{x}_{k-1})$? Why hidden? Because we do not directly observe the state vector or the parameter vector.

Measurements
Observation of the system at stage k: $\mathbf{z}_k = h(\mathbf{x}_k, \mathbf{p}) + \mathbf{v}_k$. The main problem is that observations measure a quantity that differs from the state vector and the parameter vector. The measurement $\mathbf{z}_k$ is not exact but is affected by a measurement error $\mathbf{v}_k$, and the statistics of this error depend on our knowledge of the sensor.
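
As a concrete illustration, here is a minimal Python sketch of such a measurement model; the range-only sensor h(x) = ||x|| and the noise variance R are invented for the example, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x, R):
    """Hypothetical range-only sensor: the observed quantity h(x) = ||x||
    is not the state itself, and the reading is corrupted by zero-mean
    Gaussian noise whose variance R encodes our knowledge of the sensor."""
    return np.linalg.norm(x) + rng.normal(0.0, np.sqrt(R))
```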

Propagated vs Measured State
Both the propagated state and the measured state at stage k need to be combined to reconstruct the state and parameter vectors. The relationship between the measured quantity and the hidden quantity is given by a measurement model.

Probability of a State
Start from an a priori distribution for the state, $p(\mathbf{x}_0)$. From the propagation model we can compute the propagated distribution $p(\mathbf{x}_k \mid \mathbf{x}_{k-1})$, and the distribution conditional on the previously measured states via the Chapman-Kolmogorov equation: $p(\mathbf{x}_k \mid \mathbf{z}_{1:k-1}) = \int p(\mathbf{x}_k \mid \mathbf{x}_{k-1})\, p(\mathbf{x}_{k-1} \mid \mathbf{z}_{1:k-1})\, \mathrm{d}\mathbf{x}_{k-1}$.

Probability of a State
At stage k a new measurement $\mathbf{z}_k$ becomes available, and we use Bayes' rule to combine the prior with the measured quantity and infer the posterior: $p(\mathbf{x}_k \mid \mathbf{z}_{1:k}) = p(\mathbf{z}_k \mid \mathbf{x}_k)\, p(\mathbf{x}_k \mid \mathbf{z}_{1:k-1}) / p(\mathbf{z}_k \mid \mathbf{z}_{1:k-1})$. The discount (normalisation) term is given by $p(\mathbf{z}_k \mid \mathbf{z}_{1:k-1}) = \int p(\mathbf{z}_k \mid \mathbf{x}_k)\, p(\mathbf{x}_k \mid \mathbf{z}_{1:k-1})\, \mathrm{d}\mathbf{x}_k$, with likelihood function $p(\mathbf{z}_k \mid \mathbf{x}_k)$.
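
The predict/update cycle of the two slides above can be made concrete with a minimal grid-based Python sketch; the random-walk transition and the Gaussian likelihood below are illustrative choices, not the slides' model:

```python
import numpy as np
from scipy.stats import norm

def bayes_filter_step(prior, grid, z, trans_pdf, lik_pdf):
    """One predict/update cycle of the recursive Bayes filter on a 1-D grid.
    prior: p(x_{k-1} | z_{1:k-1}) evaluated on `grid`
    trans_pdf(xk, xkm1): transition density p(x_k | x_{k-1})
    lik_pdf(z, xk): measurement likelihood p(z_k | x_k)."""
    dx = grid[1] - grid[0]
    # Prediction: Chapman-Kolmogorov integral over the previous posterior
    pred = np.array([np.sum(trans_pdf(xk, grid) * prior) * dx for xk in grid])
    # Update: Bayes' rule, posterior proportional to likelihood times prediction
    post = lik_pdf(z, grid) * pred
    post /= np.sum(post) * dx   # normalisation = the "discount" term p(z_k | z_{1:k-1})
    return post

# Illustrative usage: random-walk transition, direct noisy observation
grid = np.linspace(-5.0, 5.0, 401)
prior = norm.pdf(grid, 0.0, 1.0)
trans = lambda xk, xkm1: norm.pdf(xk, xkm1, 0.5)
lik = lambda z, xk: norm.pdf(z, xk, 0.8)
post = bayes_filter_step(prior, grid, z=1.2, trans_pdf=trans, lik_pdf=lik)
```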

Monte Carlo Simulations (MC) (not an uncertainty propagation method)
Build significant statistics by collecting a sufficient number of outcomes of the simulations. Commonly used to solve multidimensional integrals. By the central limit theorem, the expectation E[X] of a random variable X belongs with probability $\varepsilon$ to the interval $[\bar{X}_N - z_\varepsilon \sigma/\sqrt{N},\ \bar{X}_N + z_\varepsilon \sigma/\sqrt{N}]$, where $\bar{X}_N$ is the sample mean over N samples. This does not say a) how the distribution converges and b) whether the distribution is unimodal. The mean value might not even 'exist'! The hypothesis on the generation of the samples is very important. Yields both the spatial distribution and the probability distribution.
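
A minimal numpy sketch of the CLT-based interval above; the quantity of interest y(x) and the sample size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.normal(size=N)              # samples of the uncertain input
y = np.sin(x) + 0.1 * x**2          # hypothetical quantity of interest
mean = y.mean()
# CLT-based 95% interval half-width: z * sigma / sqrt(N)
half_width = 1.96 * y.std(ddof=1) / np.sqrt(N)
print(f"E[y] ~ {mean:.4f} +/- {half_width:.4f}")
```

Note that this bound presumes a finite variance; for heavy-tailed outputs (the "mean might not exist" caveat) the interval is meaningless.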

Polynomial Chaos Expansion
The response function is represented as an expansion of the quantity of interest: $f(\boldsymbol{\xi}) \approx \sum_{i=0}^{P} c_i \Psi_i(\boldsymbol{\xi})$, with basis functions $\Psi_i$ chosen to be orthogonal with respect to the input distribution. The coefficients can be recovered with a least-squares approach or by exploiting the orthogonality of the basis functions: $c_i = \langle f \Psi_i \rangle / \langle \Psi_i^2 \rangle$. Statistical moments then have analytical expressions: $\mathbb{E}[f] = c_0$ and $\mathrm{Var}[f] = \sum_{i \geq 1} c_i^2 \langle \Psi_i^2 \rangle$.
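
A sketch of the non-intrusive least-squares recovery for a Gaussian input, using probabilists' Hermite polynomials as the orthogonal basis; the response y = exp(0.3 ξ) and the degree are illustrative:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)
xi = rng.normal(size=2000)          # standard-normal samples of the germ
y = np.exp(0.3 * xi)                # hypothetical response function
deg = 5
Psi = hermevander(xi, deg)          # He_0..He_5, orthogonal under N(0,1)
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)   # least-squares coefficients
# Moments from orthogonality: E[He_n He_m] = n! * delta_nm
norms = np.array([math.factorial(n) for n in range(deg + 1)])
mean = c[0]                                   # E[f] = c_0
var = np.sum(c[1:] ** 2 * norms[1:])          # Var[f] = sum c_n^2 n!
```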

Example: disposal trajectory from L2 to the Moon
Vetrisano and Vasile, "Analysis of Spacecraft Disposal Solutions from LPO to the Moon with High Order Polynomial Expansions", ASR 2016. Trajectory from L2 of the Earth-Sun system to the Moon in a full-ephemeris model. Monte Carlo simulation with 1e6 samples vs. PCE of degree 6 with 26,000 samples.

Gaussian Mixture
Introduced by Garmier et al. and by Terejanu et al. in 2008 for uncertainty propagation, and then developed further by Giza et al. and DeMars et al. with specific application to space debris. The idea is to represent the distribution of the quantity of interest as a weighted sum of Gaussians: $p(\mathbf{x}) \approx \sum_{i=1}^{N} w_i\, \mathcal{N}(\mathbf{x};\, \mathbf{m}_i, \mathbf{P}_i)$. The covariance and mean of each component are recovered from the updating step of an Unscented Kalman Filter.
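
A minimal 1-D illustration of representing a pdf as a weighted sum of Gaussians; the weights, means, and standard deviations are invented for the example:

```python
import numpy as np
from scipy.stats import norm

w = np.array([0.5, 0.3, 0.2])       # weights sum to one
mu = np.array([0.0, 2.0, -1.5])
sd = np.array([1.0, 0.7, 1.4])

def gm_pdf(x):
    """Weighted sum of Gaussians representing the state pdf."""
    return np.sum(w[:, None] * norm.pdf(x, mu[:, None], sd[:, None]), axis=0)

# Overall moments via the laws of total expectation and total variance
mix_mean = np.sum(w * mu)
mix_var = np.sum(w * (sd**2 + mu**2)) - mix_mean**2
```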

From Gaussian Mixture to Kriging Models
One can use a weighted sum of kernels to build a surrogate of the PDF of the quantity of interest using a Kriging-type approach. The hyper-parameters of the Kriging model are then derived from the solution of a maximum likelihood problem.
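
A schematic sketch of that maximum-likelihood step: the lengthscale θ of a Gaussian kernel is fitted by minimising the negative log marginal likelihood of a zero-mean Gaussian-process (Kriging) surrogate. The training data and kernel form are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=20)
y = np.exp(-X**2)                   # hypothetical pdf values to surrogate

def neg_log_likelihood(log_theta):
    """Negative log marginal likelihood of a zero-mean GP with a
    Gaussian kernel k(x, x') = exp(-theta (x - x')^2)."""
    theta = np.exp(log_theta)
    K = np.exp(-theta * (X[:, None] - X[None, :])**2) + 1e-8 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))   # + 0.5 log|K|

res = minimize_scalar(neg_log_likelihood, bounds=(-5, 5), method="bounded")
theta_ml = np.exp(res.x)            # maximum-likelihood hyper-parameter
```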

HDMR
High-Dimensional Model Representation (HDMR) allows for a direct, cheap reconstruction of the quantity of interest. HDMR decomposes the function response f(x) into a sum of the contributions given by each variable and each of their interactions through the model. If one considers the contribution of each variable as a variation with respect to an anchored value $f_c$ (anchored-HDMR), the decomposition becomes $f(\mathbf{x}) = f_c + \sum_i f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \dots$ Important point: as for PCE, the decomposition allows for the identification of the interdependency among variables and of the order of the dependency of the quantity of interest on the uncertain parameters.
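
A minimal sketch of the first-order anchored (cut-)HDMR reconstruction around an anchor point c; the test function is illustrative:

```python
import numpy as np

def cut_hdmr_first_order(f, c):
    """First-order anchored (cut-)HDMR of f around the anchor c:
    f(x) ~ f0 + sum_i f_i(x_i), with f0 = f(c) and
    f_i(x_i) = f(c_1, ..., x_i, ..., c_n) - f0."""
    c = np.asarray(c, dtype=float)
    f0 = f(c)

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        total = f0
        for i in range(len(c)):
            xi = c.copy()
            xi[i] = x[i]            # vary one variable, hold the rest at the anchor
            total += f(xi) - f0
        return total

    return surrogate

# Exact for additive functions; the 0.1*x0*x1 interaction is what it misses
f = lambda x: x[0]**2 + np.sin(x[1]) + 0.1 * x[0] * x[1]
s = cut_hdmr_first_order(f, c=np.zeros(2))
print(f(np.array([0.5, 1.0])), s(np.array([0.5, 1.0])))
```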

Intrusive Polynomial Chaos Expansions
Embed the Polynomial Chaos Expansion $\mathbf{x}(t, \boldsymbol{\xi}) = \sum_i \mathbf{x}_i(t) \Psi_i(\boldsymbol{\xi})$ directly in the differential equations. After embedding the expansion, one multiplies by $\Psi_k$ and exploits the orthogonality of the basis with respect to the probability distribution (a Galerkin projection). The result is n deterministic differential equations to be integrated for the coefficients. Yields both the spatial distribution and the probability distribution.
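
To make the projection concrete, here is a sketch of the Galerkin-projected system for the toy scalar problem dx/dt = -a x with an uncertain coefficient a = a0 + σξ, ξ ~ N(0,1); this toy problem is our illustration, not the slides' dynamics:

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

# Expand x(t, xi) = sum_k x_k(t) He_k(xi) (probabilists' Hermite).
# Projecting on He_k and using xi*He_k = He_{k+1} + k*He_{k-1} gives
#   dx_k/dt = -a0*x_k - sig*x_{k-1} - sig*(k+1)*x_{k+1}
a0, sig, P = 1.0, 0.2, 8

def rhs(t, x):
    dx = -a0 * x
    dx[1:] -= sig * x[:-1]                         # coupling to x_{k-1}
    dx[:-1] -= sig * np.arange(1, P + 1) * x[1:]   # coupling to (k+1)*x_{k+1}
    return dx

x0 = np.zeros(P + 1)
x0[0] = 1.0                                        # deterministic x(0) = 1
sol = solve_ivp(rhs, (0.0, 2.0), x0, rtol=1e-10, atol=1e-12)
xk = sol.y[:, -1]
norms = np.array([math.factorial(k) for k in range(P + 1)])
mean, var = xk[0], np.sum(xk[1:] ** 2 * norms[1:])  # moments from the coefficients
```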

State Transition Tensor
The local dynamics are described by applying a Taylor series expansion to the solution flow Φ from t0 to t. State transition tensors (STTs) are the higher-order partials of the solution flow. A set of non-linear dynamic equations governs the STTs (here up to order 3), and analytical expressions follow for the mean m and covariance matrix P of a Gaussian distribution.
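
The order-1 special case of this analytical moment mapping is the familiar state-transition-matrix propagation, sketched below for a hypothetical undamped oscillator; higher-order STTs add Taylor terms to the same mapping:

```python
import numpy as np

# First-order STT (state transition matrix) of x'' = -w^2 x over time t
w, t = 2.0, 0.5
Phi = np.array([[np.cos(w * t),      np.sin(w * t) / w],
                [-w * np.sin(w * t), np.cos(w * t)]])
m0 = np.array([1.0, 0.0])
P0 = np.diag([0.01, 0.04])
m_t = Phi @ m0          # mean maps through the flow
P_t = Phi @ P0 @ Phi.T  # covariance maps through the first-order STT
```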

Polynomial Algebra

Kalman Filter

Linear Model with Gaussian Noise
Linear models for state and measurements: $\mathbf{x}_k = A \mathbf{x}_{k-1} + \mathbf{w}_{k-1}$ and $\mathbf{z}_k = H \mathbf{x}_k + \mathbf{v}_k$, with $\mathbf{w} \sim \mathcal{N}(0, Q)$ and $\mathbf{v} \sim \mathcal{N}(0, R)$. All distributions are Gaussian and remain Gaussian under these linear maps.

Linear Model with Gaussian Noise
The Kalman gain derives from Bayesian inference: $K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}$, combining the predicted covariance $P_k^-$ with the measurement noise covariance R.
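
A minimal predict/update sketch of the resulting filter; the matrix names follow the conventional A, H, Q, R notation, which the slides leave implicit:

```python
import numpy as np

def kf_predict(m, P, A, Q):
    """Propagate the Gaussian belief through x_k = A x_{k-1} + w, w ~ N(0, Q)."""
    return A @ m, A @ P @ A.T + Q

def kf_update(m, P, z, H, R):
    """Bayesian update with measurement z = H x + v, v ~ N(0, R)."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain (Bayesian inference step)
    m_post = m + K @ (z - H @ m)            # posterior mean
    P_post = (np.eye(len(m)) - K @ H) @ P   # posterior covariance
    return m_post, P_post
```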

Particle Filter

Nonlinear Model with Generic Distribution
Consider a discrete posterior given by Ns samples: $p(\mathbf{x}_k \mid \mathbf{z}_{1:k}) \approx \sum_{i=1}^{N_s} w_k^i\, \delta(\mathbf{x}_k - \mathbf{x}_k^i)$. The weights give us the probability of each sample. The question is: how do we update the weights after propagation and inference?

Sampling is a Problem
Importance sampling: we can use an auxiliary distribution q to generate samples and then calculate the weights using the actual distribution p, i.e. $w^i \propto p(\mathbf{x}^i)/q(\mathbf{x}^i)$. The problem is that the sampling process might degenerate and not be sufficient to capture the whole distribution.
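
A sketch of one sequential importance resampling (SIR) step with a degeneracy guard based on the effective sample size; the propagate and likelihood callbacks are placeholders for a concrete model:

```python
import numpy as np

rng = np.random.default_rng(2)

def sir_step(particles, weights, z, propagate, likelihood):
    """One SIR particle-filter step, using the transition density as the
    importance distribution q."""
    particles = propagate(particles, rng)          # sample x_k ~ p(x_k | x_{k-1})
    weights = weights * likelihood(z, particles)   # reweight by p(z_k | x_k)
    weights /= weights.sum()
    # Resample when the effective sample size collapses (degeneracy guard)
    n_eff = 1.0 / np.sum(weights**2)
    n = len(particles)
    if n_eff < 0.5 * n:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```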

Unscented Kalman Filter

Unscented Transformation (or Transform)
Used for data fusion and state estimation, starting from the prior (joint) distribution of states and measurements.

Unscented Transformation and UQ
Builds the covariance matrix of states and measurements assuming known covariances of the process noise Q and of the measurement noise R (linear Bayesian model hypothesis).

Unscented Transformation and UQ
Computes the cross-correlation of states and measurements and builds the posterior estimate based on the new measurement y. State estimation: $\hat{\mathbf{x}}^+ = \hat{\mathbf{x}}^- + K(\mathbf{y} - \hat{\mathbf{y}}^-)$, with gain $K = P_{xy} P_{yy}^{-1}$. Posterior distribution (uncertainty): $P^+ = P^- - K P_{yy} K^T$. The maximum estimated uncertainty is read from this posterior covariance.
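
A sketch of the unscented transform itself, which propagates a mean and covariance through a nonlinear function via sigma points; the scaled Julier-Uhlmann weights below use common default parameters:

```python
import numpy as np

def unscented_transform(m, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean m and covariance P through a nonlinear function f
    using 2n+1 sigma points (scaled unscented transform)."""
    n = len(m)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    sigmas = np.vstack([m, m + S.T, m - S.T])      # columns of S added/subtracted
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigmas])           # push sigma points through f
    m_y = Wm @ Y                                   # weighted mean
    D = Y - m_y
    P_y = (Wc[:, None] * D).T @ D                  # weighted outer products
    return m_y, P_y

# Illustrative usage: range measurement of a 2-D state
m_y, P_y = unscented_transform(np.array([1.0, 0.0]), np.diag([0.1, 0.1]),
                               lambda s: np.array([np.linalg.norm(s)]))
```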

Handling the unknown at the edge of tomorrow http://utopiae.eu twitter.com/utopiae_network info@utopiae.eu