Maximum a posteriori sequence estimation using Monte Carlo particle filters
S. J. Godsill, A. Doucet, and M. West
Annals of the Institute of Statistical Mathematics, Vol. 52, No. 1
조 동 연

Abstract
- Performing maximum a posteriori (MAP) sequence estimation in non-linear, non-Gaussian dynamic models
- A particle cloud representation of the filtering distribution, which evolves through time using importance sampling and resampling ideas
- MAP sequence estimation is then performed using a classical dynamic programming technique applied to the discretised version of the state space.

Introduction
Standard Markovian state-space model
- x_t ∈ R^{n_x}: unobserved states of the system
- y_t ∈ R^{n_y}: observations made over some time interval
- x_t | x_{t-1} ~ f(x_t | x_{t-1}) and y_t | x_t ~ g(y_t | x_t)
- f(.|.) and g(.|.): pre-specified densities which may be non-Gaussian and involve non-linearity
- f(x_1 | x_0) ≜ f(x_1): convention for the initial state distribution
- x_{1:t}, y_{1:t}: collections of states and observations up to time t

Joint distribution of states and observations
- Markov assumptions
- Recursion for this joint distribution
- Computing this in closed form is possible only for linear Gaussian models, using the Kalman filter-smoother, and for finite state-space hidden Markov models.
- Otherwise, approximate numerical techniques are needed.
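The equations on this slide did not survive the transcript. Under the Markov assumptions above, a standard reconstruction (not a verbatim copy of the slide) of the joint density and its recursion is:

```latex
p(x_{1:t}, y_{1:t}) = f(x_1)\, g(y_1 \mid x_1) \prod_{k=2}^{t} f(x_k \mid x_{k-1})\, g(y_k \mid x_k),
\qquad
p(x_{1:t}, y_{1:t}) = p(x_{1:t-1}, y_{1:t-1})\, f(x_t \mid x_{t-1})\, g(y_t \mid x_t).
```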

Monte Carlo particle filters
- Randomized adaptive grid approximation in which the particles evolve randomly in time according to a simulation-based rule
- δ_{x_0}(dx): the Dirac delta measure located at x_0
- w_t^(i): the weight attached to particle x_{1:t}^(i), with w_t^(i) ≥ 0 and Σ_i w_t^(i) = 1
- Particles at time t can be updated efficiently to particles at time t+1 using sequential importance sampling and resampling.
- Severe depletion of samples over time: only a few distinct paths remain in the early part of the trajectories.
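To make the update step concrete, here is a minimal bootstrap-filter sketch (sequential importance sampling with multinomial resampling). It is an illustration only, not the paper's code; `prior_sample`, `transition_sample`, and `obs_loglik` are hypothetical stand-ins for f(x_1), f(x_t | x_{t-1}), and g(y_t | x_t).

```python
import numpy as np

def particle_filter(y, n_particles, prior_sample, transition_sample, obs_loglik, rng=None):
    """Bootstrap particle filter: sequential importance sampling with resampling.

    Returns the surviving particle trajectories (N x T) and the final
    normalized weights. The three callables are user-supplied stand-ins for
    f(x_1), f(x_t | x_{t-1}) and g(y_t | x_t).
    """
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    paths = np.zeros((n_particles, T))

    # t = 1: sample from the prior and weight by the first observation
    x = prior_sample(n_particles, rng)
    logw = obs_loglik(y[0], x)
    paths[:, 0] = x

    for t in range(1, T):
        # Resample whole trajectories in proportion to the weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        paths = paths[idx]

        # Propagate through the transition density (the "blind" bootstrap proposal)
        x = transition_sample(paths[:, t - 1], rng)
        paths[:, t] = x

        # Reweight with the new observation
        logw = obs_loglik(y[t], x)

    w = np.exp(logw - logw.max())
    return paths, w / w.sum()
```

Resampling whole trajectories is what produces the depletion mentioned above: many rows of `paths` end up sharing the same early history.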

MAP estimation
- Estimation of the MAP sequence
- Marginal fixed-lag MAP sequence
- For many applications, it is important to capture the sequence-specific interactions of the states over time in order to make successful inferences.
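Written out, the two criteria are (a plausible reconstruction of the slide's missing equations):

```latex
\hat{x}^{\mathrm{MAP}}_{1:t} = \arg\max_{x_{1:t}} p(x_{1:t} \mid y_{1:t}),
\qquad
\hat{x}^{\mathrm{MAP}}_{t-L+1:t} = \arg\max_{x_{t-L+1:t}} p(x_{t-L+1:t} \mid y_{1:t}).
```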

Maximum a posteriori sequence estimation
Standard methods
- Simple sequential optimization method: sample (sequentially in time) some paths according to a distribution q(x_{1:t}) and keep the best one.
- The choice of q(x_{1:t}) has a huge influence on the performance of the algorithm, and the construction of an "optimal" distribution q(x_{1:t}) is clearly very difficult.
- A reasonable choice for q(x_{1:t}) is the posterior distribution p(x_{1:t} | y_{1:t}), or any distribution that has the same global maxima.

- A clear advantage of this method: it is very easy to implement and has computational complexity and storage requirements of order O(NT).
- A severe drawback: because of the degeneracy phenomenon, the performance of this estimate gets worse as time t increases. A huge number of trajectories is required for reasonable performance, especially for large datasets.
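A sketch of the standard method described above, assuming trajectories kept by a particle filter such as the earlier sketch; the log-density helpers are hypothetical stand-ins for log f and log g, not the paper's code.

```python
import numpy as np

def standard_map_estimate(y, paths, prior_loglik, trans_loglik, obs_loglik):
    """Pick the sampled trajectory maximizing log p(x_{1:t}, y_{1:t}).

    `paths` is the (N x T) array of trajectories retained by the particle
    filter; the callables are stand-ins for log f(x_1), log f(x_t | x_{t-1})
    and log g(y_t | x_t).
    """
    T = paths.shape[1]
    logp = prior_loglik(paths[:, 0]) + obs_loglik(y[0], paths[:, 0])
    for t in range(1, T):
        logp += trans_loglik(paths[:, t], paths[:, t - 1]) + obs_loglik(y[t], paths[:, t])
    return paths[np.argmax(logp)]  # O(NT) work and storage overall
```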

Optimization via dynamic programming
- Maximization of p(x_{1:t} | y_{1:t})
- The function to maximize is additive (in the log domain).
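"Additive" refers to the log-posterior, which up to an additive constant decomposes as (standard identity, reconstructed here rather than copied from the slide):

```latex
\log p(x_{1:t} \mid y_{1:t}) = \mathrm{const} + \sum_{k=1}^{t} \left[ \log f(x_k \mid x_{k-1}) + \log g(y_k \mid x_k) \right],
\qquad f(x_1 \mid x_0) \triangleq f(x_1).
```

This additivity is exactly what makes dynamic programming applicable.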

- Viterbi algorithm
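A sketch of the Viterbi recursion run over the particle grid: the particle locations produced by the filter at each time step serve as the discretized state space, and dynamic programming selects the highest-posterior path through them. Helper names follow the earlier sketches and are assumptions, not the paper's code.

```python
import numpy as np

def viterbi_map_sequence(y, particles, prior_loglik, trans_loglik, obs_loglik):
    """Dynamic programming over the particle grid.

    `particles[t]` is the length-N array of particle locations approximating
    p(x_t | y_{1:t}); the recursion costs O(N^2) per time step and returns
    an estimate of the MAP state sequence.
    """
    T = len(particles)
    N = len(particles[0])
    delta = prior_loglik(particles[0]) + obs_loglik(y[0], particles[0])  # shape (N,)
    back = np.zeros((T, N), dtype=int)

    for t in range(1, T):
        # trans[i, j] = log f(x_t^(j) | x_{t-1}^(i)), built by broadcasting
        trans = trans_loglik(particles[t][None, :], particles[t - 1][:, None])
        scores = delta[:, None] + trans                      # (N, N): predecessor i -> current j
        back[t] = np.argmax(scores, axis=0)                  # best predecessor for each j
        delta = scores[back[t], np.arange(N)] + obs_loglik(y[t], particles[t])

    # Backtrack the best path through the grid
    path = np.zeros(T)
    j = int(np.argmax(delta))
    for t in range(T - 1, -1, -1):
        path[t] = particles[t][j]
        j = back[t, j] if t > 0 else j
    return path
```

For the fixed-lag variant, the same recursion would start at time t-L+1 with `prior_loglik` replaced by an estimate of the corresponding filtering/prediction density, as described on the next slide.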

- Maximization of p(x_{t-L+1:t} | y_{1:t})
- The algorithm proceeds exactly as before, but starting at time t-L+1 and replacing the initial state distribution with p(x_{t-L+1} | y_{1:t-L}).
- Computational complexity: O(N^2 (L+1))
- Memory requirements: O(N (L+1))

Examples
A non-linear time series
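The model equations are not reproduced in the transcript. The example in this literature is the familiar benchmark nonlinear time series; the coefficients below are the ones usually quoted and should be read as indicative rather than as a verbatim copy of the slide:

```latex
x_t = \frac{x_{t-1}}{2} + \frac{25\, x_{t-1}}{1 + x_{t-1}^2} + 8 \cos(1.2\, t) + v_t,
\qquad
y_t = \frac{x_t^2}{20} + w_t,
```

with v_t and w_t independent zero-mean Gaussian noise sequences.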

Figures: simulated state sequence; observations.

Figure: filtering distribution p(x_t | y_{1:t}) at time t = 14.

Figure: evolution of the filtering distribution p(x_t | y_{1:t}) over time t.

Figure: simulated sequence (solid), MMSE estimate (dotted), and MAP sequence estimate (dashed).

Comparisons
- Mean log-posterior values of the MAP estimate over 10 data realizations
- Sample mean log-posterior values and standard deviations over 25 simulations with the same data

- The Viterbi algorithm outperforms the standard method, and its robustness in terms of sample variability improves as the number of particles increases.
- Because of the degeneracy phenomenon inherent in the standard method, this improvement over the standard method grows as t increases.