Convergence of Sequential Monte Carlo Methods

Presentation transcript:

Convergence of Sequential Monte Carlo Methods. Dan Crisan, Arnaud Doucet.

Problem Statement. X: the signal process; Y: the observation process. The signal X evolves according to a state-transition equation, and the observations Y satisfy an observation equation relating them to the signal.
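For concreteness, a generic form of this setup (the functional notation here is my own illustration, not taken from the slides) is

X_t = f(X_{t-1}, V_t), \qquad Y_t = h(X_t, W_t),

where V_t and W_t are independent noise sequences, so that the signal X is a Markov chain with transition kernel Q(x_{t-1}, dx_t) and each observation Y_t is conditionally independent of the others given X_t, with likelihood g(y_t \mid x_t).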

Bayes’ recursion: a prediction step followed by an updating step.
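Writing \pi_{t \mid t-1} and \pi_{t \mid t} for the conditional distributions of X_t given the observations up to times t-1 and t respectively (my notation; a sketch of the standard recursion rather than the slides' exact formulas), the two steps are

\pi_{t \mid t-1}(dx_t) = \int Q(x_{t-1}, dx_t)\, \pi_{t-1 \mid t-1}(dx_{t-1}) \qquad \text{(prediction)},

\pi_{t \mid t}(dx_t) = \frac{g(y_t \mid x_t)\, \pi_{t \mid t-1}(dx_t)}{\int g(y_t \mid x)\, \pi_{t \mid t-1}(dx)} \qquad \text{(updating)}.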

A Sequential Monte Carlo Method. Ingredients: the empirical measure of the particle system, the transition kernel, and an importance distribution. The importance distribution is assumed absolutely continuous with respect to the transition kernel, with a strictly positive Radon-Nikodym derivative; the target is then also absolutely continuous with respect to the importance distribution, and the importance weights are given by the corresponding Radon-Nikodym derivative.
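In the standard construction (again my notation, offered as a sketch): if particles are proposed from an importance kernel \hat Q(x_{t-1}, dx_t) while the update involves g(y_t \mid x_t)\, Q(x_{t-1}, dx_t), the unnormalized importance weight attached to a proposed particle x_t is

w_t(x_{t-1}, x_t) \;\propto\; g(y_t \mid x_t)\, \frac{dQ(x_{t-1}, \cdot)}{d\hat Q(x_{t-1}, \cdot)}(x_t),

which is well defined precisely because of the absolute-continuity and positivity requirements above.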

Algorithm. Step 1: Sequential importance sampling. Sample each particle from the importance distribution, evaluate the normalized importance weights, and let the weighted particles define the new empirical measure.
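With N particles x_t^{(i)} and unnormalized weights w_t^{(i)} (my notation), the normalized importance weights and the resulting weighted empirical measure are

\tilde w_t^{(i)} = \frac{w_t^{(i)}}{\sum_{j=1}^{N} w_t^{(j)}}, \qquad \hat\pi_{t \mid t}^{N} = \sum_{i=1}^{N} \tilde w_t^{(i)}\, \delta_{x_t^{(i)}}.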

Step 2: Selection step. Multiply/discard particles with high/low importance weights to obtain N particles, and let the associated empirical measure be the selected approximation. Step 3: MCMC step. Sample each particle from K, where K is a Markov kernel whose invariant distribution is the current target, and let the resulting particles define the empirical measure at the end of the time step.
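Below is a minimal Python sketch of these three steps for a toy scalar linear-Gaussian model. Everything here is an illustrative assumption rather than the slides' specification: the model and its parameters, the bootstrap choice of importance distribution (propose from the transition kernel, so the weight reduces to the likelihood), multinomial resampling as the selection scheme, and a random-walk Metropolis kernel as the MCMC move.

# Minimal sketch of the three steps above (sequential importance sampling,
# selection, MCMC move) for a toy scalar linear-Gaussian model.
# The model, its parameters, and the kernel choices are illustrative
# assumptions, not the specification used in the slides.
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model: X_t = a*X_{t-1} + V_t,  Y_t = X_t + W_t.
a, sig_v, sig_w = 0.9, 1.0, 0.5

def transition_sample(x_prev):
    # Sample from the transition kernel; with the bootstrap choice below this
    # also plays the role of the importance distribution.
    return a * x_prev + sig_v * rng.standard_normal(x_prev.shape)

def likelihood(y, x):
    # g(y | x): Gaussian observation density.
    return np.exp(-0.5 * ((y - x) / sig_w) ** 2) / (np.sqrt(2.0 * np.pi) * sig_w)

def particle_filter_step(particles_prev, y, n_mcmc=1):
    n = len(particles_prev)

    # Step 1: sequential importance sampling. With the bootstrap choice
    # (propose from the transition kernel) the unnormalized weight is g(y | x).
    particles = transition_sample(particles_prev)
    weights = likelihood(y, particles)
    weights = weights / weights.sum()          # normalized importance weights

    # Step 2: selection -- multiply/discard particles according to the weights.
    idx = rng.choice(n, size=n, p=weights)     # multinomial resampling
    particles, ancestors = particles[idx], particles_prev[idx]

    # Step 3: MCMC move. Random-walk Metropolis whose invariant distribution is
    # p(x_t | x_{t-1}, y_t), applied to each particle given its resampled ancestor.
    def log_target(x):
        return (-0.5 * ((x - a * ancestors) / sig_v) ** 2
                - 0.5 * ((y - x) / sig_w) ** 2)

    for _ in range(n_mcmc):
        proposal = particles + 0.2 * rng.standard_normal(n)
        log_accept = log_target(proposal) - log_target(particles)
        accept = np.log(rng.uniform(size=n)) < log_accept
        particles = np.where(accept, proposal, particles)

    return particles

# Usage sketch: simulate data from the toy model and run the filter.
T, N = 50, 1000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sig_v * rng.standard_normal()
y_obs = x_true + sig_w * rng.standard_normal(T)

particles = rng.standard_normal(N)             # initial particle cloud
for t in range(T):
    particles = particle_filter_step(particles, y_obs[t])
    # particles now approximate the filtering distribution at time t;
    # e.g. particles.mean() estimates E[X_t | y_0, ..., y_t].

The MCMC move targets p(x_t | x_{t-1}, y_t) for each particle given its resampled ancestor; this is one standard way to obtain a kernel whose invariant distribution is the desired target while keeping the acceptance ratio easy to evaluate.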

Convergence Study. First, establish convergence to 0 of the average mean square error under quite general conditions. Then prove (almost sure) convergence of the particle approximation toward the true conditional distribution under more restrictive conditions.
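Written out for bounded test functions f (my notation, sketching the standard form these two statements take), the results are

\lim_{N \to \infty} \mathbb{E}\!\left[\left(\pi^{N}_{t \mid t}(f) - \pi_{t \mid t}(f)\right)^{2}\right] = 0,

and, under the more restrictive conditions,

\lim_{N \to \infty} \pi^{N}_{t \mid t}(f) = \pi_{t \mid t}(f) \quad \text{almost surely}.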

Bounds for mean square errors. Assumption 1.-A (importance distribution and weights). The importance distribution is assumed absolutely continuous with respect to the reference measure for all arguments, the importance weight is a bounded function of its argument, and the weight function used in the analysis is then defined accordingly.

There exists a constant bounding the importance weights from above; and the sampling kernel and importance weights satisfy continuity conditions (with associated constants) in the measure variable.

Assumption 2.-A (resampling/selection scheme).

The first assumption ensures that the importance function is chosen so that the corresponding importance weights are bounded above, and that the sampling kernel and importance weights depend “continuously” on the measure variable. The second assumption ensures that the selection scheme does not introduce too strong a “discrepancy”.
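For concreteness, these are the generic forms such conditions take in published SMC convergence analyses (my notation, not copied from the slides): boundedness of the weights means there is a constant C_t with

w_t(x_{t-1}, x_t) \le C_t \quad \text{for all } (x_{t-1}, x_t),

and a typical selection condition asks that the resampled empirical measure \bar\pi^{N}_{t} stay close, in mean square, to the weighted measure \hat\pi^{N}_{t} it is drawn from:

\mathbb{E}\!\left[\left(\bar\pi^{N}_{t}(f) - \hat\pi^{N}_{t}(f)\right)^{2} \,\middle|\, \hat\pi^{N}_{t}\right] \le \frac{\tilde C_t\, \|f\|_{\infty}^{2}}{N} \quad \text{for all bounded } f.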

Lemma 1 and Lemma 2. Let us assume that, for any bounded test function, the particle approximation entering this stage satisfies a mean square error bound of the stated form; then, after Step 1, a bound of the same form holds for any bounded test function.

Lemma 3 and Lemma 4. Let us assume again that, for any bounded test function, a mean square error bound of the stated form holds; then, after Step 2, and likewise after the following step, a bound of the same form holds for any bounded test function.

Theorem 1. For all times t, there exists a constant independent of the number of particles N such that, for any bounded test function, the mean square error bound sketched below holds.
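In the published Crisan-Doucet convergence results, the corresponding statement takes the form (my notation)

\mathbb{E}\!\left[\left(\pi^{N}_{t \mid t}(f) - \pi_{t \mid t}(f)\right)^{2}\right] \le \frac{c_t\, \|f\|_{\infty}^{2}}{N} \quad \text{for every bounded measurable } f,

with c_t independent of the number of particles N.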