1
SIS – Sequential Importance Sampling
Advanced Methods in Simulation 096320, Winter 2009
Presented by: Chen Bukay, Ella Pemov, Amit Dvash
2
Talk Layout
- SIS – overview and algorithm
- Random walk – SIS simulation
- Nonlinear filtering – overview & added value
- Nonlinear filtering – simulation
3
Importance Sampling – General Overview
Importance sampling is the most fundamental variance reduction technique; it can lead to a dramatic variance reduction, particularly when estimating rare-event probabilities.
Target – the expected performance $\ell = \mathbb{E}_f[H(X)] = \int H(x)\,f(x)\,dx$, where $H$ is the sample performance and $f$ is the probability density of $X$.
Likelihood ratio estimator – drawing $X_1,\dots,X_N$ from an importance sampling density $g$:
$\hat\ell = \frac{1}{N}\sum_{i=1}^{N} H(X_i)\,\frac{f(X_i)}{g(X_i)}$.
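A minimal numeric sketch of this estimator (the target tail probability $P(X>4)$ for $X\sim N(0,1)$ and the shifted proposal $g = N(4,1)$ are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.normal(4.0, 1.0, N)          # X_i ~ g = N(4, 1)
H = (x > 4.0).astype(float)          # sample performance H(X_i) = 1{X_i > 4}
# likelihood ratio W(x) = f(x)/g(x); the 1/sqrt(2*pi) constants cancel
W = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - 4.0) ** 2)
ell_hat = np.mean(H * W)             # close to P(X > 4) ~ 3.17e-5
print(ell_hat)
```

Sampling from $f$ directly would almost never produce $X > 4$; under $g$ about half the samples hit the event, and the weights correct the bias.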
4
SIS – Overview
Sequential importance sampling (SIS), also known as “dynamic importance sampling”, simply means importance sampling that is carried out in a sequential manner.
Why sequential? Sampling a multi-dimensional vector directly is problematic: there is dependency between the variables, and it is difficult to sample from the joint density $g(x)$ in one shot.
5
SIS – Overview
Assumptions:
- $X = (X_1,\dots,X_n)$ is decomposable.
- $g(x)$ can be presented as $g(x) = g_1(x_1)\,g_2(x_2 \mid x_1)\cdots g_n(x_n \mid x_1,\dots,x_{n-1})$.
- It is easy to sample from $g(x)$ sequentially.
6
SIS – Overview (cont’)
It is easy to generate $X$ sequentially from $g$: generate $X_1$ from $g_1(x_1)$, then $X_2$ from $g_2(x_2 \mid x_1)$, and so on up to $X_n$ from $g_n(x_n \mid x_1,\dots,x_{n-1})$.
Due to the product rule of probability we can write $f(x) = f(x_1)\,f(x_2 \mid x_1)\cdots f(x_n \mid x_1,\dots,x_{n-1})$.
The likelihood ratio is therefore
$W(x) = \dfrac{f(x)}{g(x)} = \prod_{t=1}^{n} \dfrac{f(x_t \mid x_{1:t-1})}{g_t(x_t \mid x_{1:t-1})}$.
7
SIS – Overview (cont’)
The likelihood ratio can be built recursively: the likelihood ratio up to time $t$ equals the likelihood ratio up to time $t-1$ times a one-step factor,
$w_t = w_{t-1}\,\dfrac{f(x_t \mid x_{1:t-1})}{g_t(x_t \mid x_{1:t-1})}, \qquad w_0 = 1$.
8
SIS – Overview
In order to update the likelihood ratio recursively, we need to know how to calculate the conditional density $f(x_t \mid x_{1:t-1}) = f(x_{1:t}) / f(x_{1:t-1})$. We know $f(x)$, but calculating the marginal $f(x_{1:t})$ requires integrating over $x_{t+1},\dots,x_n$ – a hard integral.
One option to solve this: use auxiliary pdfs $f_1(x_1), f_2(x_{1:2}), \dots, f_n(x_{1:n})$ that can be evaluated easily, where each $f_t(x_{1:t})$ is a good approximation of the considered hard integral and $f_n(x) = f(x)$. The update is then easy to calculate:
$w_t = u_t\,w_{t-1}$, where $u_t = \dfrac{f_t(x_{1:t})}{f_{t-1}(x_{1:t-1})\,g_t(x_t \mid x_{1:t-1})}$.
9
SIS – Algorithm
SIS algorithm (sequential):
1. For each $t = 1,\dots,n$, sample $X_t$ from $g_t(x_t \mid x_{1:t-1})$.
2. Compute $w_t = u_t\,w_{t-1}$, where $w_0 = 1$ and $u_t = \dfrac{f_t(x_{1:t})}{f_{t-1}(x_{1:t-1})\,g_t(x_t \mid x_{1:t-1})}$.
3. Repeat N times and estimate $\ell$ via $\hat\ell = \frac{1}{N}\sum_{i=1}^{N} H(X^{(i)})\,w_n^{(i)}$.
SIS algorithm (dynamic):
1. At time $t$, the $t$-th component arrives.
2. Sample $x_t$ N times according to $g_t(x_t \mid x_{1:t-1})$.
3. Calculate $w_t^{(i)} = u_t^{(i)}\,w_{t-1}^{(i)}$ for each of the N paths.
4. Estimate $\ell$ from the samples existing so far $(1,\dots,t)$, for $t = 1,\dots,n$.
The dynamic variant lends itself to parallel computing.
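A sketch of the sequential variant under toy densities of my choosing (a Gaussian AR(1) target $f$ with unit noise and a wider Gaussian proposal $g$; both are illustrative, not from the slides). Here the conditionals of $f$ are available directly, so $u_t$ reduces to the one-step ratio $f(x_t \mid x_{t-1})/g_t(x_t \mid x_{t-1})$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, n = 1000, 10                      # N paths, n components per path
x = np.zeros((N, n))
w = np.ones(N)                       # w_0 = 1
for t in range(n):
    mean = 0.8 * x[:, t - 1] if t > 0 else np.zeros(N)
    x[:, t] = rng.normal(mean, 2.0)  # sample X_t from g_t (scale 2)
    # recursive weight update: w_t = w_{t-1} * f_t / g_t (f has scale 1)
    w *= norm.pdf(x[:, t], mean, 1.0) / norm.pdf(x[:, t], mean, 2.0)
# estimate ell = E_f[H(X)] for an example performance H(x) = max_t x_t
ell_hat = np.mean(np.max(x, axis=1) * w)
print(ell_hat)
```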
10
SIS Algorithm – Sequential
[Diagram: each of the N paths (1st sample, 2nd sample, …, N-th sample) is generated in full and its $w_n$ is calculated; $\ell$ is then estimated by computing the sample average of $H(X^{(i)})\,w_n^{(i)}$.]
11
SIS Algorithm – Dynamic
[Diagram: at time $t = 1$, the first component of each of the N paths is sampled and $w_1^{(i)}$ is calculated; at time $t = 2$, each path is extended and the weights recalculated; …; at time $t = n$, $\ell$ is estimated by computing the average with the existing samples.]
12
Random Walk
Problem statement (reminder):
- Go forward (+1) with probability $p$; go backward (−1) with probability $q$.
- $p < q$, so the walk has a negative drift.
Goal – estimating the rare-event probability of reaching state K (a large number) before 0 (zero), starting at state k.
[Diagram: states 0, 1, 2, …, K on a line, with the walk starting at k.]
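A sketch of the standard SIS change of measure for this walk: swap $p$ and $q$ in the proposal so the walk drifts toward K and the rare event becomes typical; each $+1$ step then contributes $p/q$ to the likelihood ratio and each $-1$ step contributes $q/p$. The parameter values below are illustrative, not the ones used in the course simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 0.3, 0.7                      # original walk: drift toward 0
k, K = 5, 20                         # start state and rare target state
N = 10_000
acc = 0.0
for _ in range(N):
    s, logw = k, 0.0
    while 0 < s < K:
        if rng.random() < q:         # proposal steps +1 w.p. q (swapped)
            s += 1
            logw += np.log(p / q)
        else:                        # proposal steps -1 w.p. p
            s -= 1
            logw += np.log(q / p)
    if s == K:                       # indicator of the rare event
        acc += np.exp(logw)
est = acc / N
# exact gambler's-ruin value for comparison
exact = ((q / p) ** k - 1) / ((q / p) ** K - 1)
print(est, exact)
```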
13
Random Walk – Simulation Result
17
SIS Application: Nonlinear Filtering
18
State Space Models
[Diagram: hidden Markov model structure – a chain of hidden states $x_1 \to x_2 \to x_3 \to \dots \to x_T$, where each state $x_t$ emits an observation $y_t$.]
19
Dynamic Model
State equation: $x_{t+1} = f(x_t, w_t)$.
Measurement (observation) equation: $y_t = h(x_t, v_t)$.
Together these form a hidden Markov model (HMM).
20
State Space Models (cont’)
The process noise $w_t$ has a known pdf $P_w$, and the measurement noise $v_t$ has a known pdf $P_v$.
Markov property: $p(x_t \mid x_{0:t-1}) = p(x_t \mid x_{t-1})$.
21
Linear Models – Kalman Filter
- Linear dynamic model: $x_{t+1} = A x_t + w_t$.
- Linear measurement equation: $y_t = C x_t + v_t$.
- $v$, $w$, $x_0$ – Gaussian and independent.
Under these assumptions the Kalman filter is the optimal (MSE) estimator.
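For reference, a minimal sketch of one Kalman predict/update step for this linear-Gaussian model; the matrices $A$, $C$, $Q$, $R$ (process and measurement noise covariances) are placeholders to be supplied:

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """One predict/update cycle of the Kalman filter."""
    # predict: propagate mean and covariance through the linear dynamics
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # update: correct with the measurement y
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    return m_new, P_new
```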
22
General Models
- Linear/nonlinear state dynamics.
- Linear/nonlinear measurement equations.
- $v$, $w$, $x_0$ – independent, not necessarily Gaussian.
23
Problem Description
[Diagram: three lines of position at bearings $\theta_a$, $\theta_b$, $\theta_c$.]
- LOP – line of position.
- Observers – known exact locations $(x_a, y_a)$, $(x_b, y_b)$, $(x_c, y_c)$.
- Target – unknown location $(x_e, y_e)$.
24
Bearing Only Measurements
[Diagram: the bearings $\theta_a$, $\theta_b$, $\theta_c$ from the observers at $(x_a, y_a)$, $(x_b, y_b)$, $(x_c, y_c)$ to the target at $(x_e, y_e)$.]
25
Bearing Only Measurements
Each observer measures only the noisy bearing angle to the target, e.g. $\theta_i = \arctan\big((y_e - y_i)/(x_e - x_i)\big) + v_i$ for $i \in \{a, b, c\}$, where $v_i$ is the measurement noise.
26
Non-Linear Filtering Motivation
- Nonlinear dynamic/measurement equations.
- Noise distributions that are not Gaussian.
- The Kalman filter is then no longer the optimal (MSE) estimator.
- EKF – linearization of the state-space equations: a suboptimal estimator whose convergence is not guaranteed.
27
The Bootstrap Filter
- Represent the pdf as a set of random samples (and not as a function).
- The bootstrap filter is a recursive algorithm for propagating and updating these samples.
- Samples are naturally concentrated in regions of high probability.
“Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation”, N.J. Gordon, D.J. Salmond & A.F.M. Smith.
28
Motivation
Having the posterior $P(x_k \mid y_{1:k})$ lets us compute any estimator of interest, for example the MSE-optimal estimate $\mathbb{E}[x_k \mid y_{1:k}]$ or the ML estimate $\arg\max_{x_k} P(x_k \mid y_{1:k})$.
29
The Bootstrap Filter
Recursive calculation of $P(x_k \mid y_{1:k})$. Assume we know $P(x_{k-1} \mid y_{1:k-1})$; then:
- Prediction: $P(x_k \mid y_{1:k-1}) = \int P(x_k \mid x_{k-1})\,P(x_{k-1} \mid y_{1:k-1})\,dx_{k-1}$.
- Update, using Bayes' rule and the fact that $y_k \mid x_k$ is independent of $y_{1:k-1}$: $P(x_k \mid y_{1:k}) \propto P(y_k \mid x_k)\,P(x_k \mid y_{1:k-1})$.
30
The Importance Sampling
31
The Bootstrap Filter – Algorithm
1. Initialization: $k = 0$; generate $x_0^i \sim P_{x_0}$, $i = 1,\dots,N$.
2. Measurement update: given $y_k$, calculate the likelihood of each current sample and normalize, $q_k^i = P(y_k \mid x_k^{*i}) \big/ \sum_{j=1}^{N} P(y_k \mid x_k^{*j})$.
32
The Bootstrap Filter – Algorithm (cont’)
3. Re-sampling: sample N samples from $\{x_k^{*i}\}_{i=1:N}$ with replacement, where the probability of choosing the $i$-th particle at stage $k$ is $q_k^i$.
4. Prediction: pass the new samples through the system equation.
5. Set $k = k + 1$ and return to step 2.
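Putting steps 1–5 together, a minimal sketch of the loop; `x0_sampler`, `sys_eq`, and `likelihood` are hypothetical user-supplied model functions (for the bearings-only problem, `sys_eq` would implement the target motion model and `likelihood` a density on the bearing error):

```python
import numpy as np

def bootstrap_filter(y_seq, x0_sampler, sys_eq, likelihood, N, rng):
    x = x0_sampler(N, rng)                    # 1. initialization: x_0^i ~ P_x0
    estimates = []
    for y in y_seq:
        q = likelihood(y, x)                  # 2. measurement update ...
        q = q / q.sum()                       #    ... normalized weights q_k^i
        idx = rng.choice(len(x), size=len(x), p=q)
        x = x[idx]                            # 3. resampling with replacement
        estimates.append(x.mean(axis=0))      # MMSE estimate from the particles
        x = sys_eq(x, rng)                    # 4. prediction via the system equation
    return np.array(estimates)                # 5. loop continues with k = k + 1
```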
33
Simulation parameters:
- $V_x = -0.1$ km/sec, $V_y = 0.01$ km/sec
- $dt = 300$ sec
- Measurement variance ~ 1°
- Positions: (141, 141) km and (100, 120) km
[Slides 35–50: sequences of simulation result figures, frames 1–8 and 1–7.]
56
Simulations
61
[Figure: tracking results over 150 time steps.]
70
Backup
71
Markov Chain
Markov property – given the present state, future states are independent of the past states. The present state fully captures all the information that could influence the future evolution of the process. The changes of state are called transitions, and the probabilities associated with the various state changes are called transition probabilities.
[Diagram: a two-state chain (states 1 and 2) with transition matrix $P = \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix}$.]
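A small sketch of simulating this chain (the matrix values are as reconstructed from the slide's diagram):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],     # transition probabilities out of state 1
              [0.5, 0.5]])    # transition probabilities out of state 2
state, path = 0, [0]          # states indexed 0 and 1 here
for _ in range(10):
    # the next state depends only on the present one (Markov property)
    state = rng.choice(2, p=P[state])
    path.append(state)
print(path)
```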
72
F(X) - calculations
73
Where to put F(X)
[Derivation steps labeled on the slide: Markov property, Bayes' rule, normalization constant.]
74
Where to put F(X) (2)