1
HCI / CprE / ComS 575: Computational Perception
Instructor: Alexander Stoytchev
2
Particle Filters HCI/ComS 575X: Computational Perception
Iowa State University Copyright © Alexander Stoytchev
3
Sebastian Thrun, Wolfram Burgard and Dieter Fox (2005).
Probabilistic Robotics. MIT Press.
4
F. Dellaert, D. Fox, W. Burgard, and S. Thrun (1999).
"Monte Carlo Localization for Mobile Robots", IEEE International Conference on Robotics and Automation (ICRA99), May, 1999.
5
A Particle Filter Tutorial for Mobile Robot Localization.
Ioannis Rekleitis (2004). A Particle Filter Tutorial for Mobile Robot Localization. Technical Report TR-CIM-04-02, Centre for Intelligent Machines, McGill University, Montreal, Quebec, Canada.
6
Wednesday
7
Next Week Preliminary Project Presentations
8
Particle Filters Represent belief by random samples
Estimation of non-Gaussian, nonlinear processes. Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter, particle filter. Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa, 96]. Computer vision: [Isard and Blake, 96, 98]. Dynamic Bayesian networks: [Kanazawa et al., 95]
9
Example
10
Using Ceiling Maps for Localization
11
Vision-based Localization
[Figures: expected image h(x), measurement z, likelihood P(z|x)]
12
Under a Light Measurement z: P(z|x):
13
Next to a Light Measurement z: P(z|x):
14
Elsewhere Measurement z: P(z|x):
15
Global Localization Using Vision
16
Sample-based Localization (sonar)
18
Example
19
Importance Sampling with Resampling: Landmark Detection Example
20
Distributions
21
Wanted: samples distributed according to p(x| z1, z2, z3)
22
This is Easy! We can draw samples from p(x|zl) by adding noise to the detection parameters.
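The idea on this slide can be sketched directly: perturb the measured range and bearing of a landmark detection with Gaussian noise, and back-project each noisy detection to a candidate robot position. The noise levels and landmark geometry below are illustrative, not from the slides.

```python
import math
import random

random.seed(0)

def sample_pose_from_detection(landmark_x, landmark_y,
                               measured_range, measured_bearing,
                               range_noise=0.1, bearing_noise=0.05, n=1000):
    """Draw candidate robot positions consistent with a landmark detection
    by adding Gaussian noise to the detection parameters (range, bearing)."""
    samples = []
    for _ in range(n):
        r = measured_range + random.gauss(0.0, range_noise)
        b = measured_bearing + random.gauss(0.0, bearing_noise)
        # Back-project: the robot sits at range r, bearing b from the landmark.
        samples.append((landmark_x - r * math.cos(b),
                        landmark_y - r * math.sin(b)))
    return samples

# Landmark at (5, 0), detected 2 units straight ahead.
samples = sample_pose_from_detection(5.0, 0.0, 2.0, 0.0)
```

Each sample is one draw from (an approximation of) p(x|zl); the cloud of samples spreads according to the detection noise.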
23
Importance Sampling with Resampling
[Figures: weighted samples; after resampling]
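The resampling step that turns weighted samples into equally weighted ones can be sketched as multinomial resampling: draw with replacement, with probability proportional to weight. The particle values and weights here are illustrative.

```python
import random

random.seed(1)

def resample(particles, weights, m=None):
    """Multinomial resampling: draw m particles with replacement, with
    probability proportional to weight; the resampled set carries
    equal weights."""
    m = m if m is not None else len(particles)
    drawn = random.choices(particles, weights=weights, k=m)
    return drawn, [1.0 / m] * m

# Particle 'a' carries most of the weight, so it dominates after resampling.
parts, new_w = resample(['a', 'b', 'c', 'd'], [0.7, 0.1, 0.1, 0.1], m=1000)
```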
24
Quick review of Kalman Filters
25
Conditional density of position based on measured value of z1
[Maybeck (1979)]
26
Conditional density of position based on measured value of z1
[Figure labels: uncertainty, measured position] [Maybeck (1979)]
27
Conditional density of position based on measurement of z2 alone
[Maybeck (1979)]
28
Conditional density of position based on measurement of z2 alone
[Figure labels: uncertainty 2, measured position 2] [Maybeck (1979)]
29
Conditional density of position based on data z1 and z2
[Figure labels: uncertainty estimate, position estimate] [Maybeck (1979)]
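The combination of the two measurements shown here is the product of two Gaussian densities, which is again Gaussian with a smaller variance than either input. A minimal sketch with illustrative numbers:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two independent Gaussian measurements of the same position.
    The result is the (renormalized) product of the two densities:
    precisions add, and the mean is a precision-weighted average."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mean = var * (z1 / var1 + z2 / var2)
    return mean, var

# A coarse measurement (z1 = 10, variance 4) and a sharp one (z2 = 12, variance 1).
mean, var = fuse(10.0, 4.0, 12.0, 1.0)
```

Note that the fused estimate lies closer to the sharper measurement and is more certain than either one alone.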
30
Propagation of the conditional density
[Maybeck (1979)]
31
Propagation of the conditional density
[Figure labels: movement vector, expected position just prior to taking measurement 3] [Maybeck (1979)]
33
Propagation of the conditional density
[Figure labels: uncertainty 3 σx(t3), measured position 3 z3]
34
Updating the conditional density after the third measurement
[Figure labels: position uncertainty σx(t3), measurement z3, position estimate x(t3)]
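The propagate-then-update cycle these figures show can be sketched in one dimension. Movement shifts the mean and inflates the variance (the density flattens); the measurement update then pulls the estimate toward z3 and shrinks the variance. All numbers below are illustrative, not from the slides.

```python
def predict(mean, var, u, motion_var):
    """Motion update: shift the estimate by u and inflate the uncertainty."""
    return mean + u, var + motion_var

def update(mean, var, z, meas_var):
    """Measurement update: standard 1-D Kalman correction toward z."""
    k = var / (var + meas_var)            # Kalman gain
    return mean + k * (z - mean), (1.0 - k) * var

# Move 5 units to the right, then take measurement z3 = 17.
m, v = predict(11.6, 0.8, u=5.0, motion_var=0.5)
m3, v3 = update(m, v, z=17.0, meas_var=1.0)
```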
36
Some Questions What if we don’t know the start position of the robot?
What if somebody moves the robot without the robot’s knowledge?
37
Robot Odometry Errors
38
Raw range data, position indexed by odometry
[Thrun, Burgard & Fox (2005)]
39
Resulting Occupancy Grid Map
[Thrun, Burgard & Fox (2005)]
40
Basic Idea Behind Particle Filters
[Figure: sample-based approximation of a density along x]
41
In 2D it looks like this
42
Robot Pose
43
Odometry Motion Model
44
Sampling From the Odometry Model
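Sampling from the odometry model can be sketched in the style of Thrun, Burgard & Fox (2005): the reported motion is decomposed into an initial rotation, a translation, and a final rotation, and each component is perturbed with noise that scales with the motion size. The a1..a4 noise parameters below are illustrative values, not from the slides.

```python
import math
import random

random.seed(2)

def sample_motion_model_odometry(pose, odom, a1=0.01, a2=0.01, a3=0.01, a4=0.01):
    """Sample a successor pose given odometry odom = (rot1, trans, rot2)."""
    x, y, theta = pose
    rot1, trans, rot2 = odom
    # Perturb each motion component; larger motions get larger noise.
    r1 = rot1 - random.gauss(0.0, math.sqrt(a1 * rot1**2 + a2 * trans**2))
    t = trans - random.gauss(0.0, math.sqrt(a3 * trans**2 + a4 * (rot1**2 + rot2**2)))
    r2 = rot2 - random.gauss(0.0, math.sqrt(a1 * rot2**2 + a2 * trans**2))
    return (x + t * math.cos(theta + r1),
            y + t * math.sin(theta + r1),
            theta + r1 + r2)

# Drive 1 m straight ahead from the origin; each call returns one sample,
# so repeated calls trace out the banana-shaped pose cloud from the slides.
samples = [sample_motion_model_odometry((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
           for _ in range(500)]
```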
45
Motion Model
46
Motion Model
47
Velocity model for different noise parameters
48
Sampling from the velocity model
49
In Class Demo of Particle Filters
50
Example [Thrun, Burgard & Fox (2005)]
51
Initially we don’t know the location of the robot, so we have particles everywhere
52
Next, the robot senses that it is near a door
53
Since there are 3 identical doors, the robot can be next to any one of them
54
Therefore, we inflate the balls (particles) that are next to doors and shrink all the others
56
Before we continue, we have to make all the balls (particles) equal in size again. We need to resample.
58
Resampling Rules [figures: each heavily weighted particle is replaced by several equal-weight copies]
59
Resampling Given: Set S of weighted samples.
Wanted: a random sample where the probability of drawing xi is given by wi. Typically done n times with replacement to generate the new sample set S’. [From Thrun’s book “Probabilistic Robotics”]
60
Roulette wheel Resampling
[Figure: wheel segments w1 … wn] Roulette wheel: binary search per draw, O(n log n). Stochastic universal sampling (systematic resampling): linear time complexity, easy to implement, low variance. [From Thrun’s book “Probabilistic Robotics”]
61
Also called stochastic universal sampling
Resampling Algorithm systematic_resampling(S, n): generate the CDF of the weights; initialize the first threshold with a single random draw; for each of the n samples, skip forward through the CDF until the next threshold is reached, insert the corresponding particle, and increment the threshold by 1/n; return S’. [From Thrun’s book “Probabilistic Robotics”]
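The algorithm above can be sketched in a few lines. One random draw places n equally spaced thresholds on the cumulative weight distribution, so the whole pass runs in O(n); the example weights are illustrative.

```python
import random

random.seed(3)

def systematic_resampling(particles, weights):
    """Systematic (stochastic universal / low-variance) resampling."""
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0.0, step)       # single random offset
    new, c, i = [], weights[0], 0       # c = running cumulative weight
    for _ in range(n):
        while u > c and i < n - 1:      # skip until next threshold reached
            i += 1
            c += weights[i]
        new.append(particles[i])        # insert
        u += step                       # increment threshold
    return new

out = systematic_resampling(['a', 'b', 'c'], [0.5, 0.25, 0.25])
```

Because a single random number determines all n thresholds, the variance of this scheme is lower than drawing n independent roulette-wheel spins.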
62
Next, the robot moves to the right
63
… thus, we have to shift all balls (particles) to the right
65
… and add some position noise
67
Next, the robot senses that it is next to one of the three doors
69
Now we have to resample again
70
The robot moves again
71
… so we must move all balls (particles) to the right again
72
… and add some position noise
73
And so on …
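The door sequence above (spread particles, grow/shrink near doors, resample, shift, add noise, repeat) can be sketched end to end. Everything numeric here (door positions, the binary door-sensor model, noise levels) is illustrative, not from the slides.

```python
import random

random.seed(4)

DOORS = [2.0, 5.0, 9.0]        # three identical doors (positions made up)

def p_sense(x, saw_door, width=0.5, hit=0.9, miss=0.1):
    """Likelihood p(z|x) for a binary door detector (illustrative model)."""
    near = any(abs(x - d) < width for d in DOORS)
    return (hit if near else miss) if saw_door else (miss if near else hit)

# 1. Global localization: we don't know the pose, so particles everywhere.
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]

true_path = [2.0, 3.0, 4.0, 5.0]          # robot drives right 1 unit per step
for k, true_x in enumerate(true_path):
    z = any(abs(true_x - d) < 0.5 for d in DOORS)   # simulated sensor reading
    # 2. Grow/shrink: weight each particle by p(z|x).
    weights = [p_sense(x, z) for x in particles]
    # 3. Resample back to equal-size particles.
    particles = random.choices(particles, weights=weights, k=len(particles))
    if k < len(true_path) - 1:
        # 4. Shift all particles to the right and add some position noise.
        particles = [x + 1.0 + random.gauss(0.0, 0.1) for x in particles]

# Fraction of particles near the true final position (x = 5).
near_true = sum(1 for x in particles if abs(x - 5.0) < 1.0) / len(particles)
```

After the second door observation, the cluster that started at the first door is the only one still consistent with the whole measurement sequence, so most particles end up near the true pose.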
74
Now Let’s Compare that With Some of the Other Methods
75
Grid Localization [Thrun, Burgard & Fox (2005)]
76
Grid Localization [Thrun, Burgard & Fox (2005)]
77
Grid Localization [Thrun, Burgard & Fox (2005)]
78
Grid Localization [Thrun, Burgard & Fox (2005)]
79
Grid Localization [Thrun, Burgard & Fox (2005)]
80
Markov Localization [Thrun, Burgard & Fox (2005)]
81
Kalman Filter [Thrun, Burgard & Fox (2005)]
82
Particle Filter [Thrun, Burgard & Fox (2005)]
84
Importance Sampling Ideally, the particles would be samples drawn from the posterior p(x|z). In practice, we usually cannot obtain p(x|z) in closed form, and even when we can, it is usually difficult to draw samples from it directly. Instead we use importance sampling: particles are drawn from an importance distribution and weighted by importance weights.
85
Monte Carlo Samples (Particles)
The posterior distribution p(x|z) may be difficult or impossible to compute in closed form. An alternative is to represent p(x|z) using Monte Carlo samples (particles): each particle has a value and a weight.
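A weighted particle set supports Monte Carlo estimates of expectations under p(x|z), e.g. the posterior mean. The values and weights below are illustrative.

```python
# A particle set approximating p(x|z): each particle is a value with a weight.
values = [1.0, 2.0, 3.0, 4.0]
weights = [0.1, 0.2, 0.3, 0.4]

def weighted_mean(values, weights):
    """Monte Carlo estimate of E[x] under p(x|z) from weighted particles."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

mean = weighted_mean(values, weights)
```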
86
In 2D it looks like this
87
Objective: Find p(xk|zk,…,z1)
The objective of the particle filter is to compute the conditional distribution p(xk|zk,…,z1). To do this analytically, we would use the Chapman-Kolmogorov equation and Bayes’ theorem along with the Markov model assumptions. The particle filter gives us an approximate computational technique.
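The analytic recursion this slide alludes to can be written out. Under the Markov assumptions, Bayes’ theorem gives the measurement update and the Chapman-Kolmogorov equation gives the prediction step:

```latex
% Measurement update (Bayes' theorem):
p(x_k \mid z_k,\dots,z_1) \;\propto\; p(z_k \mid x_k)\, p(x_k \mid z_{k-1},\dots,z_1)
% Prediction (Chapman-Kolmogorov):
p(x_k \mid z_{k-1},\dots,z_1) \;=\; \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid z_{k-1},\dots,z_1)\, dx_{k-1}
```

The particle filter approximates exactly this recursion: the motion sampling step approximates the integral, the weighting step approximates the multiplication by p(zk|xk), and resampling renormalizes.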
88
Initial State Distribution
[Figures: samples x0 drawn from the initial state distribution]
89
State Update: each particle x0 is propagated through the state equation x1 = f0(x0, w0)
90
Compute Weights: each particle x1 is weighted by the likelihood p(z1|x1) [figures: before and after weighting]
91
Resample [figures: weighted particles x1 before and after resampling]
92
THE END