Published by Eric Clarke. Modified over 9 years ago.
1
Visibility Graph
2
Voronoi Diagram Control is easy: stay equidistant from the closest obstacles.
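The "stay equidistant" rule can be sketched as a one-line controller (illustrative Python; the point obstacles and the error convention are assumptions, not from the slides):

```python
import math

def voronoi_steer(robot, obstacles):
    """Voronoi-edge following sketch: compare distances to the two nearest
    obstacles; a nonzero error means the robot has drifted off the edge and
    should steer to rebalance the two distances. (Illustrative only.)"""
    dists = sorted((math.dist(robot, o), o) for o in obstacles)
    (d1, _), (d2, _) = dists[0], dists[1]
    return d2 - d1   # zero when equidistant from the two closest obstacles

# Midway between two walls the error is zero: already on the Voronoi edge.
error = voronoi_steer((0.0, 0.0), [(-2.0, 0.0), (2.0, 0.0), (10.0, 10.0)])
```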
3
Exact Cell Decomposition Plan over this graph
5
Localization Two types of approaches: – Iconic: use raw sensor data directly; match current sensor readings with what was observed in the past. – Feature-based: extract features of the environment, such as corners and doorways; match currently observed features against those observed before.
6
Continuous Localization and Mapping Time Initial Map Global Map
7
Continuous Localization and Mapping Time Sensor Data Local Map Global Map Matching Registration
8
Continuous Localization and Mapping Time Local Map Global Map Matching Registration
9
Continuous Localization and Mapping: sensor data → construct local map → match and score against global map → best pose → register.
10
Continuous Localization and Mapping: sensor data → construct local map; encoder data → pose estimation → k possible poses; match and score against global map → best pose → register.
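A minimal sketch of the match-and-score step, assuming toy occupancy maps stored as dicts of occupied cells (all names and the scoring rule are illustrative, not the slides' implementation):

```python
import itertools

def match_score(global_map, local_map, pose):
    """Count agreeing occupied cells when the local map is placed at `pose`
    (a (row, col) offset; rotation omitted for brevity)."""
    r0, c0 = pose
    return sum(1 for (r, c), v in local_map.items()
               if global_map.get((r + r0, c + c0)) == v)

def localize(global_map, local_map, candidate_poses):
    """Match-and-score: try each candidate pose (e.g. from odometry plus
    discretized error) and keep the best-scoring one."""
    return max(candidate_poses,
               key=lambda p: match_score(global_map, local_map, p))

# Toy maps: the local map is the same L-shape, origin-relative.
global_map = {(2, 3): 1, (2, 4): 1, (3, 3): 1}
local_map = {(0, 0): 1, (0, 1): 1, (1, 0): 1}
candidates = list(itertools.product(range(5), range(5)))
best = localize(global_map, local_map, candidates)   # → (2, 3)
```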
11
Matching Global Map, Local Map, probabilities: Where am I?
12
Matching Global Map vs. Local Map (obstacle readings): Where am I on the global map? Examine different possible robot positions.
13
This sounds hard. Do we need to localize? https://www.youtube.com/watch?v=6KRjuuEVEZs
14
Matching and Registration Collect sensor readings and create a local map. Estimate poses that the robot is likely to be in, given the distance traveled since the last map update: – In theory k is infinite. – Discretize the space of possible positions (e.g., consider errors in increments of 5°). – Try to model the likely behavior of your robot, and account for systematic errors (e.g., the robot tends to drift to one side).
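The pose-discretization step above might be sketched as follows (the error bounds, the 5° step, and the drift correction are placeholder values, not from the slides):

```python
def candidate_poses(odom_pose, heading_err_deg=15, step_deg=5, drift_deg=2):
    """Discretize the space of possible poses around the odometry estimate:
    try heading errors in `step_deg` increments (the slides' 5°), shifted
    by a systematic drift term. Parameter values are illustrative."""
    x, y, theta = odom_pose
    errors = range(-heading_err_deg, heading_err_deg + 1, step_deg)
    return [(x, y, theta + drift_deg + e) for e in errors]

poses = candidate_poses((1.0, 2.0, 90.0))
# 7 candidate poses with headings 77°, 82°, ..., 107°
```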
15
Matching and Registration What if you were tracking multiple possible poses? How would you combine this information with the previous estimate of global position plus odometry?
16
Matching and Registration
18
Representations line-based map (~100 lines)
19
Representations One location vs. location distribution Grid-based map (3000 cells) Topological map (50 features, 18 nodes)
20
Feature-Based Localization Extract features such as doorways, corners, and intersections. Either: – use continuous localization to try to match features at each update, or – use topological information to create a graph of the environment.
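The topological-graph option can be sketched as a plain adjacency list with breadth-first search (the doorway and hallway names are hypothetical):

```python
from collections import deque

# Hypothetical topological map: doorway nodes joined by hallway nodes.
topo_map = {
    "door1": ["hall_1_2"],
    "hall_1_2": ["door1", "door2"],
    "door2": ["hall_1_2", "hall_2_3"],
    "hall_2_3": ["door2", "door3"],
    "door3": ["hall_2_3"],
}

def plan(topo, start, goal):
    """Breadth-first search over the topological graph: planning is a walk
    along feature nodes rather than metric coordinates."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

route = plan(topo_map, "door1", "door3")
```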
21
Topological Map of Office Building The robot has identified 10 doorways, each marked by a single number. Hallways between doorways are labeled by gateway-gateway pairings.
22
Topological Map of Office Building What if the robot is told it is at position A but it is actually at B? How could it correct that information?
23
Localization Problem(s) Position Tracking Global Localization Kidnapped Robot Problem Multi-Robot Localization
24
General approach: A: action, S: pose, O: observation. The pose at time t depends on the previous pose, the action, and the current observation.
25
The pose at time t determines the observation at time t. If we know the pose, we can say what the observation is. But this is backwards… Hello, Bayes!
26
Quiz! If events a and b are independent, p(a, b) = ? If events a and b are not independent, p(a, b) = ? p(c|d) = ? (Mattel, 1992)
27
Quiz! If events a and b are independent, p(a, b) = p(a) × p(b). If events a and b are not independent, p(a, b) = p(a) × p(b|a) = p(b) × p(a|b). p(c|d) = p(c, d) / p(d) = p(d|c) p(c) / p(d).
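A quick numeric check of these identities on a small joint distribution (the numbers are made up):

```python
# p(a, b) over binary a, b; the four entries sum to 1.
p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

p_a1 = p[(1, 0)] + p[(1, 1)]          # marginal p(a=1) = 0.7
p_b1 = p[(0, 1)] + p[(1, 1)]          # marginal p(b=1) = 0.6
p_b1_given_a1 = p[(1, 1)] / p_a1      # p(b=1 | a=1)
p_a1_given_b1 = p[(1, 1)] / p_b1      # p(a=1 | b=1)

# Product rule: p(a, b) = p(a) p(b|a) = p(b) p(a|b)
assert abs(p[(1, 1)] - p_a1 * p_b1_given_a1) < 1e-12
assert abs(p[(1, 1)] - p_b1 * p_a1_given_b1) < 1e-12
# These a, b are NOT independent: p(a)p(b) = 0.42 ≠ 0.4 = p(a, b)
```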
28
Bayes Filtering We want a way of representing uncertainty: a probability distribution. – Could be discrete or continuous. – Probability of each pose in the set of all possible poses. Key terms: belief, prior, posterior.
29
Models of Belief
30
1. Uniform prior 2. Observation: see pillar 3. Action: move right 4. Observation: see pillar
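These four steps are the classic 1D histogram-filter example. A minimal sketch, assuming a cyclic five-cell world and made-up sensor likelihoods (0.6 for a correct reading, 0.2 otherwise):

```python
def normalize(bel):
    s = sum(bel)
    return [b / s for b in bel]

def sense(bel, world, z, p_hit=0.6, p_miss=0.2):
    """Weight each cell's belief by the likelihood of observing z there."""
    return normalize([b * (p_hit if cell == z else p_miss)
                      for b, cell in zip(bel, world)])

def move_right(bel):
    """Shift the belief one cell right (cyclic world; exact motion for
    brevity — a real filter would convolve with a motion-noise kernel)."""
    return bel[-1:] + bel[:-1]

world = ["pillar", "pillar", "wall", "wall", "wall"]
bel = [0.2] * 5                       # 1. uniform prior
bel = sense(bel, world, "pillar")     # 2. observation: see pillar
bel = move_right(bel)                 # 3. action: move right
bel = sense(bel, world, "pillar")     # 4. observation: see pillar
# belief now peaks at cell 1: the robot started at cell 0 and moved right
```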
31
Modeling objects in the environment http://www.cs.washington.edu/research/rse-lab/projects/mcl
33
Axioms of Probability Theory 1. 0 ≤ P(A) ≤ 1. 2. P(True) = 1, P(False) = 0. 3. P(A ∨ B) = P(A) + P(B) − P(A ∧ B).
34
A Closer Look at Axiom 3: P(A ∨ B) = P(A) + P(B) − P(A ∧ B).
35
P(A ∨ ¬A) = P(A) + P(¬A) − P(A ∧ ¬A) (Axiom 3 with B = ¬A), so 1 = P(A) + P(¬A) (by logical equivalence), hence P(¬A) = 1 − P(A).
36
Discrete Random Variables X denotes a random variable. X can take on a countable number of values in {x₁, x₂, …, xₙ}. P(X = xᵢ), or P(xᵢ), is the probability that the random variable X takes on the value xᵢ. P(·) is called the probability mass function.
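A tiny PMF example (the values are chosen arbitrarily):

```python
# A PMF for a discrete random variable X over {1, 2, 3, 4}.
# P(X = xᵢ) must be non-negative and the values must sum to 1.
pmf = {1: 0.1, 2: 0.4, 3: 0.3, 4: 0.2}

assert abs(sum(pmf.values()) - 1.0) < 1e-12
expected_value = sum(x * p for x, p in pmf.items())   # E[X] = 2.6
```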
37
Continuous Random Variables (density p(x) over values x)
38
Probability Density Function Since continuous probability functions are defined over an infinite number of points in a continuous interval, the probability at any single point is always 0. The magnitude of the curve can exceed 1 in some regions, but the total area under the curve must equal 1.
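A quick numeric illustration: a narrow Gaussian density exceeds 1 at its peak while its total area still integrates to 1 (sigma = 0.1 is an arbitrary choice):

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=0.1):
    """Density of N(mu, sigma²); with sigma = 0.1 the peak is ≈ 3.99,
    well above 1, yet the total area under the curve is still 1."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

peak = gaussian_pdf(0.0)     # ≈ 3.99: a density value, not a probability
# Crude Riemann sum over [-1, 1]; tails beyond ±1 are negligible here.
dx = 0.001
area = sum(gaussian_pdf(-1 + i * dx) * dx for i in range(2000))
```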
39
Joint Probability
40
Conditional Probability P(x|y) is the probability of x given y: P(x, y) = P(x|y) P(y). If X and Y are independent, then P(x|y) = P(x).
41
Inference by Enumeration
42
Law of Total Probability Discrete case: P(x) = Σ_y P(x|y) P(y). Continuous case: P(x) = ∫ P(x|y) p(y) dy.
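A small discrete-case check (the distributions are made up):

```python
# Law of total probability (discrete): P(x) = Σ_y P(x|y) P(y)
p_y = {"sunny": 0.7, "rainy": 0.3}            # hypothetical prior over y
p_x_given_y = {"sunny": 0.9, "rainy": 0.2}    # hypothetical P(x | y)

# Marginalize y out to get P(x): 0.9·0.7 + 0.2·0.3 = 0.69
p_x = sum(p_x_given_y[y] * p_y[y] for y in p_y)
```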
43
Bayes Formula P(x|y) = P(y|x) P(x) / P(y). P(x|y) is the posterior (conditional) probability distribution; P(x) is the prior probability distribution. If y is a new sensor reading, P(y|x) is a model of the characteristics of the sensor, and P(y) does not depend on x.
44
Bayes Formula P(x, y) = P(x|y) P(y) = P(y|x) P(x), so P(x|y) = P(y|x) P(x) / P(y).
45
Normalization P(x|y) = η P(y|x) P(x), where η = 1/P(y) = 1 / Σ_x P(y|x) P(x). Algorithm: compute the unnormalized product P(y|x) P(x) for every x, then divide each by their sum.
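A sketch of the normalization algorithm over a discrete state space (the likelihood and prior values are arbitrary):

```python
def normalize_posterior(likelihood, prior):
    """Bayes update with normalizer η = 1 / Σ_x P(y|x) P(x): multiply
    likelihood by prior pointwise, then rescale so the result sums to 1."""
    unnorm = [l * p for l, p in zip(likelihood, prior)]
    eta = 1.0 / sum(unnorm)
    return [eta * u for u in unnorm]

# Two states; P(y|x) = [0.8, 0.2], P(x) = [0.3, 0.7] (made-up numbers).
posterior = normalize_posterior([0.8, 0.2], [0.3, 0.7])   # ≈ [0.632, 0.368]
```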
46
Conditioning Law of total probability: P(x) = ∫ P(x|y) P(y) dy.
47
Bayes Rule with Background Knowledge P(x|y, z) = P(y|x, z) P(x|z) / P(y|z).
48
Conditional Independence P(x, y|z) = P(x|z) P(y|z), equivalent to P(x|z) = P(x|y, z) and P(y|z) = P(y|x, z).
49
Simple Example of State Estimation
50
Causal vs. Diagnostic Reasoning Causal knowledge (e.g., P(z|open)) comes from the sensor model; diagnostic knowledge (e.g., P(open|z)) is what we want.
51
Example P(z|open) = 0.6, P(z|¬open) = 0.3, P(open) = P(¬open) = 0.5.
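Working this example through Bayes' formula:

```python
# P(open|z) = P(z|open) P(open) / (P(z|open) P(open) + P(z|¬open) P(¬open))
p_z_open, p_z_not_open = 0.6, 0.3
p_open = p_not_open = 0.5

p_open_given_z = (p_z_open * p_open) / (
    p_z_open * p_open + p_z_not_open * p_not_open
)
# → 0.3 / 0.45 = 2/3: the observation raised the belief the door is open
```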
52
Combining Evidence Suppose our robot obtains another observation z₂. How can we integrate this new information? More generally, how can we estimate P(x | z₁, …, zₙ)?
53
Recursive Bayesian Updating Markov assumption: zₙ is independent of z₁, …, zₙ₋₁ if we know x.
54
Example: 2nd Measurement P(z₂|open) = 0.5, P(z₂|¬open) = 0.6, P(open|z₁) = 2/3.
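The recursive update can be sketched as a fold over measurements; with the numbers above (z₁: 0.6/0.3, z₂: 0.5/0.6) it reproduces P(open|z₁) = 2/3 and then P(open|z₂, z₁) = 0.625:

```python
def recursive_update(belief, likelihoods):
    """Fold in each measurement one at a time; under the Markov assumption
    past z's need not be stored, only the current belief."""
    p_x, p_not_x = belief
    for l_x, l_not_x in likelihoods:          # (P(z|x), P(z|¬x)) per reading
        p_x, p_not_x = p_x * l_x, p_not_x * l_not_x
        s = p_x + p_not_x
        p_x, p_not_x = p_x / s, p_not_x / s   # normalize after each step
    return p_x

# z1 then z2, starting from the uniform prior P(open) = 0.5.
p = recursive_update((0.5, 0.5), [(0.6, 0.3), (0.5, 0.6)])   # ≈ 0.625
```

Note that z₂ slightly lowers the belief that the door is open, since z₂ is more likely when the door is closed.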