Probabilistic Robotics

Presentation on theme: "Probabilistic Robotics"— Presentation transcript:

1 Probabilistic Robotics
Advanced Mobile Robotics
Probabilistic Robotics
Dr. Jizhong Xiao
Department of Electrical Engineering, City College of New York

2 Robot Navigation
Fundamental problems in providing a mobile robot with autonomous capabilities:
Where am I going? → Mission Planning
What's the best way there? → Path Planning
Where have I been? How can the robot create an environmental map with imperfect sensors? → Mapping
Where am I? How can the robot tell where it is on a map? → Localization
What if you're lost and don't have a map? → Robot SLAM

3 Representation of the Environment
Environment Representation
Continuous metric → x, y, θ
Discrete metric → metric grid
Discrete topological → topological graph

4 Localization, Where am I?
Perception
Odometry, dead reckoning
Localization based on external sensors, beacons, or landmarks
Probabilistic map-based localization

5 Localization Methods
Mathematical background: Bayes filter
Markov localization
Central idea: represent the robot's belief by a probability distribution over possible positions, and use Bayes' rule and convolution to update the belief whenever the robot senses or moves.
Markov assumption: past and future data are independent if one knows the current state.
Kalman filtering
Central idea: pose the localization problem as a sensor fusion problem.
Assumption: Gaussian distribution function.
Particle filtering
Central idea: sample-based, nonparametric filter; a Monte Carlo method.
SLAM (simultaneous localization and mapping)
Multi-robot localization

6 Markov Localization Applying probability theory to robot localization
Markov localization uses an explicit, discrete representation of the probability of all positions in the state space. This is usually done by representing the environment with a grid or a topological graph with a finite number of possible states (positions). During each update, the probability of every state (element) of the entire space is updated.

7 Markov Localization Example
Assume the robot position is one-dimensional. The robot is placed somewhere in the environment, but it is not told its location. The robot queries its sensors and finds out it is next to a door.

8 Markov Localization Example
The robot moves one meter forward. To account for the inherent noise in robot motion, the new belief is smoother. The robot queries its sensors and again finds itself next to a door.
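The sense-move cycle in this example can be sketched in code. This is an illustrative sketch only: the number of grid cells, the door positions, and the sensor and motion probabilities below are made-up values, not numbers from the lecture.

```python
# Hypothetical 1D Markov localization sketch over a discrete grid.
N_CELLS = 10
DOORS = {1, 4, 8}            # assumed door locations (illustrative)
P_HIT, P_MISS = 0.6, 0.2     # assumed P(z="door" | at door / not at door)

def sense_door(belief):
    """Bayes update after the sensor reports 'door', then normalize."""
    posterior = [b * (P_HIT if i in DOORS else P_MISS)
                 for i, b in enumerate(belief)]
    eta = sum(posterior)
    return [p / eta for p in posterior]

def move_right(belief, p_exact=0.8, p_stay=0.1, p_over=0.1):
    """Prediction step: shift the belief one cell right with motion noise,
    which smooths (blurs) the distribution."""
    n = len(belief)
    new_belief = [0.0] * n
    for i, b in enumerate(belief):
        new_belief[(i + 1) % n] += p_exact * b   # intended move
        new_belief[i] += p_stay * b              # undershoot
        new_belief[(i + 2) % n] += p_over * b    # overshoot
    return new_belief

# Uniform prior: the robot does not know where it is.
belief = [1.0 / N_CELLS] * N_CELLS
belief = sense_door(belief)   # belief peaks at the door cells
belief = move_right(belief)   # motion spreads the peaks
belief = sense_door(belief)   # second door reading sharpens the belief
```

After each `sense_door` the belief is renormalized, so it always remains a valid probability distribution over the cells.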

9 Probabilistic Robotics
Falls between model-based and behavior-based techniques
There are models and sensor measurements, but they are assumed to be incomplete and insufficient for control
Statistics provides the mathematical glue to integrate models and sensor measurements
Basic mathematics: probabilities, Bayes rule, Bayes filters

10 The next slides are provided by the authors of the book "Probabilistic Robotics"; they can be downloaded from the book's website.

11 Probabilistic Robotics
Mathematical Background
Probabilities
Bayes rule
Bayes filters

12 Probabilistic Robotics
Key idea: Explicit representation of uncertainty using the calculus of probability theory Perception = state estimation Action = utility optimization

13 Axioms of Probability Theory
Pr(A) denotes the probability that proposition A is true.
Axiom 1: 0 ≤ Pr(A) ≤ 1
Axiom 2: Pr(True) = 1, Pr(False) = 0
Axiom 3: Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)

14 A Closer Look at Axiom 3
Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
[Venn diagram of A and B, with the overlap A ∧ B counted only once]

15 Using the Axioms
Pr(A ∨ ¬A) = Pr(A) + Pr(¬A) − Pr(A ∧ ¬A)
Pr(True) = Pr(A) + Pr(¬A) − Pr(False)
1 = Pr(A) + Pr(¬A) − 0
Pr(¬A) = 1 − Pr(A)

16 Discrete Random Variables
X denotes a random variable. X can take on a countable number of values in {x1, x2, …, xn}. P(X = xi), or P(xi), is the probability that the random variable X takes on value xi. P(·) is called the probability mass function; its values sum to 1: Σi P(xi) = 1.

17 Continuous Random Variables
X takes on values in the continuum. p(X = x), or p(x), is a probability density function, with ∫ p(x) dx = 1.
[Figure: an example density p(x) plotted over x]

18 Joint and Conditional Probability
P(X = x and Y = y) = P(x, y)
If X and Y are independent, then P(x, y) = P(x) P(y)
P(x | y) is the probability of x given y:
P(x | y) = P(x, y) / P(y)
P(x, y) = P(x | y) P(y)
If X and Y are independent, then P(x | y) = P(x)
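These identities can be checked numerically. The joint distribution below over two binary variables is an arbitrary made-up example, used only to exercise the definitions.

```python
# Assumed joint distribution P(X, Y) over two binary variables.
P_joint = {
    ("x0", "y0"): 0.1, ("x0", "y1"): 0.3,
    ("x1", "y0"): 0.2, ("x1", "y1"): 0.4,
}

def marginal_y(y):
    """P(y) obtained by summing the joint over x."""
    return sum(p for (x, yy), p in P_joint.items() if yy == y)

def conditional(x, y):
    """P(x | y) = P(x, y) / P(y)."""
    return P_joint[(x, y)] / marginal_y(y)
```

For instance, `conditional("x0", "y1")` evaluates P(x0, y1) / P(y1) = 0.3 / 0.7, and multiplying it back by P(y1) recovers the joint entry, matching the product rule P(x, y) = P(x | y) P(y).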

19 Law of Total Probability, Marginals
Discrete case: P(x) = Σy P(x, y) = Σy P(x | y) P(y), with Σx P(x) = 1
Continuous case: p(x) = ∫ p(x, y) dy = ∫ p(x | y) p(y) dy, with ∫ p(x) dx = 1

20 Bayes Formula
P(x | y) = P(y | x) P(x) / P(y)
If y is a new sensor reading:
P(x): prior probability distribution
P(x | y): posterior probability distribution
P(y | x): generative model, characteristics of the sensor
P(y): does not depend on x (normalizer)
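A minimal sketch of Bayes' rule over a discrete state space, including the normalizer η = 1/P(y); the prior and likelihood values below are invented for illustration.

```python
# Bayes update over a discrete state space (illustrative values).
def bayes_update(prior, likelihood):
    """Return the posterior P(x|y) = eta * P(y|x) * P(x) for each state x."""
    unnormalized = {x: likelihood[x] * p for x, p in prior.items()}
    eta = 1.0 / sum(unnormalized.values())   # eta = 1 / P(y)
    return {x: eta * u for x, u in unnormalized.items()}

prior = {"a": 0.5, "b": 0.3, "c": 0.2}        # assumed P(x)
likelihood = {"a": 0.9, "b": 0.1, "c": 0.1}   # assumed P(y|x)
posterior = bayes_update(prior, likelihood)
```

Because η rescales the products P(y|x) P(x) to sum to one, P(y) itself never needs to be modeled explicitly.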

21 Normalization
P(x | y) = η P(y | x) P(x), with η = 1 / P(y) = 1 / Σx' P(y | x') P(x')
Algorithm:
for all x: aux(x) = P(y | x) P(x)
η = 1 / Σx aux(x)
for all x: P(x | y) = η aux(x)

22 Conditioning
Law of total probability:
P(x) = ∫ P(x, z) dz = ∫ P(x | z) P(z) dz

23 Bayes Rule with Background Knowledge
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

24 Conditioning
Total probability:
P(x | y) = ∫ P(x | y, z) P(z | y) dz

25 Conditional Independence
P(x, y | z) = P(x | z) P(y | z)
equivalent to
P(x | z) = P(x | y, z)
and
P(y | z) = P(y | x, z)

26 Simple Example of State Estimation
Suppose a robot obtains measurement z What is P(open|z)?

27 Causal vs. Diagnostic Reasoning
P(open|z) is diagnostic. P(z|open) is causal. Often causal knowledge is easier to obtain. Bayes rule allows us to use the causal knowledge: P(open|z) = P(z|open) P(open) / P(z). To estimate P(z|open), just count frequencies!

28 Example
P(z|open) = 0.6, P(z|¬open) = 0.3, P(open) = P(¬open) = 0.5
P(open|z) = P(z|open) P(open) / (P(z|open) P(open) + P(z|¬open) P(¬open))
P(open|z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3
z raises the probability that the door is open.
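This computation can be done directly in code, using the slide's numbers (reading the second likelihood as P(z|¬open) = 0.3, since ¬open = closed):

```python
# Door example: one measurement z, updated with Bayes' rule.
p_z_open, p_z_not_open = 0.6, 0.3     # P(z|open), P(z|not open)
p_open = p_not_open = 0.5             # uniform prior

p_open_given_z = (p_z_open * p_open) / (
    p_z_open * p_open + p_z_not_open * p_not_open)
# p_open_given_z = 2/3: z raises the probability that the door is open
```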

29 Combining Evidence Suppose our robot obtains another observation z2.
How can we integrate this new information? More generally, how can we estimate P(x| z1...zn )?

30 Recursive Bayesian Updating
Markov assumption: zn is independent of z1, ..., zn-1 if we know x. Then:
P(x | z1, ..., zn) = η P(zn | x) P(x | z1, ..., zn-1)
The posterior can therefore be updated recursively as each new measurement arrives.

31 Example: Second Measurement
P(z2|open) = 0.5, P(z2|¬open) = 0.6, P(open|z1) = 2/3
P(open|z2, z1) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8 = 0.625
z2 lowers the probability that the door is open.
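The recursive update with the second measurement, using the slide's numbers (reading the second likelihood as P(z2|¬open) = 0.6):

```python
# Second measurement z2, folded into the posterior from z1.
p_z2_open, p_z2_not_open = 0.5, 0.6   # P(z2|open), P(z2|not open)
p_open_z1 = 2 / 3                     # posterior after the first measurement

p_open_z12 = (p_z2_open * p_open_z1) / (
    p_z2_open * p_open_z1 + p_z2_not_open * (1 - p_open_z1))
# p_open_z12 = 5/8 = 0.625, down from 2/3: z2 lowers P(open)
```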

32 Actions
Often the world is dynamic, since
actions carried out by the robot,
actions carried out by other agents, or
just time passing by
change the world.
How can we incorporate such actions?

33 Typical Actions
The robot turns its wheels to move.
The robot uses its manipulator to grasp an object.
Plants grow over time…
Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.

34 Modeling Actions
To incorporate the outcome of an action u into the current "belief", we use the conditional pdf P(x | u, x'). This term specifies the probability that executing u changes the state from x' to x.

35 Example: Closing the door

36 State Transitions
P(x | u, x') for u = "close door":
P(closed | u, open) = 0.9, P(open | u, open) = 0.1
P(closed | u, closed) = 1, P(open | u, closed) = 0
If the door is open, the action "close door" succeeds in 90% of all cases; a closed door stays closed.

37 Integrating the Outcome of Actions
Continuous case: P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case: P(x | u) = Σx' P(x | u, x') P(x')

38 Example: The Resulting Belief
P(closed | u) = P(closed | u, open) P(open) + P(closed | u, closed) P(closed) = 0.9 · 5/8 + 1 · 3/8 = 15/16
P(open | u) = 0.1 · 5/8 + 0 · 3/8 = 1/16

39 Bayes Filters: Framework
Given:
Stream of observations z and action data u
Sensor model P(z | x)
Action model P(x | u, x')
Prior probability of the system state P(x)
Wanted:
Estimate of the state X of a dynamical system
The posterior of the state is also called the belief:
Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)

40 Markov Assumption
Measurement probability: p(z_t | x_0:t, z_1:t-1, u_1:t) = p(z_t | x_t)
State transition probability: p(x_t | x_0:t-1, z_1:t-1, u_1:t) = p(x_t | x_t-1, u_t)
Markov assumption: past and future data are independent if one knows the current state.
Underlying assumptions:
Static world, independent noise
Perfect model, no approximation errors

41 Bayes Filters
z = observation, u = action, x = state
Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
= η P(z_t | x_t, u_1, z_1, ..., u_t) P(x_t | u_1, z_1, ..., u_t)   (Bayes)
= η P(z_t | x_t) P(x_t | u_1, z_1, ..., u_t)   (Markov)
= η P(z_t | x_t) ∫ P(x_t | u_1, z_1, ..., u_t, x_t-1) P(x_t-1 | u_1, z_1, ..., u_t) dx_t-1   (total prob.)
= η P(z_t | x_t) ∫ P(x_t | u_t, x_t-1) Bel(x_t-1) dx_t-1   (Markov)

42 Bayes Filter Algorithm
Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_t-1) Bel(x_t-1) dx_t-1
Algorithm Bayes_filter(Bel(x), d):
η = 0
if d is a perceptual data item z then
for all x do: Bel'(x) = P(z | x) Bel(x); η = η + Bel'(x)
for all x do: Bel'(x) = η⁻¹ Bel'(x)
else if d is an action data item u then
for all x do: Bel'(x) = Σx' P(x | u, x') Bel(x')
return Bel'(x)
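The pseudocode above can be transcribed directly into Python over a finite state set, with the sensor and action models passed in as functions; the door-example numbers from the earlier slides are used here to exercise it.

```python
# Discrete Bayes filter: one update per data item (measurement or action).
def bayes_filter(bel, d, states, p_z_given_x=None, p_x_given_ux=None):
    """Return the updated belief Bel'(x) after observing d = (kind, value)."""
    kind, value = d
    if kind == "z":                           # perceptual data item
        bel_new = {x: p_z_given_x(value, x) * bel[x] for x in states}
        eta = 1.0 / sum(bel_new.values())     # normalizer
        return {x: eta * b for x, b in bel_new.items()}
    elif kind == "u":                         # action data item
        return {x: sum(p_x_given_ux(x, value, xp) * bel[xp] for xp in states)
                for x in states}
    raise ValueError("d must be ('z', ...) or ('u', ...)")

# Door example: sensor model P(z|x) and "close door" action model P(x|u,x').
states = ("open", "closed")
p_z = lambda z, x: {"open": 0.6, "closed": 0.3}[x]   # z ignored here
p_xux = lambda x, u, xp: {("open", "open"): 0.1, ("open", "closed"): 0.0,
                          ("closed", "open"): 0.9,
                          ("closed", "closed"): 1.0}[(x, xp)]

bel = {"open": 0.5, "closed": 0.5}                   # uniform prior
bel = bayes_filter(bel, ("z", "door"), states, p_z_given_x=p_z)
bel = bayes_filter(bel, ("u", "close"), states, p_x_given_ux=p_xux)
```

After the measurement, Bel(open) = 2/3; the "close door" action then moves nearly all the probability mass to the closed state.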

43 Bayes Filters are Familiar!
Kalman filters Particle filters Hidden Markov models Dynamic Bayesian networks Partially Observable Markov Decision Processes (POMDPs)

44 Summary Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

45 Break
Exercises 2 and 3: pp. 36–37 of the textbook

