Published by Sheena Carson. Modified over 9 years ago.
Outline (used in Spring 2013):
1. Examples of using probabilistic ideas in robotics
2. Reverend Bayes and a review of probabilistic ideas
3. Introduction to Bayesian AI
4. Simple example of state estimation: a robot and a door to pass
5. Simple example of modeling actions
6. Bayes filters
7. Probabilistic robotics
Probabilistic Robotics: Sensing and Planning in Robotics
Examples of probabilistic ideas in robotics
Robotics Yesterday
Robotics Today
Robotics Tomorrow? More like a human.
1. Boolean logic and differential equations are the basis of classical robotics.
2. Probabilistic Bayesian methods are the mathematical foundation of humanoid and future robotics.
What is robotics today?
1. Definition (Brady): Robotics is the intelligent connection of perception and action.
2. The trend is toward human-like reasoning in emotional service robots.
Perception, action, reasoning, emotions: all need probability.
Trends in Robotics Research
– Classical Robotics (mid-70s): exact models; no sensing necessary
– Reactive Paradigm (mid-80s): no models; relies heavily on good sensing
– Hybrids (since the 90s): model-based at higher levels, reactive at lower levels
– Probabilistic Robotics (since the mid-90s): seamless integration of models and sensing; inaccurate models, inaccurate sensors
Robots are moving away from factory floors to entertainment, toys, personal service, medicine, surgery, industrial automation (mining, harvesting), and hazardous environments (space, underwater).
Examples of robots that need stochastic reasoning and stochastic learning
Entertainment Robots: Toys
Entertainment Robots: RoboCup
Autonomous Flying Machines
Humanoids
Mobile Robots as Museum Tour-Guides. Life is full of uncertainties.
Tasks to be Solved by Robots: planning, perception, modeling, localization, interaction, acting, manipulation, cooperation, recognition of an environment that changes, recognition of human behavior, recognition of human gestures, ...
Uncertainty is inherent and fundamental. Uncertainty arises from four major factors:
1. The environment is stochastic and unpredictable.
2. Robot actions are stochastic and noisy.
3. Sensors are limited, noisy, and inaccurate.
4. Models are inaccurate and incomplete.
Nature of Sensor Data: odometry data and range data.
Main probabilistic ideas in robotics
Probabilistic Robotics. Key idea: explicit representation of uncertainty using the calculus of probability theory. Perception = state estimation; action = utility optimization.
Advantages and Pitfalls of Probabilistic Robotics
Advantages:
1. Can accommodate inaccurate models
2. Can accommodate imperfect sensors
3. Robust in real-world applications
4. Best known approach to many hard robotics problems
Pitfalls:
5. Computationally demanding
6. False assumptions
7. Approximate
Introduction to "Bayesian Artificial Intelligence"
– Reasoning under uncertainty
– Probabilities
– Bayesian approach: Bayes' Theorem, conditionalization, Bayesian decision theory
Reasoning under Uncertainty
Uncertainty: the quality or state of being not clearly known; it distinguishes deductive knowledge from inductive belief.
Sources of uncertainty:
– Ignorance
– Complexity
– Physical randomness
– Vagueness
Reminder of Bayes' Formula:
P(x | z) = P(z | x) P(x) / P(z),
where P(z | x) is the likelihood, P(x) the prior, and P(z) the evidence (normalizer).
Normalization Algorithm:
P(x | z) = η P(z | x) P(x), with η = 1 / Σ_x P(z | x) P(x);
that is, multiply likelihood by prior, then normalize so the posterior sums to one.
Conditional knowledge has many applications:
1. Total probability: P(x) = Σ_y P(x | y) P(y)
2. Bayes rule with background knowledge: P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
See the law of total probability and the earlier examples.
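As a quick numerical check of the total-probability formula, here is a minimal Python sketch; the door-sensor numbers (P(z|open) = 0.6, P(z|¬open) = 0.3, uniform prior) are the ones used later in this deck.

```python
# Law of total probability: P(z) = P(z|open)P(open) + P(z|not open)P(not open).
# Numbers taken from the door example later in the deck.
p_z_given = {'open': 0.6, 'not_open': 0.3}
prior = {'open': 0.5, 'not_open': 0.5}

p_z = sum(p_z_given[x] * prior[x] for x in prior)
print(p_z)  # 0.45 -- this is exactly the normalizer (evidence) in Bayes rule
```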
I will present many examples of using Bayesian probability in mobile robots.
Simple Example of State Estimation: the door-opening problem.
Simple Example of State Estimation. Suppose a robot obtains measurement z. What is P(open | z)?
Simple Example of State Estimation. Suppose a robot obtains measurement z. What is P(open | z), i.e., the probability that the door is open given the measurement z?
Causal vs. Diagnostic Reasoning. P(open | z) is diagnostic; P(z | open) is causal. Often causal knowledge is easier to obtain, and Bayes rule allows us to use it: count frequencies! We open the door, repeatedly measure z, and count frequencies.
Examples of calculating probabilities for the door-opening problem:
P(z | open) = 0.6 (probability of measurement z when the door is open)
P(z | ¬open) = 0.3 (probability of measurement z when the door is not open)
P(open) = P(¬open) = 0.5
Bayes rule gives P(open | z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3, so z raises the probability that the door is open.
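The arithmetic on this slide can be checked with a few lines of Python (a minimal sketch; the function name is mine):

```python
def posterior_open(p_z_open, p_z_not_open, prior_open):
    """P(open | z) by Bayes rule: normalize likelihood times prior."""
    num = p_z_open * prior_open
    return num / (num + p_z_not_open * (1.0 - prior_open))

# Slide values: P(z|open)=0.6, P(z|not open)=0.3, P(open)=0.5.
p = posterior_open(0.6, 0.3, 0.5)
print(p)  # 2/3: z raises the probability that the door is open
```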
Combining Evidence. Suppose our robot obtains another observation z_2. How can we integrate this new information? More generally, how can we estimate P(x | z_1, ..., z_n)? What do we do when more information arrives?
Recursive Bayesian Updating. Markov assumption: z_n is independent of z_1, ..., z_{n-1} if we know x. Then
P(x | z_1, ..., z_n) = η P(z_n | x) P(x | z_1, ..., z_{n-1}),
so the previous posterior acts as the prior for the next measurement.
Example: a second measurement added in our "robot and door" problem.
P(z_2 | open) = 0.5, P(z_2 | ¬open) = 0.6, and P(open | z_1) = 2/3 from the first measurement.
The recursive update gives P(open | z_1, z_2) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8, so z_2 lowers the probability that the door is open. This is how we proceed whenever a new measurement result arrives.
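The same one-step update, applied recursively with the previous posterior as the new prior, reproduces the slide's numbers (sketch; names are mine):

```python
def bayes_update(p_z_x, p_z_not_x, prior):
    """One recursive Bayesian update: new posterior from likelihoods and prior."""
    num = p_z_x * prior
    return num / (num + p_z_not_x * (1.0 - prior))

# P(open|z1) = 2/3 from the first measurement becomes the prior for z2.
p_open = bayes_update(0.5, 0.6, 2.0 / 3.0)  # P(z2|open)=0.5, P(z2|not open)=0.6
print(p_open)  # 0.625 = 5/8 -- z2 lowers the probability that the door is open
```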
A Typical Pitfall. Two possible locations x_1 and x_2, with P(x_1) = 0.99, P(z | x_1) = 0.07, P(z | x_2) = 0.09. (The slide's figure plots the probabilities of x_1 and x_2 against the number of integrations of the same measurement z: the belief in x_1 collapses even though the prior strongly favors it.)
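The pitfall can be reproduced numerically: if the very same measurement z is (wrongly) treated as fresh, independent evidence and integrated over and over, the belief in x_1 collapses despite the 0.99 prior (illustrative sketch; names are mine):

```python
def update(bel_x1, p_z_x1=0.07, p_z_x2=0.09):
    """Integrate measurement z once; slide values as defaults."""
    num = p_z_x1 * bel_x1
    return num / (num + p_z_x2 * (1.0 - bel_x1))

bel = 0.99                 # strong prior for x1
for _ in range(60):        # integrate the SAME z sixty times
    bel = update(bel)
print(bel)                 # belief in x1 has collapsed to nearly zero
```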
The behavior/recognition model for the robot should take robot actions into account.
Actions change the world. How do we use this knowledge? Often the world is dynamic because:
– actions carried out by the robot change the world,
– actions carried out by other agents change the world,
– or just the passing of time changes the world.
How can we incorporate such actions?
Typical Actions of a Robot
– The robot turns its wheels to move.
– The robot uses its manipulator to grasp an object.
– Plants grow over time...
Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.
Modeling Actions Probabilistically. To incorporate the outcome of an action u into the current belief, we use the conditional pdf P(x | u, x'). This term specifies the probability that executing u changes the state from x' to x. (pdf = probability density function)
Simple example of modeling Actions
Example: Closing the door
State Transitions. P(x | u, x') for u = "close door": if the door is open, the action "close door" succeeds in 90% of all cases (and fails, leaving the door open, in 10%); in this model an already closed door stays closed. (State-transition diagram for the close-door example.)
Integrating the Outcome of Actions.
Continuous case: P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case: P(x | u) = Σ_{x'} P(x | u, x') P(x')
Example: the resulting belief for the door problem. With Bel(open) = 5/8 after the two measurements and u = "close door":
P(closed | u) = 0.9 · 5/8 + 1 · 3/8 = 15/16 (probability that the door is closed after the action)
P(open | u) = 0.1 · 5/8 + 0 · 3/8 = 1/16 (probability that the door is open after the action)
This continues the open/closed door example.
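The discrete prediction step can be checked in a few lines (a sketch; the 5/8 prior is the belief that follows from the two measurements earlier, and the 0.9/1.0 transition probabilities are the close-door model above):

```python
def predict_close(bel_open):
    """Prediction step P(x|u) = sum_x' P(x|u,x') Bel(x') for u = 'close door'."""
    bel_closed = 1.0 - bel_open
    p_open_after = 0.1 * bel_open                        # closing an open door fails 10% of the time
    p_closed_after = 0.9 * bel_open + 1.0 * bel_closed   # closed doors stay closed
    return p_open_after, p_closed_after

p_open, p_closed = predict_close(5.0 / 8.0)
print(p_open, p_closed)  # 1/16 and 15/16
```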
Concepts of Probabilistic Robotics:
1. Probabilities are the base concept
2. Bayes rule, used in most applications
3. Bayes filters, used for estimation
4. Bayes networks
5. Markov chains
6. Bayesian decision theory
7. Bayes concepts in AI
Bayes filters include:
1. Kalman filters
2. Particle filters
3. Other filters
Key idea of Probabilistic Robotics (repeated): explicit representation of uncertainty using the calculus of probability theory.
– Perception = state estimation
– Action = utility optimization
1. Probability Calculus Markov assumption: z n is independent of z 1,...,z n-1 if we know x.
2. Main ideas of using Bayes in robotics
1. Bayes rule allows us to compute probabilities that are hard to assess otherwise.
2. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
3. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
Bayes Filters
3. Bayes Filters: Framework
Given:
– Stream of observations z and action data u: d_t = {u_1, z_1, ..., u_t, z_t}
– Sensor model P(z | x)
– Action model P(x | u, x')
– Prior probability of the system state P(x)
Wanted:
– Estimate of the state x of the dynamical system
– The posterior of the state, also called the belief: Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
Markov assumption: z_n is independent of z_1, ..., z_{n-1} if we know x.
Bayes Filters: derivation of the belief-update rule (z = observation, u = action, x = state). We derive the posterior of the state:
Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
= η P(z_t | x_t, u_1, z_1, ..., u_t) P(x_t | u_1, z_1, ..., u_t)   [Bayes]
= η P(z_t | x_t) P(x_t | u_1, z_1, ..., u_t)   [Markov]
= η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1}   [Total prob.]
= η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., z_{t-1}) dx_{t-1}   [Markov]
= η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
1. Algorithm Bayes_filter(Bel(x), d):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η⁻¹ Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = Σ_{x'} P(x | u, x') Bel(x')
12.  Return Bel'(x)
We derived an important formula and now use it in the Bayes filter: the perceptual branch calculates the new belief for every state and normalizes it; the action branch calculates the new belief after an action for every state.
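The algorithm above translates almost line by line into Python for a finite state set (a minimal sketch, not the deck's code; dictionaries map each state to its belief):

```python
def bayes_filter(bel, kind, model):
    """One step of the discrete Bayes filter.
    bel: dict mapping state -> probability.
    kind 'z': measurement update; model[x] = P(z|x).
    kind 'u': action update; model[x][x_prev] = P(x|u,x_prev)."""
    if kind == 'z':
        new = {x: model[x] * bel[x] for x in bel}    # likelihood * prior
        eta = sum(new.values())                      # normalizer
        return {x: v / eta for x, v in new.items()}
    # action: total probability over previous states
    return {x: sum(model[x][xp] * bel[xp] for xp in bel) for x in bel}

# Door example: sense z, then execute "close door".
bel = {'open': 0.5, 'closed': 0.5}
bel = bayes_filter(bel, 'z', {'open': 0.6, 'closed': 0.3})   # open -> 2/3
close = {'open':   {'open': 0.1, 'closed': 0.0},
         'closed': {'open': 0.9, 'closed': 1.0}}
bel = bayes_filter(bel, 'u', close)
print(bel)   # open: 1/15, closed: 14/15
```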
Bayes filters are the foundation of many methods:
– Kalman filters
– Particle filters
– Hidden Markov models (HMMs)
– Dynamic Bayesian networks
– Partially observable Markov decision processes (POMDPs)
4. Bayesian Networks and Markov Models: main concepts
– Bayesian AI
– Bayesian networks
– Decision networks
– Reasoning about changes over time
– Dynamic Bayesian networks
– Markov models
5. Markov Assumption and HMMs. Underlying assumptions of HMMs:
– Static world
– Independent noise
– Perfect model, no approximation errors
(The slide's diagram shows the states, controls, and outputs of the model.)
6. Bayesian Decision Theory
1. Frank Ramsey (1926).
2. Decision making under uncertainty: what action to take when the state of the world is unknown.
3. The Bayesian answer: find the utility of each possible outcome (action-state pair), and take the action that maximizes expected utility.
Bayesian Decision Theory: Example (a story of a friend who wanted to get married scientifically)

Action          | Rain (p = 0.4) | Shine (1 − p = 0.6)
Take umbrella   |       30       |        10
Leave umbrella  |     −100       |        50

Expected utilities:
E(Take umbrella) = 30 · 0.4 + 10 · 0.6 = 18
E(Leave umbrella) = −100 · 0.4 + 50 · 0.6 = −10
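The expected-utility computation on this slide is easy to verify in Python (a small sketch of the utility table above):

```python
# Utility table from the slide: rows are actions, columns are world states.
utilities = {'take':  {'rain': 30,   'shine': 10},
             'leave': {'rain': -100, 'shine': 50}}
p = {'rain': 0.4, 'shine': 0.6}

def expected_utility(action):
    """E[U(action)] = sum over states of P(state) * U(action, state)."""
    return sum(p[s] * utilities[action][s] for s in p)

eu = {a: expected_utility(a) for a in utilities}
best = max(eu, key=eu.get)
print(eu, best)  # take: 18.0, leave: -10.0 -> take the umbrella
```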
7. Bayesian Conception of an AI
1. An autonomous agent that:
  1. has a utility structure (preferences),
  2. can learn about its world and the relationships (probabilities) between its actions and future states, and
  3. maximizes its expected utility.
2. The techniques used to learn about the world are mainly statistical (data mining).
Conclusion on Bayesian AI
– Reasoning under uncertainty
– Probabilities
– Bayesian approach: Bayes' Theorem, conditionalization, Bayesian decision theory
Summary Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
Sources: Gaurav S. Sukhatme, Computer Science Robotics Research Laboratory, University of Southern California.