State Estimation: Probability, Bayes Filtering


State Estimation: Probability, Bayes Filtering. Arunkumar Byravan, CSE 490R – Lecture 3.

The Interaction Loop: Sense, Plan, Act.
Sense: receive sensor data and estimate "state".
Plan: generate long-term plans based on state & goal.
Act: apply actions to the robot.

State (figure credit: Byron Boots, Statistical Robotics at GaTech). Markov assumption: the future is independent of the past given the present.

State Estimation
Estimate "state" using sensor data & actions:
- Robot position and orientation
- Environment map
- Location of people, other robots, etc.
Running example: indoor localization for mobile robots (given a map), fusing information from different sensors (LIDAR, wheel encoders) via probabilistic filtering.
Speaker notes: Recall the sense -> plan -> act loop; sensing is what feeds state estimation. What is state? For a self-driving car it could be its own position/orientation in the map, detected people, lane markers, other cars, buildings, etc. Different types of sensors can be used: camera, LIDAR, IMU, etc. One main problem in state estimation is localization: estimating the robot's own state (position/orientation) in an environment. Outdoors, GPS is a good proxy that provides good estimates; indoors, we need to estimate it explicitly.

Why a probabilistic approach? Explicitly represent uncertainty using the calculus of probability theory

Discrete Random Variables
X denotes a random variable. X can take on a countable number of values in {x_1, x_2, …, x_n}. p(X = x_i), or p(x_i), is the probability that the random variable X takes on value x_i. p(·) is called the probability mass function.
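As a small illustration (not part of the slides), a discrete PMF can be stored as a mapping from outcomes to probabilities; the outcome names and values below are hypothetical.

```python
# A minimal sketch (not from the slides): a discrete PMF as a Python dict.
# The outcome names and probabilities below are hypothetical.
weather_pmf = {"sunny": 0.7, "cloudy": 0.2, "rainy": 0.1}

assert abs(sum(weather_pmf.values()) - 1.0) < 1e-9  # a valid PMF sums to 1
print(weather_pmf["sunny"])  # p(X = sunny) = 0.7
```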

Continuous Random Variables
X takes on values in the continuum. p(X = x), or p(x), is the probability density function.

Gaussian PDF: p(x) = (1 / sqrt(2πσ²)) · exp(−(x − μ)² / (2σ²)), with mean μ and variance σ².
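A minimal sketch of evaluating this density in code; the function name and test values are mine, not from the slides.

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Evaluate the univariate Gaussian density p(x) = N(x; mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

print(gaussian_pdf(0.0))            # ~0.3989, peak of the standard normal
print(gaussian_pdf(2.0, 1.0, 0.5))  # density two standard deviations above the mean
```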

Joint and Conditional Probability
P(X = x and Y = y) = P(x, y).
If X and Y are independent, then P(x, y) = P(x) P(y).
P(x | y) is the probability of x given y: P(x | y) = P(x, y) / P(y), equivalently P(x, y) = P(x | y) P(y).
If X and Y are independent, then P(x | y) = P(x, y) / P(y) = P(x) P(y) / P(y) = P(x).
(Example: two neighboring laser beams; Venn diagram for P(x | y).)

Law of Total Probability, Marginals
Discrete case: P(x) = Σ_y P(x, y) = Σ_y P(x | y) P(y).
Continuous case: p(x) = ∫ p(x, y) dy = ∫ p(x | y) p(y) dy.
Example: x is a laser measurement and we want P(x). We may not have a good model for P(x) directly, but we do have a pretty good one for P(x) given the distance to a wall; computing the weighted sum over all possible wall positions yields P(x).
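A short sketch of this weighted sum in code, using the laser / wall-distance story; the three-way discretization and all numbers are made-up illustrative values.

```python
# Law of total probability, P(x) = sum_y P(x | y) P(y), with made-up numbers.
p_wall = {"near": 0.3, "mid": 0.5, "far": 0.2}             # P(y): hypothetical prior over wall distance
p_meas_given_wall = {"near": 0.8, "mid": 0.4, "far": 0.1}  # P(x | y): hypothetical sensor model for one reading x

p_meas = sum(p_meas_given_wall[y] * p_wall[y] for y in p_wall)  # weighted sum over wall positions
print(p_meas)  # 0.46
```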

Events
Given the joint distribution below, compute P(+x, +y), P(+x), and P(-y OR +x). Are X and Y independent?

X    Y    P
+x   +y   0.2
+x   -y   0.3
-x   +y   0.4
-x   -y   0.1

Answers: P(+x, +y) = 0.2; P(+x) = 0.5; P(-y OR +x) = 0.4 + 0.5 - 0.3 = 0.6.
Independent? P(+x) = 0.5 and P(+y) = 0.6, so P(+x) P(+y) = 0.3 ≠ P(+x, +y) = 0.2; X and Y are not independent.

Marginal Distributions

X    Y    P
+x   +y   0.2
+x   -y   0.3
-x   +y   0.4
-x   -y   0.1

Marginal over X: P(+x) = 0.5, P(-x) = 0.5.
Marginal over Y: P(+y) = 0.6, P(-y) = 0.4.

Conditional Probabilities
Using the same joint distribution:

X    Y    P
+x   +y   0.2
+x   -y   0.3
-x   +y   0.4
-x   -y   0.1

P(+x | +y) = 0.2 / 0.6 = 1/3.
P(-x | +y) = 0.4 / 0.6 = 2/3.
P(-y | +x) = 0.3 / 0.5 = 0.6.
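The arithmetic from these three table slides can be checked with a few lines of code; the sketch below is mine, but it uses exactly the joint table above.

```python
# Reproducing the table arithmetic from the three slides above.
joint = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3,
         ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

def marginal_x(x):
    return sum(p for (xv, _), p in joint.items() if xv == x)

def marginal_y(y):
    return sum(p for (_, yv), p in joint.items() if yv == y)

def cond_x_given_y(x, y):
    return joint[(x, y)] / marginal_y(y)

print(marginal_x("+x"), marginal_y("+y"))      # ~0.5, ~0.6
print(cond_x_given_y("+x", "+y"))              # 1/3
print(cond_x_given_y("-x", "+y"))              # 2/3
print(joint[("+x", "-y")] / marginal_x("+x"))  # P(-y | +x) = 0.6
```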

Bayes Formula
P(x | y) = P(y | x) P(x) / P(y) = (likelihood · prior) / evidence.
Often causal knowledge (P(y | x)) is easier to obtain than diagnostic knowledge (P(x | y)). Bayes rule allows us to use causal knowledge.

Simple Example of State Estimation. Suppose a robot obtains measurement z. What is P(open | z)?

Example
With the sensor model P(z | open) = 0.6, P(z | ¬open) = 0.3 and a uniform prior P(open) = P(¬open) = 0.5:
P(open | z) = 0.6 · 0.5 / (0.6 · 0.5 + 0.3 · 0.5) = 0.30 / 0.45 = 2/3.
So z raises the probability that the door is open. Doing the same for P(¬open | z) gives 0.3 · 0.5 / (0.6 · 0.5 + 0.3 · 0.5) = 1/3.
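The same arithmetic, written out in code; the numbers are the ones recovered from the slide's computation above.

```python
# The door example above, computed explicitly with Bayes rule.
p_z_given_open, p_z_given_closed = 0.6, 0.3   # sensor model from the slide
p_open, p_closed = 0.5, 0.5                   # uniform prior from the slide

evidence = p_z_given_open * p_open + p_z_given_closed * p_closed  # P(z) = 0.45
print(p_z_given_open * p_open / evidence)      # P(open | z)  = 0.30 / 0.45 = 2/3
print(p_z_given_closed * p_closed / evidence)  # P(¬open | z) = 0.15 / 0.45 = 1/3
```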

Normalization
P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x), where the normalizer is η = 1 / P(y) = 1 / Σ_x P(y | x) P(x).
Algorithm: for all x, compute aux(x) = P(y | x) P(x); set η = 1 / Σ_x aux(x); for all x, set P(x | y) = η aux(x).
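A sketch of this normalization trick in code: compute the unnormalized products P(y | x) P(x) and divide by their sum, so P(y) never has to be modeled explicitly. The function name is mine, not from the slides.

```python
def normalize_posterior(likelihood, prior):
    """Compute P(x | y) proportional to P(y | x) P(x), normalizing at the end."""
    unnormalized = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(unnormalized.values())   # eta = 1 / sum_x P(y | x) P(x)
    return {x: eta * p for x, p in unnormalized.items()}

# Reusing the door example: same posterior as dividing by P(z) directly.
print(normalize_posterior({"open": 0.6, "closed": 0.3}, {"open": 0.5, "closed": 0.5}))
```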

Conditioning
Bayes rule with background knowledge z:
P(x | y, z) = P(x, y, z) / P(y, z) = P(y | x, z) P(x, z) / P(y, z) = P(y | x, z) P(x | z) P(z) / (P(y | z) P(z)) = P(y | x, z) P(x | z) / P(y | z).
Total probability with conditioning:
P(x | y) = P(x, y) / P(y) = Σ_z P(x, y, z) / P(y) = Σ_z P(x | y, z) P(y, z) / P(y) = Σ_z P(x | y, z) P(z | y) P(y) / P(y) = Σ_z P(x | y, z) P(z | y).

Conditioning
Bayes rule and background knowledge: P(x | y, z) = P(y | x, z) P(x | z) / P(y | z). To derive it, just replace all conditionals p(x | y) by their p(x, y) / p(y) form.

Conditional Independence
P(x, y | z) = P(x | z) P(y | z) is equivalent to P(x | z) = P(x | y, z) and to P(y | z) = P(y | x, z).
Example: two neighboring laser beams again. Proof as homework.

Simple Example of State Estimation Suppose our robot obtains another observation z2. What is P(open|z1, z2)?

Recursive Bayesian Updating
P(x | z_1, …, z_n) = P(z_n | x, z_1, …, z_(n-1)) P(x | z_1, …, z_(n-1)) / P(z_n | z_1, …, z_(n-1)).
Markov assumption: z_n is conditionally independent of z_1, …, z_(n-1) given x. Then
P(x | z_1, …, z_n) = η P(z_n | x) P(x | z_1, …, z_(n-1)) = η_(1…n) [ Π_(i=1…n) P(z_i | x) ] P(x).

Example: Second Measurement z2 lowers the probability that the door is open.
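A sketch of this recursive update in code. The slide's actual likelihoods P(z2 | open) and P(z2 | ¬open) are not in this transcript, so the z2 values below are illustrative assumptions, chosen only so that z2 lowers the probability that the door is open.

```python
# Recursive Bayesian updating for the door example; z2 likelihoods are hypothetical.
def bayes_update(prior, likelihood):
    unnormalized = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}

belief = {"open": 0.5, "closed": 0.5}                        # prior
belief = bayes_update(belief, {"open": 0.6, "closed": 0.3})  # z1, from the earlier slide
print(belief["open"])                                        # 2/3
belief = bayes_update(belief, {"open": 0.5, "closed": 0.6})  # z2, hypothetical values
print(belief["open"])                                        # drops to 0.625 < 2/3
```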

Bayes Filters: Framework
Given: a stream of observations z and action data u, d_t = {u_1, z_1, …, u_t, z_t}; the sensor model P(z | x); the action model P(x | u, x'); the prior probability of the system state P(x).
Wanted: an estimate of the state X of a dynamical system. The posterior of the state is also called the belief: Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t).

Bayes Filters
z = observation, u = action, x = state.
Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
  (Bayes)       = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)
  (Markov)      = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)
  (Total prob.) = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_(t-1)) P(x_(t-1) | u_1, z_1, …, u_t) dx_(t-1)
  (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_(t-1)) P(x_(t-1) | u_1, z_1, …, z_(t-1)) dx_(t-1)
                = η P(z_t | x_t) ∫ P(x_t | u_t, x_(t-1)) Bel(x_(t-1)) dx_(t-1).

Bayes Filter Algorithm

Algorithm Bayes_filter(Bel(x), d):
  η = 0
  If d is a perceptual data item z then
    For all x do
      Bel'(x) = P(z | x) Bel(x)
      η = η + Bel'(x)
    For all x do
      Bel'(x) = η⁻¹ Bel'(x)
  Else if d is an action data item u then
    For all x do
      Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
  Return Bel'(x)
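A minimal discrete Bayes filter in the spirit of the algorithm above, for a robot on a small 1-D grid. The grid size, sensor model, and motion model are hypothetical choices for illustration only.

```python
def measurement_update(belief, z, p_z_given_x):
    """Bel'(x) = eta * P(z | x) * Bel(x) for a perceptual data item z."""
    new_belief = [p_z_given_x(z, x) * b for x, b in enumerate(belief)]
    eta = 1.0 / sum(new_belief)
    return [eta * b for b in new_belief]

def prediction_update(belief, u, p_x_given_u_xprev):
    """Bel'(x) = sum over x' of P(x | u, x') * Bel(x') for an action data item u."""
    n = len(belief)
    return [sum(p_x_given_u_xprev(x, u, xp) * belief[xp] for xp in range(n))
            for x in range(n)]

def p_z_given_x(z, x):
    """Hypothetical sensor model: cells 1 and 3 contain doors."""
    at_door = x in (1, 3)
    if z == "door":
        return 0.8 if at_door else 0.2
    return 0.2 if at_door else 0.8

def p_x_given_u_xprev(x, u, xp):
    """Hypothetical motion model: move exactly u cells to the right on a ring of 5."""
    return 1.0 if x == (xp + u) % 5 else 0.0

belief = [0.2] * 5                                        # uniform prior over 5 cells
belief = measurement_update(belief, "door", p_z_given_x)  # perceptual data item
belief = prediction_update(belief, 1, p_x_given_u_xprev)  # action data item
print([round(b, 3) for b in belief])
```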

Markov Assumption
p(z_t | x_(0:t), z_(1:t-1), u_(1:t)) = p(z_t | x_t) and p(x_t | x_(0:t-1), z_(1:t-1), u_(1:t)) = p(x_t | x_(t-1), u_t).
Underlying assumptions: static world; independent noise; perfect model, no approximation errors.

Bayes Filters for Robot Localization

Bayes Filters are Familiar!
Kalman filters
Particle filters
Hidden Markov models
Dynamic Bayesian networks
Partially observable Markov decision processes (POMDPs)

Summary Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.