State Estimation: Probability, Bayes Filtering
Arunkumar Byravan
CSE 490R – Lecture 3
Interaction loop
Sense: Receive sensor data and estimate "state"
Plan: Generate long-term plans based on state & goal
Act: Apply actions to the robot
[Figure: the Sense → Plan → Act loop]
State
Markov assumption: the future is independent of the past given the present.
[Figure: Byron Boots – Statistical Robotics at GaTech]
State Estimation
Estimate "state" using sensor data & actions:
- Robot position and orientation
- Environment map
- Location of people, other robots, etc.
Indoor localization for mobile robots (given a map)
Fuse information from different sensors (LIDAR, wheel encoders)
Probabilistic filtering

Notes: Recall the sense → plan → act loop; sensing feeds state estimation. What is state? For a self-driving car it could be its own position/orientation in a map, detected people, lane markers, other cars, buildings, etc. Different types of sensors can be used: camera, LIDAR, IMU, etc. One main problem in state estimation is localization: estimating the robot's own state (position/orientation) in an environment. Outdoors, GPS is a good proxy that provides good estimates; indoors, we need to estimate this explicitly.
Why a probabilistic approach?
Explicitly represent uncertainty using the calculus of probability theory.
Discrete Random Variables
X denotes a random variable.
X can take on a countable number of values in {x1, x2, …, xn}.
p(X = xi), or p(xi), is the probability that the random variable X takes on value xi.
p(·) is called the probability mass function; it satisfies p(xi) ≥ 0 and Σi p(xi) = 1.
E.g., a fair six-sided die: p(xi) = 1/6 for each of the six outcomes.
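As a quick illustration, here is a minimal Python sketch of a discrete PMF using the die example (the dictionary representation is just one convenient choice, not something from the lecture):

```python
import random

# PMF of a fair six-sided die: p(x_i) = 1/6 for each face.
pmf = {face: 1.0 / 6.0 for face in range(1, 7)}

# A valid PMF is non-negative and sums to 1.
assert all(p >= 0.0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# Draw one sample according to the PMF.
sample = random.choices(list(pmf), weights=list(pmf.values()), k=1)[0]
print(sample)
```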
Continuous Random Variables
X takes on values in the continuum.
p(X = x), or p(x), is the probability density function; P(x ∈ (a, b)) = ∫ab p(x) dx.
E.g., the Gaussian PDF (next slide).
Gaussian PDF
p(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)), written x ~ N(μ, σ²), with mean μ and variance σ².
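A short sketch evaluating this density in Python (the function name is illustrative):

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

print(gaussian_pdf(0.0))  # peak value at the mean, ~0.3989
print(gaussian_pdf(1.0))  # one sigma from the mean, ~0.2420
```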
Joint and Conditional Probability
P(X = x and Y = y) = P(x, y)
If X and Y are independent, then P(x, y) = P(x) P(y).
P(x | y) is the probability of x given y:
P(x | y) = P(x, y) / P(y), so P(x, y) = P(x | y) P(y).
If X and Y are independent, then P(x | y) = P(x, y) / P(y) = P(x) P(y) / P(y) = P(x).

Notes: Think of two neighboring laser beams; a Venn diagram illustrates P(x | y).
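A quick numeric sanity check of these identities in Python, using a made-up joint table for two independent binary variables:

```python
# Joint distribution P(x, y) constructed to be independent:
# P(x, y) = P(x) * P(y).
p_x = {True: 0.5, False: 0.5}
p_y = {True: 0.6, False: 0.4}
joint = {(x, y): p_x[x] * p_y[y] for x in p_x for y in p_y}

# Conditional from the definition: P(x | y) = P(x, y) / P(y).
def conditional(x, y):
    return joint[(x, y)] / sum(joint[(xx, y)] for xx in p_x)

# Under independence, P(x | y) equals P(x) for every y.
for x in p_x:
    for y in p_y:
        assert abs(conditional(x, y) - p_x[x]) < 1e-12
```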
Law of Total Probability, Marginals
Discrete case: Σx P(x) = 1; P(x) = Σy P(x, y) = Σy P(x | y) P(y)
Continuous case: ∫ p(x) dx = 1; p(x) = ∫ p(x, y) dy = ∫ p(x | y) p(y) dy

Notes: Suppose x is a laser measurement and we want P(x). We don't have a good model for P(x) directly, but we have a pretty good one for P(x) given the distance to a wall. Then we just compute the weighted sum over all possible wall positions to get P(x).
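A sketch of that marginalization in Python; the wall-distance prior and the sensor model below are entirely hypothetical numbers, used only to show the weighted sum:

```python
# Hypothetical prior over wall distance d (meters).
p_wall = {1.0: 0.2, 2.0: 0.5, 3.0: 0.3}

def p_z_given_d(z, d):
    # Toy triangular sensor model peaked at the true distance (illustrative).
    return max(0.0, 1.0 - abs(z - d))

def p_z(z):
    # Law of total probability: P(z) = sum_d P(z | d) P(d).
    return sum(p_z_given_d(z, d) * p_wall[d] for d in p_wall)

print(p_z(2.0))  # marginal probability of the measurement z = 2.0
```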
Events
P(+x, +y)?  P(+x)?  P(−y OR +x)?  Independent?

X    Y    P
+x   +y   0.2
+x   −y   0.3
−x   +y   0.4
−x   −y   0.1

Answers: P(+x, +y) = 0.2;  P(+x) = 0.5;  P(−y OR +x) = P(−y) + P(+x) − P(+x, −y) = 0.4 + 0.5 − 0.3 = 0.6.
Independent? No: P(+x) P(+y) = 0.5 × 0.6 = 0.3 ≠ 0.2 = P(+x, +y).
Marginal Distributions
P(x) = Σy P(x, y)

X    Y    P
+x   +y   0.2
+x   −y   0.3
−x   +y   0.4
−x   −y   0.1

X    P
+x   0.5
−x   0.5

Y    P
+y   0.6
−y   0.4
Conditional Probabilities
P(+x | +y)?  P(−x | +y)?  P(−y | +x)?

X    Y    P
+x   +y   0.2
+x   −y   0.3
−x   +y   0.4
−x   −y   0.1

Answers: P(+x | +y) = 0.2 / 0.6 = 1/3;  P(−x | +y) = 0.4 / 0.6 = 2/3;  P(−y | +x) = 0.3 / 0.5 = 0.6.
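A small Python sketch that reproduces the marginals and conditionals above from the joint table on the slides:

```python
# Joint distribution P(x, y) from the slides, over binary X and Y.
joint = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3,
         ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

def marginal_x(x):
    return sum(p for (xx, yy), p in joint.items() if xx == x)

def marginal_y(y):
    return sum(p for (xx, yy), p in joint.items() if yy == y)

def cond_x_given_y(x, y):
    # P(x | y) = P(x, y) / P(y)
    return joint[(x, y)] / marginal_y(y)

print(marginal_x("+x"))            # P(+x) = 0.5
print(marginal_y("+y"))            # P(+y) = 0.6
print(cond_x_given_y("+x", "+y"))  # P(+x | +y) = 1/3
print(cond_x_given_y("-x", "+y"))  # P(-x | +y) = 2/3
```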
Bayes Formula
P(x | y) = P(x, y) / P(y) = P(y | x) P(x) / P(y)
posterior = (likelihood × prior) / evidence
Often causal knowledge (P(y | x)) is easier to obtain than diagnostic knowledge (P(x | y)). Bayes rule allows us to use causal knowledge.
Simple Example of State Estimation
Suppose a robot obtains a measurement z of a door.
What is P(open | z)?
Example
Assume P(z | open) = 0.6, P(z | ¬open) = 0.3, and prior P(open) = P(¬open) = 0.5.
P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open))
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3
z raises the probability that the door is open.
Exercise: do the same for P(¬open | z) = (0.3 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 1/3.
Normalization
P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x), with η = 1 / P(y) = 1 / Σx P(y | x) P(x)
Algorithm:
  For all x: aux(x | y) = P(y | x) P(x)
  η = 1 / Σx aux(x | y)
  For all x: P(x | y) = η · aux(x | y)
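A minimal sketch of this normalization algorithm in Python, applied to the door example (function and variable names are illustrative):

```python
def bayes_update(prior, likelihood):
    """Posterior P(x | y) from prior P(x) and likelihood P(y | x)."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # unnormalized
    eta = 1.0 / sum(aux.values())                       # normalizer
    return {x: eta * aux[x] for x in aux}

prior = {"open": 0.5, "closed": 0.5}
likelihood = {"open": 0.6, "closed": 0.3}  # P(z | x) from the example
print(bayes_update(prior, likelihood))     # open: 2/3, closed: 1/3
```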
Conditioning
Bayes rule with background knowledge z:
P(x | y, z) = P(x, y, z) / P(y, z)
            = P(y | x, z) P(x, z) / P(y, z)
            = P(y | x, z) P(x | z) P(z) / (P(y | z) P(z))
            = P(y | x, z) P(x | z) / P(y | z)

Conditioned total probability:
P(x | y) = P(x, y) / P(y)
         = Σz P(x, y, z) / P(y)
         = Σz P(x | y, z) P(y, z) / P(y)
         = Σz P(x | y, z) P(z | y) P(y) / P(y)
         = Σz P(x | y, z) P(z | y)
Conditioning
Bayes rule with background knowledge:
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Total probability with background knowledge:
P(x | z) = ∫ P(x | y, z) P(y | z) dy
To verify these, just replace all conditionals p(x | y) by their p(x, y) / p(y) form, as on the previous slide.
Conditional Independence
P(x, y | z) = P(x | z) P(y | z)
Equivalent to: P(x | z) = P(x | y, z)  and  P(y | z) = P(y | x, z)

Notes: Think of the two neighboring laser beams again: independent given the state, but not independent unconditionally. Proof as homework.
Simple Example of State Estimation
Suppose our robot obtains another observation z2.
What is P(open | z1, z2)?
Recursive Bayesian Updating
P(x | z1, …, zn) = P(zn | x, z1, …, zn−1) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
Markov assumption: zn is conditionally independent of z1, …, zn−1 given x. Hence:
P(x | z1, …, zn) = η P(zn | x) P(x | z1, …, zn−1)
The posterior after n − 1 measurements becomes the prior for the n-th.
Example: Second Measurement
Applying the recursive update with P(open | z1) as the new prior, z2 lowers the probability that the door is open.
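A sketch of the recursive update in Python, continuing the door example. The z1 model comes from the earlier slide; the z2 likelihoods (P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6) are assumptions chosen so that z2 lowers the probability, as the slide states:

```python
def bayes_update(prior, likelihood):
    """One recursive Bayes step: posterior proportional to likelihood * prior."""
    unnorm = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

belief = {"open": 0.5, "closed": 0.5}  # prior
z1 = {"open": 0.6, "closed": 0.3}      # P(z1 | x), from the slides
z2 = {"open": 0.5, "closed": 0.6}      # P(z2 | x), assumed for illustration

belief = bayes_update(belief, z1)  # open: 2/3
belief = bayes_update(belief, z2)  # open drops to 0.625 with these numbers
print(belief)
```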
Bayes Filters: Framework
Given:
- Stream of observations z and action data u: d = {u1, z1, …, ut, zt}
- Sensor model P(z | x)
- Action model P(x | u, x′)
- Prior probability of the system state P(x)
Wanted:
- Estimate of the state X of a dynamical system
- The posterior of the state is also called the belief: Bel(xt) = P(xt | u1, z1, …, ut, zt)
Bayes Filters
z = observation, u = action, x = state

Bel(xt) = P(xt | u1, z1, …, ut, zt)
  (Bayes)       = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)
  (Markov)      = η P(zt | xt) P(xt | u1, z1, …, ut)
  (Total prob.) = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1
  (Markov)      = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
Bayes Filter Algorithm
Algorithm Bayes_filter(Bel(x), d):
  η = 0
  If d is a perceptual data item z then
    For all x do
      Bel′(x) = P(z | x) Bel(x)
      η = η + Bel′(x)
    For all x do
      Bel′(x) = η⁻¹ Bel′(x)
  Else if d is an action data item u then
    For all x do
      Bel′(x) = ∫ P(x | u, x′) Bel(x′) dx′
  Return Bel′(x)
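A runnable Python sketch of this algorithm for a discrete state space; the two-state door world, its sensor model, and its action model below are hypothetical illustrations, not from the lecture:

```python
def bayes_filter(bel, d, sensor_model=None, action_model=None):
    """One Bayes filter step over a discrete state space.

    bel: dict state -> probability.
    d: ("z", observation) or ("u", action).
    sensor_model(z, x) = P(z | x); action_model(x, u, xp) = P(x | u, x').
    """
    kind, value = d
    if kind == "z":                                   # perceptual data item
        new_bel = {x: sensor_model(value, x) * bel[x] for x in bel}
        eta = sum(new_bel.values())
        return {x: p / eta for x, p in new_bel.items()}
    else:                                             # action data item
        return {x: sum(action_model(x, value, xp) * bel[xp] for xp in bel)
                for x in bel}

prior = {"open": 0.5, "closed": 0.5}

def sensor_model(z, x):
    # Hypothetical: the sensor reports the door state, with noise.
    table = {("sense_open", "open"): 0.6, ("sense_open", "closed"): 0.3,
             ("sense_closed", "open"): 0.4, ("sense_closed", "closed"): 0.7}
    return table[(z, x)]

def action_model(x, u, xp):
    # Hypothetical: pushing opens a closed door with probability 0.8.
    if u == "push":
        table = {("open", "open"): 1.0, ("open", "closed"): 0.8,
                 ("closed", "open"): 0.0, ("closed", "closed"): 0.2}
        return table[(x, xp)]
    return 1.0 if x == xp else 0.0  # "do nothing" keeps the state

bel = bayes_filter(prior, ("z", "sense_open"), sensor_model=sensor_model)
bel = bayes_filter(bel, ("u", "push"), action_model=action_model)
print(bel)  # open: 14/15, closed: 1/15 with these numbers
```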
Markov Assumption
p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)
p(xt | x0:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)
Underlying assumptions:
- Static world
- Independent noise
- Perfect model, no approximation errors
Bayes Filters for Robot Localization
Bayes Filters are Familiar!
- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes (POMDPs)
Summary
- Bayes rule allows us to compute probabilities that are hard to assess otherwise.
- Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
- Bayes filters are a probabilistic tool for estimating the state of dynamic systems.