EE-565: Mobile Robotics Non-Parametric Filters Module 2, Lecture 5


EE-565: Mobile Robotics, Non-Parametric Filters (Module 2, Lecture 5). Dr. Abubakr Muhammad, Assistant Professor, Electrical Engineering, LUMS; Director, CYPHYNETS Lab. http://cyphynets.lums.edu.pk

Resources. Course material: Stanford CS-226 (Thrun) [slides]; KAUST ME-410 (Abubakr, 2011); LUMS EE-662 (Abubakr, 2013); http://cyphynets.lums.edu.pk/index.php/Teaching. Textbooks: Probabilistic Robotics by Thrun et al.; Principles of Robot Motion by Choset et al.

Part 1. Bayesian philosophy for state estimation

State Estimation Problems. What is a state? Inferring “hidden states” from observations. What if the observations are noisy? The problem is more challenging if the state is also dynamic, and even more challenging if the state dynamics are themselves noisy.

State Estimation Example: Localization. Definition: calculation of a mobile robot’s position / orientation relative to an external reference system; usually world coordinates serve as the reference. It is a basic requirement for several robot functions: approaching target points and path following; avoiding obstacles and dead-ends; autonomous environment mapping. Requires accurate maps!

State Estimation Example: Mapping. Objective: store information beyond the sensory horizon. The map can be provided a priori or built online. Types: world-centric maps for navigation and path planning; robot-centric maps for pilot tasks (e.g. collision avoidance). Problem: inaccuracy due to the sensor systems. Requires accurate localization!

Probabilistic Graphical Models Estimate robot path and/or map!

Simple Example of State Estimation. Suppose a robot obtains a measurement z. What is P(open|z)?

Bayes Formula
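The formula itself did not survive into this transcript; presumably it is the standard Bayes rule,

P(x | z) = P(z | x) P(x) / P(z) = (likelihood · prior) / evidence.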

Causal vs. Diagnostic Reasoning P(open|z) is diagnostic. P(z|open) is causal. Often causal knowledge is easier to obtain. Bayes rule allows us to use causal knowledge:
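Applied to the door example (the equation is missing from the transcript; this is the standard form):

P(open | z) = P(z | open) P(open) / P(z)
            = P(z | open) P(open) / ( P(z | open) P(open) + P(z | ¬open) P(¬open) )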

Example. P(z|open) = 0.6, P(z|¬open) = 0.3, P(open) = P(¬open) = 0.5. The measurement z raises the probability that the door is open.
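Plugging in the numbers above (a reconstruction of the missing calculation):

P(open | z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.30 / 0.45 = 2/3 ≈ 0.67 > 0.5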

Combining Evidence Suppose our robot obtains another observation z2. How can we integrate this new information? More generally, how can we estimate P(x| z1...zn )?

Recursive Bayesian Updating Markov assumption: zn is independent of z1,...,zn-1 if we know x.
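The recursive update this slide refers to (standard form, since the equation is missing from the transcript):

P(x | z_1, ..., z_n) = η_n P(z_n | x) P(x | z_1, ..., z_{n-1}),

where η_n is a normalizing constant.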

Example: Second Measurement. P(z2|open) = 0.5, P(z2|¬open) = 0.6, P(open|z1) = 2/3. The measurement z2 lowers the probability that the door is open.
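Working the numbers through (a reconstruction of the missing calculation):

P(open | z_2, z_1) = P(z_2 | open) P(open | z_1) / ( P(z_2 | open) P(open | z_1) + P(z_2 | ¬open) P(¬open | z_1) )
                   = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8 = 0.625 < 2/3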

Probabilistic Graphical Models Sensing:

Typical Measurement Errors of Range Measurements: beams reflected by obstacles, beams reflected by persons or caused by crosstalk, random measurements, maximum-range measurements.

Raw Sensor Data: measured distances (sonar and laser) for an expected distance of 300 cm.

Approximation Results: fitted measurement models for laser and sonar (expected distances of 300 cm and 400 cm).

Actions. Often the world is dynamic, since actions carried out by the robot, actions carried out by other agents, or just the time passing by change the world. How can we incorporate such actions?

Typical Actions The robot turns its wheels to move The robot uses its manipulator to grasp an object Plants grow over time… Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.

Modeling Actions. To incorporate the outcome of an action u into the current “belief”, we use the conditional pdf P(x|u,x’). This term specifies the probability that executing u changes the state from x’ to x.

Probabilistic Graphical Models Action:

Odometry Model. The robot moves from pose (x̄, ȳ, θ̄) to (x̄′, ȳ′, θ̄′). Odometry information: u = (δ_rot1, δ_trans, δ_rot2).

Effect of Distribution Type

Example: Closing the door

State Transitions P(x|u,x’) for u = “close door”: If the door is open, the action “close door” succeeds in 90% of all cases.
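Written out explicitly (following the standard door example in Thrun et al.; the table is not in the transcript):

P(closed | u = close, x’ = open) = 0.9,   P(open | u = close, x’ = open) = 0.1
P(closed | u = close, x’ = closed) = 1,   P(open | u = close, x’ = closed) = 0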

Integrating the Outcome of Actions Continuous case: Discrete case:
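The equations referred to here, in their standard form (the originals did not survive the transcript):

Continuous case:  P(x | u) = ∫ P(x | u, x’) P(x’) dx’
Discrete case:    P(x | u) = Σ_{x’} P(x | u, x’) P(x’)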

Example: The Resulting Belief
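Carrying the door example through (a reconstruction that assumes the prior Bel(open) = 5/8, Bel(closed) = 3/8 from the measurement example above):

Bel’(closed) = 1 · 3/8 + 0.9 · 5/8 = 15/16
Bel’(open)   = 0 · 3/8 + 0.1 · 5/8 = 1/16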

Bayes Filters: Framework Given: Stream of observations z and action data u: Sensor model P(z|x). Action model P(x|u,x’). Prior probability of the system state P(x). Wanted: Estimate of the state X of a dynamical system. The posterior of the state is also called Belief:
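In symbols (standard notation; the formulas are missing from the transcript): the data stream is d_t = { u_1, z_1, ..., u_t, z_t } and the belief is

Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t).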

Dynamic Bayesian Network for Controls, States, and Sensations

Markov Assumption. Underlying assumptions: static world, independent noise, perfect model (no approximation errors).
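Stated as equations (standard form; the formulas are missing from the transcript):

p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)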

Bayes Filters. Notation: z = observation, u = action, x = state. Derivation of the recursive update, with the rule justifying each step in parentheses:

Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
         = η P(z_t | x_t, u_1, z_1, ..., u_t) P(x_t | u_1, z_1, ..., u_t)                                    (Bayes)
         = η P(z_t | x_t) P(x_t | u_1, z_1, ..., u_t)                                                        (Markov)
         = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, ..., u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1}    (Total prob.)
         = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1}                   (Markov)
         = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., z_{t-1}) dx_{t-1}               (Markov)
         = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Bayes Filter Algorithm.
Algorithm Bayes_filter(Bel(x), d):
  η = 0
  If d is a perceptual data item z then
    For all x do
      Bel’(x) = P(z|x) Bel(x)
      η = η + Bel’(x)
    For all x do
      Bel’(x) = η⁻¹ Bel’(x)
  Else if d is an action data item u then
    For all x do
      Bel’(x) = ∫ P(x|u,x’) Bel(x’) dx’
  Return Bel’(x)

Bayes Filters are Familiar! Kalman filters Particle filters Hidden Markov models Dynamic Bayesian networks Partially Observable Markov Decision Processes (POMDPs)

Bayes Filters in Localization

Summary so far …. Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

Parametric vs. Non-parametric. Parametric approach: represent distributions by their statistics or parameters (mean, variance). Non-parametric approach: deal with distributions directly. Remember: a Gaussian distribution is completely parameterized by two numbers (mean and variance), and a Gaussian remains Gaussian when mapped linearly.

Linearization

Linearization (Cont.)

Bayes Filters in Localization

Histogram = Piecewise Constant

Piecewise Constant Representation

Discrete Bayes Filter Algorithm.
Algorithm Discrete_Bayes_filter(Bel(x), d):
  η = 0
  If d is a perceptual data item z then
    For all x do
      Bel’(x) = P(z|x) Bel(x)
      η = η + Bel’(x)
    For all x do
      Bel’(x) = η⁻¹ Bel’(x)
  Else if d is an action data item u then
    For all x do
      Bel’(x) = Σ_{x’} P(x|u,x’) Bel(x’)
  Return Bel’(x)
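A minimal Python/numpy sketch of one such update over a discretized belief (an illustration only, not code from the lecture; the argument names p_z_given_x and p_x_given_u_xprime are hypothetical stand-ins for the sensor likelihood vector and the transition matrix):

import numpy as np

def discrete_bayes_filter_update(bel, p_z_given_x=None, p_x_given_u_xprime=None):
    # Perceptual update: multiply each cell by the likelihood P(z|x), then normalize.
    if p_z_given_x is not None:
        new_bel = p_z_given_x * bel
        return new_bel / new_bel.sum()
    # Action update: for each x, sum P(x|u,x') Bel(x') over all previous cells x'.
    return p_x_given_u_xprime @ bel

# Door example from the earlier slides: states (open, closed), uniform prior.
bel = np.array([0.5, 0.5])
bel = discrete_bayes_filter_update(bel, p_z_given_x=np.array([0.6, 0.3]))  # -> [2/3, 1/3]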

Implementation (1). To update the belief upon sensory input and to carry out the normalization, one has to iterate over all cells of the grid. Especially when the belief is peaked (which is generally the case during position tracking), one wants to avoid updating irrelevant parts of the state space. One approach is not to update entire sub-spaces of the state space. This, however, requires monitoring whether the robot has become de-localized. To achieve this, one can consider the likelihood of the observations given the active components of the state space.

Implementation (2). To efficiently update the belief upon robot motions, one typically assumes a bounded Gaussian model for the motion uncertainty. This reduces the update cost from O(n²) to O(n), where n is the number of states. The update can also be realized by shifting the data in the grid according to the measured motion. In a second step, the grid is then convolved using a separable Gaussian kernel. Two-dimensional example: the 3×3 kernel with entries 1/16, 1/8 and 1/4 can be applied as two passes of the 1-D kernel [1/4, 1/2, 1/4], one per axis, which requires fewer arithmetic operations and is easier to implement.
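A minimal numpy sketch of this shift-then-blur motion update (illustrative only; the function name, the integer cell offset shift = (rows, cols), and the wrap-around border handling via np.roll are assumptions):

import numpy as np

def grid_motion_update(bel, shift, kernel=(0.25, 0.5, 0.25)):
    # Step 1: shift the 2-D grid belief according to the measured motion.
    # (np.roll wraps around at the borders; a full implementation would pad instead.)
    shifted = np.roll(bel, shift, axis=(0, 1))
    # Step 2: blur with a separable kernel, one 1-D pass per axis. The outer product
    # of [1/4, 1/2, 1/4] with itself is the 3x3 kernel with entries 1/16, 1/8, 1/4.
    k = np.asarray(kernel)
    blurred = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'), 1, shifted)
    blurred = np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, blurred)
    return blurred / blurred.sum()   # re-normalize the belief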

Markov Localization in Grid Map

Grid-based Localization

Mathematical Description. Set of weighted samples S = { ⟨x^[i], w^[i]⟩ : i = 1, ..., N }, where x^[i] is a state hypothesis and w^[i] its importance weight. The samples represent the posterior: p(x) ≈ Σ_i w^[i] δ_{x^[i]}(x).

Function Approximation. Particle sets can be used to approximate functions. The more particles fall into an interval, the higher the probability of that interval. How do we draw samples from a function/distribution?

Rejection Sampling. Assume that f(x) < 1 for all x. Sample x from a uniform distribution, sample c from [0,1], and keep the sample if f(x) > c; otherwise reject it.
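A short Python sketch of exactly this procedure (assuming, as on the slide, that f(x) < 1 everywhere; the bounded sampling interval x_range is an added assumption):

import random

def rejection_sample(f, n, x_range=(0.0, 1.0)):
    samples = []
    lo, hi = x_range
    while len(samples) < n:
        x = random.uniform(lo, hi)      # sample x from a uniform distribution
        c = random.uniform(0.0, 1.0)    # sample c from [0, 1]
        if f(x) > c:                    # keep the sample, otherwise reject it
            samples.append(x)
    return samples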

Importance Sampling Principle. We can even use a different distribution g to generate samples from f. By introducing an importance weight w, we can account for the “differences between g and f”: w = f / g. f is often called the target and g the proposal. Pre-condition: f(x) > 0 ⇒ g(x) > 0.
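A small Python sketch of the principle, used here to estimate an expectation under the target f by sampling from a proposal g (the function names are illustrative, not from the lecture):

import random

def importance_sampling_estimate(h, f, g_sampler, g_pdf, n=10000):
    total_w, total_wh = 0.0, 0.0
    for _ in range(n):
        x = g_sampler()              # draw a sample from the proposal g
        w = f(x) / g_pdf(x)          # importance weight: target / proposal
        total_w += w
        total_wh += w * h(x)
    return total_wh / total_w        # self-normalized estimate of E_f[h(x)]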

Importance Sampling with Resampling: Landmark Detection Example

Distributions

Distributions. Wanted: samples distributed according to p(x | z1, z2, z3).

This is Easy! We can draw samples from p(x|zl) by adding noise to the detection parameters.

Importance Sampling

Importance Sampling with Resampling Weighted samples After resampling

Particle Filters

Sensor Information: Importance Sampling

Robot Motion

Sensor Information: Importance Sampling

Robot Motion

Particle Filter Algorithm. 1. Sample the next generation of particles using the proposal distribution. 2. Compute the importance weights: weight = target distribution / proposal distribution. 3. Resampling: “replace unlikely samples by more likely ones”. [Derivation of the MCL equations in the book]

Particle Filter Algorithm

Particle Filter Algorithm. Draw x_t^i by first drawing x_{t-1}^i from Bel(x_{t-1}) and then drawing x_t^i from p(x_t | x_{t-1}^i, u_{t-1}). Importance factor for x_t^i:
w_t^i = target distribution / proposal distribution
      = η p(z_t | x_t^i) p(x_t | x_{t-1}^i, u_{t-1}) Bel(x_{t-1}) / ( p(x_t | x_{t-1}^i, u_{t-1}) Bel(x_{t-1}) )
      ∝ p(z_t | x_t^i)
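A minimal Python sketch of one prediction / correction / resampling cycle of this algorithm (illustrative only; sample_motion_model and measurement_likelihood are hypothetical stand-ins for p(x_t | x_{t-1}, u) and p(z | x)):

import random

def particle_filter_step(particles, u, z, sample_motion_model, measurement_likelihood):
    # 1. Prediction: sample the next generation from the proposal p(x_t | x_{t-1}, u).
    predicted = [sample_motion_model(x, u) for x in particles]
    # 2. Correction: importance weight = target / proposal, proportional to p(z | x).
    weights = [measurement_likelihood(z, x) for x in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resampling: replace unlikely samples by more likely ones (with replacement).
    return random.choices(predicted, weights=weights, k=len(predicted))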

Resampling. Given: a set S of weighted samples. Wanted: a random sample, where the probability of drawing x_i is given by w_i. Typically done n times with replacement to generate the new sample set S’.

Resampling schemes (illustrated on the slide by two wheel diagrams over the weights w_1, ..., w_n): roulette-wheel sampling, which draws each sample by binary search over the cumulative weights (O(n log n) overall); and stochastic universal sampling, also called systematic resampling, which has linear time complexity, is easy to implement and has low variance.

Resampling Algorithm (also called stochastic universal sampling).
Algorithm systematic_resampling(S, n):
  S’ = ∅, c_1 = w^[1]
  For i = 2, ..., n do                    (generate CDF)
    c_i = c_{i-1} + w^[i]
  u_1 ~ U(0, 1/n], i = 1                  (initialize threshold)
  For j = 1, ..., n do                    (draw samples)
    While ( u_j > c_i )                   (skip until next threshold reached)
      i = i + 1
    S’ = S’ ∪ { ⟨x^[i], 1/n⟩ }            (insert)
    u_{j+1} = u_j + 1/n                   (increment threshold)
  Return S’
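The same procedure as a Python sketch (illustrative; it assumes the particles and their normalized weights are passed as parallel lists):

import random

def systematic_resampling(particles, weights):
    n = len(particles)
    # Generate the cumulative distribution (CDF) of the weights.
    cdf, c = [], 0.0
    for w in weights:
        c += w
        cdf.append(c)
    u = random.uniform(0.0, 1.0 / n)    # initialize the first threshold
    i, resampled = 0, []
    for _ in range(n):                  # draw n samples
        while u > cdf[i] and i < n - 1: # skip until the next threshold is reached
            i += 1
        resampled.append(particles[i])  # insert the selected particle
        u += 1.0 / n                    # increment the threshold by 1/n
    return resampled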

Mobile Robot Localization Each particle is a potential pose of the robot Proposal distribution is the motion model of the robot (prediction step) The observation model is used to compute the importance weight (correction step)

Motion Model Start

Proximity Sensor Model Sonar sensor Laser sensor

Initial Distribution

After Incorporating Ten Ultrasound Scans

After Incorporating 65 Ultrasound Scans

Estimated Path

Using Ceiling Maps for Localization [Dellaert et al. 99]

Vision-based Localization: measurement z, predicted measurement h(x), likelihood P(z|x).

Under a Light Measurement z: P(z|x):

Next to a Light Measurement z: P(z|x):

Elsewhere Measurement z: P(z|x):

Global Localization Using Vision

Summary – Particle Filters. Particle filters are an implementation of recursive Bayesian filtering. They represent the posterior by a set of weighted samples and can model non-Gaussian distributions. The proposal distribution is used to draw new samples, and the weights account for the differences between the proposal and the target. Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter.

Summary – Monte Carlo Localization In the context of localization, the particles are propagated according to the motion model. They are then weighted according to the likelihood of the observations. In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.