Localization and Mapping (3)

Advanced Mobile Robotics: Localization and Mapping (3)
Dr. John (Jizhong) Xiao, Department of Electrical Engineering, City College of New York, jxiao@ccny.cuny.edu

Outline
Localization Methods: Markov Localization (review), Kalman Filter Localization, Particle Filter (Monte Carlo Localization)
Current Research Topics: SLAM, Multi-robot Localization
Project Exercise

Review: Getting Started
Fundamental problems in providing a mobile robot with autonomous capabilities:
How do we create an environmental map with imperfect sensors? (mapping)
How can a robot tell where it is on a map? (localization)
What if the robot is lost and has no map? (SLAM)

Representation of the Environment
Continuous metric: (x, y, θ)
Discrete metric: metric grid
Discrete topological: topological graph

Localization: Where am I?
Perception; odometry (dead reckoning); localization based on external sensors, beacons, or landmarks; probabilistic map-based localization.

Localization Methods
Markov Localization: represent the robot's belief by a probability distribution over possible positions, and use Bayes' rule and convolution to update the belief whenever the robot senses or moves. Markov assumption: past and future data are independent given the current state.
Kalman Filtering: pose the localization problem as a sensor fusion problem. Assumption: Gaussian distributions.
Particle Filtering: Monte Carlo method.
SLAM (simultaneous localization and mapping).
Multi-robot localization.

Markov Localization
Applying probability theory to robot localization. Markov localization uses an explicit, discrete representation of the probability of all positions in the state space. This is usually done by representing the environment as a grid or a topological graph with a finite number of possible states (positions). During each update, the probability of every state (element) of the entire space is updated.

Markov Localization Example
Assume the robot position is one-dimensional. The robot is placed somewhere in the environment but is not told its location. The robot queries its sensors and finds out it is next to a door.

Markov Localization Example
The robot moves one meter forward. To account for inherent noise in robot motion, the new belief is smoother. The robot queries its sensors and again finds itself next to a door.
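This 1D door scenario can be made concrete in a few lines. The sketch below is not from the slides: the corridor map, the sensor probabilities p_hit and p_false, and the motion noise are made-up illustrations, and the corridor wraps around for simplicity.

```python
import numpy as np

# Hypothetical 1D corridor, one cell per position, True where a door is
doors = np.array([False, True, False, False, True, False, False, False, True, False])
bel = np.full(len(doors), 1.0 / len(doors))   # uniform prior: location unknown

def see_door(bel, doors, p_hit=0.6, p_false=0.2):
    """SEE update: weight each cell by the likelihood of sensing a door there, then normalize."""
    bel = bel * np.where(doors, p_hit, p_false)
    return bel / bel.sum()

def move(bel, step=1, noise=0.1):
    """ACT update: shift the belief by `step` cells and blur it to model motion noise."""
    return ((1 - 2 * noise) * np.roll(bel, step)
            + noise * np.roll(bel, step - 1)
            + noise * np.roll(bel, step + 1))

bel = see_door(bel, doors)   # senses a door: belief peaks at the door cells
bel = move(bel, step=1)      # moves one cell forward: belief shifts and gets smoother
bel = see_door(bel, doors)   # senses a door again: belief concentrates further
print(bel.round(3))
```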

Basic Notation
Bel(Lt = l) is the probability (density) that the robot assigns to the possibility that its location at time t is l. The belief is updated in response to two different types of events: sensor readings (SEE) and odometry data (ACT).

Notation
Bayes' rule: p(A|B) = p(B|A) p(A) / p(B). Goal: compute Bel(Lt = l), the posterior over locations given all sensor readings and odometry data.

Markov assumption (or static world assumption)

Markov Localization

Update Phase (SEE)

Update Phase (SEE)

Prediction Phase (ACT)
Map from a belief state and an action to a new belief state (ACT), summing over all possible ways in which the robot may have reached the new location l.
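The update equations on these slides were images in the original; for reference, the standard forms of the two phases, written in the Bel notation introduced above (following Thrun et al., Probabilistic Robotics), are:

```latex
% SEE (perception) update: reweight the belief by the measurement likelihood,
% with \alpha a normalizing constant
\mathrm{Bel}(L_t = l) \;\leftarrow\; \alpha \, p(s_t \mid l) \, \mathrm{Bel}(L_t = l)

% ACT (prediction) update: sum over all ways the robot may have reached l
\mathrm{Bel}(L_t = l) \;=\; \sum_{l'} p(l \mid l', a_{t-1}) \, \mathrm{Bel}(L_{t-1} = l')
```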

Summary

Markov Localization: Case Study The Dervish Robot Topological Localization with Sonar

Markov Localization: Case Study Topological map of office-type environment

Markov Localization: Case Study
Update of the belief state for position n given the percept-pair i:
p(n|i): new likelihood of being in position n
p(n): current belief state
p(i|n): probability of seeing i in n (see table)
No action update! (no encoder installed). However, the robot is moving, and therefore we can apply a combination of action and perception update. t-i is used instead of t-1 because the topological distance between n' and n can vary depending on the specific topological map.

Markov Localization: Case Study
The likelihood is calculated by multiplying the probability of generating perceptual event i at position n by the probability of having failed to generate any perceptual event at all nodes between n' and n.

Markov Localization: Case Study
Example calculation. Assume that the robot has two nonzero belief states, p(1-2) = 1.0 and p(2-3) = 0.2,* and that it is facing east with certainty. State 2-3 will potentially progress to states 3, 3-4, and 4. States 3 and 3-4 can be eliminated because the likelihood of detecting an open door there is zero. The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the likelihood of detecting nothing at node 3, and (b) the likelihood of detecting a hallway on the left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting nothing at node 3-4 is 1.0.) This leads to: 0.2 × [0.6 × 0.4 + 0.4 × 0.05] × 0.7 × [0.9 × 0.1] → p(4) = 0.003. A similar calculation for progress from 1-2 gives p(2) = 0.3.
* Note that the probabilities do not sum to one. For simplicity, normalization was avoided in this example.
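For the record, the slide's arithmetic checks out:

```python
# Dervish example: p(4) = p(2-3) * [likelihood of detecting nothing at node 3]
#                         * [likelihood of hallway-left and door-right at node 4]
p4 = 0.2 * (0.6 * 0.4 + 0.4 * 0.05) * 0.7 * (0.9 * 0.1)
print(round(p4, 3))   # 0.003, as stated on the slide
```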

Kalman Filter Localization

Introduction to Kalman Filter (1)
Two measurements; weighted least squares; finding the minimum error. After some calculation and rearrangement, taking the weights from the Gaussian probability density function (the inverse variances), we get the best estimate for the robot position.
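As a concrete illustration (a sketch, not taken from the slides; the variable names are mine), weighted least squares with inverse-variance weights reduces to the familiar fusion formula for two scalar measurements:

```python
def fuse(q1, var1, q2, var2):
    """Fuse two measurements of the same quantity by weighted least squares.
    The weights are the inverse variances, so the more certain reading dominates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    q_hat = (w1 * q1 + w2 * q2) / (w1 + w2)   # best estimate
    var_hat = 1.0 / (w1 + w2)                 # fused variance, smaller than either input
    return q_hat, var_hat

# Example: two range readings of the same wall
print(fuse(2.0, 0.04, 2.3, 0.09))             # estimate lies closer to the 2.0 reading
```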

Introduction to Kalman Filter (2)
In Kalman filter notation: the best estimate of the state at time k+1 is equal to the best prediction of its value before the new measurement is taken, plus a weighting (gain) times the difference between the new measurement and the best prediction at time k.

Introduction to Kalman Filter (3)
Dynamic prediction (robot moving): u = velocity, w = noise, motion model. Combining fusion and dynamic prediction gives the full filter.
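Putting the dynamic prediction and the fusion step together gives the scalar Kalman filter. A minimal sketch with illustrative numbers (not from the slides):

```python
def kf_1d(x, var, u, var_motion, z, var_sensor, dt=1.0):
    """One scalar Kalman filter cycle: predict with velocity u, then fuse measurement z."""
    # Dynamic prediction: the estimate moves by u*dt, the uncertainty grows by the motion noise
    x_pred = x + u * dt
    var_pred = var + var_motion
    # Fusion: the Kalman gain weights the measurement against the prediction
    k = var_pred / (var_pred + var_sensor)
    x_new = x_pred + k * (z - x_pred)
    var_new = (1.0 - k) * var_pred
    return x_new, var_new

x, var = kf_1d(x=0.0, var=1.0, u=1.0, var_motion=0.25, z=1.2, var_sensor=0.1)
print(x, var)
```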

Kalman Filter for Mobile Robot Localization Five steps: 1) Position Prediction, 2) Observation, 3) Measurement prediction, 4) Matching, 5) Estimation

Kalman Filter for Mobile Robot Localization: Robot Position Prediction
In the first step, the robot's position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k). f: odometry function.

Robot Position Prediction: Example Odometry
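The odometry function f and its error propagation were given as equations on the slide. The sketch below follows the common differential-drive formulation (as in Siegwart's textbook, which this lecture draws on); the wheelbase b and the noise constants kr, kl are parameters you would supply, and the values shown are illustrative.

```python
import numpy as np

def predict(p, P, ds_r, ds_l, b, kr=0.01, kl=0.01):
    """EKF position prediction from wheel displacements ds_r, ds_l (wheelbase b).
    p = [x, y, theta] pose, P = 3x3 pose covariance."""
    x, y, th = p
    ds = (ds_r + ds_l) / 2.0            # traveled distance
    dth = (ds_r - ds_l) / b             # change in heading
    a = th + dth / 2.0
    p_new = np.array([x + ds * np.cos(a), y + ds * np.sin(a), th + dth])
    # Jacobian of f with respect to the pose
    Fp = np.array([[1.0, 0.0, -ds * np.sin(a)],
                   [0.0, 1.0,  ds * np.cos(a)],
                   [0.0, 0.0,  1.0]])
    # Jacobian of f with respect to the wheel displacements
    Fu = np.array([[0.5 * np.cos(a) - ds / (2 * b) * np.sin(a), 0.5 * np.cos(a) + ds / (2 * b) * np.sin(a)],
                   [0.5 * np.sin(a) + ds / (2 * b) * np.cos(a), 0.5 * np.sin(a) - ds / (2 * b) * np.cos(a)],
                   [1.0 / b, -1.0 / b]])
    Q = np.diag([kr * abs(ds_r), kl * abs(ds_l)])   # motion noise grows with distance
    P_new = Fp @ P @ Fp.T + Fu @ Q @ Fu.T           # error propagation
    return p_new, P_new
```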

Kalman Filter for Mobile Robot Localization: Observation
The second step is to obtain the observation Z(k+1) (measurements) from the robot's sensors at the new location at time k+1. The observation usually consists of a set of n0 single observations zj(k+1) extracted from the different sensor signals. It can represent raw data scans as well as features like lines, doors, or any kind of landmarks. The parameters of the targets are usually observed in the sensor frame {S}; therefore, the observations have to be transformed to the world frame {W}, or the measurement predictions have to be transformed to the sensor frame {S}. This transformation is specified in the function hi (seen later).

Observation: Example
Raw data of laser scanner → extracted lines → extracted lines in model space (sensor/robot frame).

Kalman Filter for Mobile Robot Localization: Measurement Prediction
In the next step we use the predicted robot position and the map M(k) to generate multiple predicted observations zt. They have to be transformed into the sensor frame. We can now define the measurement prediction as the set containing all ni predicted observations. The function hi is mainly the coordinate transformation between the world frame and the sensor frame.

Measurement Prediction: Example For prediction, only the walls that are in the field of view of the robot are selected. This is done by linking the individual lines to the nodes of the path

Measurement Prediction: Example
The generated measurement predictions have to be transformed to the robot frame {R}. According to the figure in the previous slide, the transformation is given by the corresponding coordinate change, together with its Jacobian.

Kalman Filter for Mobile Robot Localization: Matching
Assignment of observations zj(k+1) (gained by the sensors) to the targets zt (stored in the map). For each measurement prediction for which a corresponding observation is found, we calculate the innovation and its innovation covariance, found by applying the error propagation law. The validity of the correspondence between measurement and prediction can, e.g., be evaluated through the Mahalanobis distance.
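A minimal sketch of the matching test; the function and the gate value g are illustrative, not the lecture's exact notation:

```python
import numpy as np

def match(z_obs, z_pred, H, P, R, g=3.0):
    """Innovation, innovation covariance, and Mahalanobis validation gate."""
    v = z_obs - z_pred                       # innovation
    S = H @ P @ H.T + R                      # innovation covariance (error propagation law)
    d2 = float(v.T @ np.linalg.inv(S) @ v)   # squared Mahalanobis distance
    return (v, S) if d2 <= g ** 2 else None  # accept the pairing only inside the gate
```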

Matching: Example

Matching: Example
To find correspondences (pairs) of predicted and observed features, we use the Mahalanobis distance.

Estimation: Applying the Kalman Filter
Kalman filter gain; update of the robot's position estimate; the associated variance.
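In matrix form, the estimation step uses the innovation v and its covariance S from the matching step above; a sketch (variable names are mine):

```python
import numpy as np

def estimate(p_pred, P_pred, v, S, H):
    """Kalman filter estimation: gain, position update, and the associated covariance."""
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman filter gain
    p_new = p_pred + K @ v                # fuse the predicted position with the innovation
    P_new = P_pred - K @ S @ K.T          # associated (reduced) variance
    return p_new, P_new
```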

Estimation: 1D Case
For the one-dimensional case, we can show that the estimation corresponds to the one-dimensional Kalman filter presented earlier.

Estimation: Example
Kalman filter estimation of the new robot position: by fusing the prediction of the robot position (magenta) with the innovation gained by the measurements (green), we get the updated estimate of the robot position (red).

Current Research Topics: Monte Carlo Localization
One of the most popular particle filter methods for robot localization. Reference: Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun, "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots", Proc. 16th National Conference on Artificial Intelligence (AAAI'99), July 1999.

MCL in action “Monte Carlo” Localization -- refers to the resampling of the distribution each time a new observation is integrated

Monte Carlo Localization
The probability density function is represented by samples randomly drawn from it. It can represent multi-modal distributions, and can thus localize the robot globally. It considerably reduces the amount of memory required and can integrate measurements at a higher rate. The state is not discretized, so the method is more accurate than grid-based methods. It is easy to implement.

“Probabilistic Robotics” - Sebastian Thrun
Bayes' rule: p(A | B) = p(B | A) · p(A) / p(B)
Definition of marginal probability: p(A) = Σ_B p(A, B)
Definition of conditional probability: p(A) = Σ_B p(A | B) · p(B)

Setting up the problem
The robot does (or can be modeled to) alternate between:
sensing: getting range observations o1, o2, o3, …, ot-1, ot
acting: driving around (or ferrying?) a1, a2, a3, …, at-1
We want to know rt, the position of the robot at time t; but we'll settle for p(rt), the probability distribution for rt. What kind of thing is p(rt)?
We do know:
m, the map of the environment
p(o | r, m), the sensor model
p(rnew | rold, a, m), the accuracy of a desired action a

Robot modeling
p(o | r, m), the sensor model: given the map m and location r, the probability of each potential observation o (e.g., for the two scans pictured on the slide, p(o | r, m) = 0.75 and p(o | r, m) = 0.05).
p(rnew | rold, a, m), the action model: "probabilistic kinematics", i.e., encoder uncertainty. Red lines indicate the commanded action; the cloud indicates the likelihood of various final states.

Robot modeling: how-to
p(o | r, m), the sensor model, and p(rnew | rold, a, m), the action model, can be obtained by:
(0) modeling the physics of the sensors/actuators (with error estimates): theoretical modeling;
(1) measuring lots of sensing/action results and creating a model from them: empirical modeling. Take N measurements, find the mean m and standard deviation s, and then use a Gaussian model, or some other easily-manipulated model, e.g. a box model, p(x) = 1 if |x - m| ≤ s and 0 otherwise, or a triangular model, p(x) = 1 - |x - m|/s if |x - m| ≤ s and 0 otherwise;
(2) making something up...
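In code, the Gaussian model and the two "easily-manipulated" alternatives reconstructed above (a box and a triangular model around the sample mean) might look like this; the box and triangular forms are unnormalized, as on the slide:

```python
import math

def gaussian_model(x, m, s):
    """Gaussian likelihood built from the sample mean m and standard deviation s."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def box_model(x, m, s):
    """p(x) = 1 within one standard deviation of the mean, 0 outside (unnormalized)."""
    return 1.0 if abs(x - m) <= s else 0.0

def triangular_model(x, m, s):
    """p(x) falls off linearly from 1 at the mean to 0 at one standard deviation."""
    return max(0.0, 1.0 - abs(x - m) / s)
```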

Monte Carlo Localization
1. Start by assuming p(r0) is the uniform distribution: take K samples of r0 and weight each with an importance factor of 1/K.
2. Get the current sensor observation, o1. For each sample point r0, multiply the importance factor by p(o1 | r0, m).
3. Normalize (make sure the importance factors add to 1). You now have an approximation of p(r1 | o1, …, m), and the distribution is no longer uniform.
4. Create r1 samples by dividing up large clumps: each point spawns new ones in proportion to its importance factor.
5. The robot moves, a1. For each sample r1, move it according to the model p(r2 | a1, r1, m). Then repeat from step 2 with the next observation.
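A compact sketch of one MCL cycle; sensor_model and motion_model are hypothetical stand-ins for p(o | r, m) and p(rnew | rold, a, m), and the 1D landmark example at the end is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, weights, action, observation, motion_model, sensor_model):
    """One Monte Carlo Localization cycle: weight by the observation, resample, then move."""
    # SEE: multiply each importance factor by p(o | r, m), then normalize
    weights = weights * np.array([sensor_model(observation, r) for r in particles])
    weights = weights / weights.sum()
    # Resample: large clumps of probability spawn proportionally many new particles
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # ACT: move each particle according to p(rnew | rold, a, m)
    particles = np.array([motion_model(r, action) for r in particles])
    return particles, weights

# Toy usage: 1D pose, Gaussian sensor likelihood around a known landmark at 5.0
landmark = 5.0
sensor = lambda o, r: np.exp(-0.5 * ((o - abs(landmark - r)) / 0.3) ** 2)
motion = lambda r, a: r + a + rng.normal(0.0, 0.1)
particles = rng.uniform(0.0, 10.0, size=500)       # K samples from the uniform prior
weights = np.full(500, 1.0 / 500)
particles, weights = mcl_step(particles, weights, action=1.0, observation=3.0,
                              motion_model=motion, sensor_model=sensor)
```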

MCL in action “Monte Carlo” Localization -- refers to the resampling of the distribution each time a new observation is integrated

Discussion: MCL

References
Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun, "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots", Proc. 16th National Conference on Artificial Intelligence (AAAI'99), July 1999.
Dieter Fox, Wolfram Burgard, Sebastian Thrun, "Markov Localization for Mobile Robots in Dynamic Environments", Journal of Artificial Intelligence Research 11 (1999), 391-427.
Sebastian Thrun, "Probabilistic Algorithms in Robotics", Technical Report CMU-CS-00-126, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 2000.