Localization and Map Building


5 Localization and Map Building
Noise and aliasing; odometric position estimation
To localize or not to localize
Belief representation
Map representation
Probabilistic map-based localization
Other examples of localization systems
Autonomous map building
[Diagram: the general control scheme for mobile robot localization: Perception and Localization (using the Global Map, Environment Model and Local Map) feed Cognition, which produces a Path for Motion Control; "Position" links the Real World Environment to the Global Map.]

5.1 Localization: Where Am I?
Perception
Odometry, dead reckoning
Localization based on external sensors, beacons or landmarks
Probabilistic map-based localization

5.2 Challenges of Localization
Knowing the absolute position (e.g. from GPS) is not sufficient: localization has to be expressed in relation to the environment, at human scale.
Planning in the Cognition step requires more than only position as input.
Perception and motion play an important role:
Sensor noise
Sensor aliasing
Effector noise
Odometric position estimation

5.2.1 Sensor Noise
Sensor noise is mainly influenced by the environment (e.g. surface, illumination …) or by the measurement principle itself (e.g. interference between ultrasonic sensors).
Sensor noise drastically reduces the useful information in sensor readings. The solution:
take multiple readings into account
employ temporal and/or multi-sensor fusion

5.2.2 Sensor Aliasing
In robots, non-uniqueness of sensor readings is the norm.
Even with multiple sensors, there is a many-to-one mapping from environmental states to the robot's perceptual inputs.
Therefore the amount of information perceived by the sensors is generally insufficient to identify the robot's position from a single reading.
The robot's localization is usually based on a series of readings; sufficient information is recovered by the robot over time.

5.2.3 Effector Noise: Odometry, Dead Reckoning
Odometry and dead reckoning: the position update is based on proprioceptive sensors.
Odometry: wheel sensors only
Dead reckoning: also heading sensors
The movement of the robot, sensed with wheel encoders and/or heading sensors, is integrated to compute the position.
Pros: straightforward, easy
Cons: errors are integrated, so they are unbounded
Using additional heading sensors (e.g. a gyroscope) might help to reduce the accumulated errors, but the main problems remain the same.
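The integration described above can be sketched for a differential-drive robot. This is a minimal example, not the slides' exact code; the wheelbase and encoder increments are made-up values.

```python
import math

def odometry_step(x, y, theta, ds_left, ds_right, b):
    """Integrate one pair of wheel-encoder increments into the pose.

    ds_left, ds_right: distance traveled by each wheel [m]
    b: wheelbase (distance between the wheels) [m]
    """
    ds = (ds_right + ds_left) / 2.0      # distance traveled by the robot center
    dtheta = (ds_right - ds_left) / b    # change of heading
    # Evaluate the direction at the midpoint of the motion (standard approximation)
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta

# Driving straight: both wheels advance equally, heading stays unchanged
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = odometry_step(*pose, ds_left=0.1, ds_right=0.1, b=0.5)
print(pose)   # approximately (1.0, 0.0, 0.0)
```

Any bias in `ds_left`/`ds_right` is integrated at every step, which is exactly why the error is unbounded.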

5.2.3 Odometry: Error Sources
deterministic (systematic) vs. non-deterministic (non-systematic)
Deterministic errors can be eliminated by proper calibration of the system.
Non-deterministic errors have to be described by error models and will always lead to uncertain position estimates.
Major error sources:
Limited resolution during integration (time increments, measurement resolution …)
Misalignment of the wheels (deterministic)
Unequal wheel diameters (deterministic)
Variation in the contact point of the wheel
Unequal floor contact (slipping, non-planar surface …)
…

5.2.3 Odometry: Classification of Integration Errors
Range error: integrated path length (distance) of the robot's movement (sum of the wheel movements)
Turn error: similar to range error, but for turns (difference of the wheel motions)
Drift error: a difference in the error of the wheels leads to an error in the robot's angular orientation
Over long periods of time, turn and drift errors far outweigh range errors! Consider moving forward on a straight line along the x axis. The error in the y-position introduced by a move of d meters has a component d·sin(Δθ), which can be quite large as the angular error Δθ grows.
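To make the last point concrete, a quick numeric check of the d·sin(Δθ) term (the distance and heading error are illustrative numbers, not from the slides):

```python
import math

def lateral_error(d, dtheta_deg):
    """Lateral (y) error after driving d meters with heading error dtheta."""
    return d * math.sin(math.radians(dtheta_deg))

# A small 2-degree heading error already shifts a 10 m straight run sideways:
print(round(lateral_error(10.0, 2.0), 3))   # ~0.349 m
# ...whereas a 1 cm range error stays a 1 cm error regardless of distance.
```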

5.2.4 Odometry: The Differential Drive Robot (1)

5.2.4 Odometry: The Differential Drive Robot (2)
Kinematics

5.2.4 Odometry: The Differential Drive Robot (3)
Error model

5.2.4 Odometry: Growth of Pose Uncertainty for Straight-Line Movement
Note: errors perpendicular to the direction of movement grow much faster!
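The faster perpendicular growth can be reproduced with the standard first-order error propagation for a differential-drive robot, Σ' = Fp Σ Fpᵀ + FΔ Q FΔᵀ. This is a sketch; the wheelbase and the error constants k_r, k_l are made-up values.

```python
import numpy as np

def propagate(Sigma, theta, ds_r, ds_l, b, k_r, k_l):
    """One step of odometric covariance propagation through the motion Jacobians."""
    ds = (ds_r + ds_l) / 2.0
    dth = (ds_r - ds_l) / b
    t = theta + dth / 2.0                       # heading at mid-step
    Fp = np.array([[1.0, 0.0, -ds * np.sin(t)],
                   [0.0, 1.0,  ds * np.cos(t)],
                   [0.0, 0.0,  1.0]])
    Fd = np.array([[np.cos(t)/2 - ds*np.sin(t)/(2*b), np.cos(t)/2 + ds*np.sin(t)/(2*b)],
                   [np.sin(t)/2 + ds*np.cos(t)/(2*b), np.sin(t)/2 - ds*np.cos(t)/(2*b)],
                   [1.0/b, -1.0/b]])
    Q = np.diag([k_r * abs(ds_r), k_l * abs(ds_l)])  # wheel noise grows with distance
    return Fp @ Sigma @ Fp.T + Fd @ Q @ Fd.T

Sigma = np.zeros((3, 3))
for _ in range(100):                             # 100 straight 10 cm steps along x
    Sigma = propagate(Sigma, 0.0, 0.1, 0.1, b=0.5, k_r=0.01, k_l=0.01)
print(Sigma[1, 1] > Sigma[0, 0])                 # sideways (y) variance dominates
```

The y-variance dominates because heading uncertainty keeps leaking into the lateral direction at every step, exactly the Δθ effect classified above.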

5.2.4 Odometry: Growth of Pose Uncertainty for Movement on a Circle
Note: the error ellipse does not remain perpendicular to the direction of movement!

5.2.4 Odometry: Calibration of Errors I (Borenstein [5])
The unidirectional square path experiment
[Figure 1, Borenstein]

5.2.4 Odometry: Calibration of Errors II (Borenstein [5])
The bidirectional square path experiment
[Figures 2/3, Borenstein]

5.2.4 Odometry: Calibration of Errors III (Borenstein [5])
The deterministic and non-deterministic errors

5.3 To Localize or Not?
How to navigate between A and B: navigate without hitting obstacles, and detect the goal location.
This is possible by always following the left wall. However, how can the robot detect that the goal is reached?

5.3 Behavior-Based Navigation

5.3 Model-Based Navigation

5.4 Belief Representation
a) Continuous map with single hypothesis
b) Continuous map with multiple hypotheses
c) Discretized map with probability distribution
d) Discretized topological map with probability distribution

5.4 Belief Representation: Characteristics
Continuous:
Precision bound by sensor data
Typically single-hypothesis pose estimate
Lost when diverging (for single hypothesis)
Compact representation, typically reasonable processing power
Discrete:
Precision bound by the resolution of the discretization
Typically multiple-hypothesis pose estimate
Never lost (when it diverges, it converges to another cell)
Significant memory and processing power needed (not the case for topological maps)

5.4 Bayesian Approach: A Taxonomy of Probabilistic Models
[Diagram, courtesy of Julien Diard: from more general to more specific — Bayesian Programs (S: state, O: observation, A: action), Bayesian Networks, DBNs, Markov Chains, Bayesian Filters (Markov Localization, Particle Filters, Kalman Filters), discrete/semi-continuous/continuous HMMs, MCML, MDPs, POMDPs.]

5.4.1 Single-Hypothesis Belief – Continuous Line Map

5.4.1 Single-Hypothesis Belief – Grid and Topological Map

5.4.2 Grid-Based Representation – Multiple Hypotheses
Grid size around 20 cm². Courtesy of W. Burgard

5.5 Map Representation
Map precision vs. application
Feature precision vs. map precision
Precision vs. computational complexity
Continuous representation
Decomposition (discretization)

5.5 Representation of the Environment
Environment representation:
Continuous metric → x, y, θ
Discrete metric → metric grid
Discrete topological → topological grid
Environment modeling:
Raw sensor data, e.g. laser range data, grayscale images: large volume of data, low distinctiveness at the level of individual values; makes use of all acquired information
Low-level features, e.g. lines and other geometric features: medium volume of data, average distinctiveness; filters out the useful information, still ambiguities
High-level features, e.g. doors, a car, the Eiffel tower: low volume of data, high distinctiveness; filters out the useful information, few/no ambiguities, but possibly not enough information

5.5.1 Map Representation: Continuous Line-Based
Architecture map
Representation with a set of infinite lines

5.5.2 Map Representation: Decomposition (1)
Exact cell decomposition

5.5.2 Map Representation: Decomposition (2)
Fixed cell decomposition: narrow passages disappear

5.5.2 Map Representation: Decomposition (3)
Adaptive cell decomposition

5.5.2 Map Representation: Decomposition (4)
Fixed cell decomposition: example with very small cells
Courtesy of S. Thrun

5.5.2 Map Representation: Decomposition (5)
Topological decomposition

5.5.2 Map Representation: Decomposition (6)
Topological decomposition: node, connectivity (arc)

5.5.2 Map Representation: Decomposition (7)
Topological decomposition

5.5.3 State of the Art: Current Challenges in Map Representation
The real world is dynamic
Perception is still a major challenge: error-prone, extraction of useful information difficult
Traversal of open space
How to build up topology (boundaries of nodes)
Sensor fusion
…

5.6.1 Probabilistic, Map-Based Localization (1)
Consider a mobile robot moving in a known environment. As it starts to move, say from a precisely known location, it can keep track of its location using odometry. However, after a certain amount of movement the robot will become very uncertain about its position → it must update its estimate using an observation of its environment. Observations also lead to an estimate of the robot's position, which can then be fused with the odometric estimate to get the best possible update of the robot's actual position.

5.6.1 Probabilistic, Map-Based Localization (2)
Action update: action model ACT, with o_t: encoder measurement and s_{t-1}: prior belief state; increases uncertainty.
Perception update: perception model SEE, with i_t: exteroceptive sensor inputs and s'_t: updated belief state; decreases uncertainty.

5.6.1 Improving belief state by moving

5.6.1 Probabilistic, Map-Based Localization (3)
Given the position estimate and its covariance for time k, the current control input, the current set of observations, and the map, compute the new (a posteriori) position estimate and its covariance. Such a procedure usually involves five steps:

5.6.1 The Five Steps for Map-Based Localization
1. Prediction based on the previous estimate and odometry (encoder)
2. Observation with on-board sensors (raw sensor data or extracted features)
3. Measurement prediction based on the position prediction and the map database
4. Matching of observations and predictions
5. Estimation (fusion) → position update (a posteriori position estimate)
[Diagram: the perception pipeline linking observation, map database, measurement and position prediction, matching, and estimation.]

5.6.1 Markov ↔ Kalman Filter Localization
Markov localization: localization starting from any unknown position; recovers from ambiguous situations. However, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid). The required memory and computing power can thus become very large if a fine grid is used.
Kalman filter localization: tracks the robot and is inherently very precise and efficient. However, if the uncertainty of the robot becomes too large (e.g. after a collision with an object) the Kalman filter will fail and the position is definitively lost.

5.6.2 Markov Localization (1)
Markov localization uses an explicit, discrete representation for the probability of all positions in the state space. This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions). During each update, the probability of each state (element) of the entire space is updated.

5.6.2 Markov Localization (2): Applying Probability Theory to Robot Localization
P(A): probability that A is true, e.g. p(r_t = l): probability that the robot r is at position l at time t.
We wish to compute the probability of each individual robot position given actions and sensor measurements.
P(A|B): conditional probability of A given that we know B, e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t.
Product rule: p(A ∧ B) = p(A|B) p(B)
Bayes rule: p(A|B) = p(B|A) p(A) / p(B)

5.6.2 Markov Localization (3): Applying Probability Theory to Robot Localization
Bayes rule maps from a belief state and a sensor input to a refined belief state (SEE):
p(l|i) = p(i|l) p(l) / p(i)
p(l): belief state before the perceptual update process
p(i|l): probability of getting measurement i when being at position l; consult the robot's map and identify the probability of a certain sensor reading for each possible position in the map
p(i): normalization factor so that the sum over all l in L equals 1

5.6.2 Markov Localization (4): Applying Probability Theory to Robot Localization
The theorem of total probability maps from a belief state and an action to a new belief state (ACT), summing over all possible ways in which the robot may have reached l.
Markov assumption: the update only depends on the previous state and its most recent action and perception.
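The ACT and SEE updates can be sketched for a small discrete set of positions. This is a toy illustration: `motion` and the likelihood vector stand in for p(l|l',o) and p(i|l), with made-up numbers.

```python
def act_update(belief, motion):
    """ACT: bel'(l) = sum over l' of p(l | l', o) * bel(l'). Increases uncertainty."""
    n = len(belief)
    return [sum(motion[lp][l] * belief[lp] for lp in range(n)) for l in range(n)]

def see_update(belief, likelihood):
    """SEE: bel'(l) proportional to p(i | l) * bel(l), normalized. Decreases uncertainty."""
    posterior = [p_i_l * b for p_i_l, b in zip(likelihood, belief)]
    eta = sum(posterior)                    # p(i), the normalization factor
    return [p / eta for p in posterior]

# Three positions on a ring; the robot moves one cell right with prob. 0.8, stays with 0.2
motion = [[0.2, 0.8, 0.0],
          [0.0, 0.2, 0.8],
          [0.8, 0.0, 0.2]]
bel = [1.0, 0.0, 0.0]                       # certain start at position 0
bel = act_update(bel, motion)               # spreads out: [0.2, 0.8, 0.0]
bel = see_update(bel, [0.1, 0.9, 0.1])      # sensor strongly suggests position 1
print(bel)                                  # sharply peaked at position 1
```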

5.6.2 Markov Localization: Case Study 1 – Topological Map (1)
The Dervish robot: topological localization with sonar

5.6.2 Markov Localization: Case Study 1 – Topological Map (2)
Topological map of an office-type environment

5.6.2 Markov Localization: Case Study 1 – Topological Map (3)
Update of the belief state for position n given the percept-pair i:
p(n|i): new likelihood of being in position n
p(n): current belief state
p(i|n): probability of seeing i in n (see table)
No explicit action update! However, the robot is moving, and therefore we can apply a combination of action and perception update. t−i is used instead of t−1 because the topological distance between n' and n can vary depending on the specific topological map.

5.6.2 Markov Localization: Case Study 1 – Topological Map (4)
The likelihood is calculated by multiplying the probability of generating perceptual event i at position n by the probability of having failed to generate perceptual event s at all nodes between n' and n.

5.6.2 Markov Localization: Case Study 1 – Topological Map (5)
Example calculation: assume that the robot has two nonzero belief states, p(1-2) = 1.0 and p(2-3) = 0.2*, and that it is facing east with certainty. State 2-3 will potentially progress to 3 and 3-4 to 4. States 3 and 3-4 can be eliminated because the likelihood of detecting an open door there is zero. The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the likelihood of detecting anything at node 3, and (b) the likelihood of detecting a hallway on the left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting nothing at node 3-4 is 1.0.)
This leads to: 0.2 × [0.6 × 0.4 + 0.4 × 0.05] × 0.7 × [0.9 × 0.1] → p(4) = 0.003.
A similar calculation for progress from 1-2 gives p(2) = 0.3.
* Note that the probabilities do not sum to one; for simplicity, normalization was avoided in this example.
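The arithmetic of the example can be checked directly; the factors below are the conditional probabilities quoted in the slide's product.

```python
# p(4) = p(2-3) x [terms for node 3] x 0.7 x [hallway-left, door-right at node 4]
p23 = 0.2
p4 = p23 * (0.6 * 0.4 + 0.4 * 0.05) * 0.7 * (0.9 * 0.1)
print(round(p4, 3))   # 0.003, as on the slide
```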

5.6.2 Markov Localization: Case Study 2 – Grid Map (1)
Fine fixed decomposition grid (x, y, θ), 15 cm × 15 cm × 1°.
Action and perception update.
Action update: sum over previous possible positions and the motion model (discrete version of eq. 5.22).
Perception update: given perception i, what is the probability of being at location l?
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (2)
The critical challenge is the calculation of p(i|l): the number of possible sensor readings and geometric contexts is extremely large. p(i|l) is computed using a model of the robot's sensor behavior, its position l, and the local metric map around l.
Assumptions: the measurement error can be described by a distribution with a mean; there is a non-zero chance for any measurement.
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (3)
The 1D case:
Start: no knowledge, thus a uniform probability distribution.
Robot perceives the first pillar: seeing only one pillar, the probability of being at pillar 1, 2 or 3 is equal.
Robot moves: the action model estimates the new probability distribution based on the previous one and the motion.
Robot perceives the second pillar: based on all prior knowledge, the probability of being at pillar 2 becomes dominant.
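The 1D scenario can be simulated on a small grid. The pillar positions, motion step and sensor probabilities below are made up for illustration; motion is kept deterministic so the disambiguation effect stands out.

```python
def normalize(b):
    s = sum(b)
    return [x / s for x in b]

def sense_pillar(belief, pillars, p_hit=0.8, p_miss=0.05):
    """SEE: a pillar was detected, so pillar cells become more likely."""
    return normalize([b * (p_hit if i in pillars else p_miss)
                      for i, b in enumerate(belief)])

def move(belief, step):
    """ACT: deterministic shift on a cyclic corridor (no motion noise, for clarity)."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

n, pillars = 12, {2, 5, 9}
belief = [1.0 / n] * n                     # start: uniform, no knowledge
belief = sense_pillar(belief, pillars)     # first pillar: three equal peaks
belief = move(belief, 3)                   # peaks shift along with the robot
belief = sense_pillar(belief, pillars)     # second pillar: only one peak survives
print(max(range(n), key=lambda i: belief[i]))   # 5: the robot must be at pillar 2
```

Because the pillars are unevenly spaced, only one of the shifted peaks lands on a pillar again, which is exactly why the second observation makes pillar 2 dominant.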

5.6.2 Markov Localization: Case Study 2 – Grid Map (4)
Example 1: office building (positions 3, 4 and 5)
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (5)
Example 2: museum, laser scan 1
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (6)
Example 2: museum, laser scan 2
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (7)
Example 2: museum, laser scan 3
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (8)
Example 2: museum, laser scan 13
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (9)
Example 2: museum, laser scan 21
Courtesy of W. Burgard

5.6.2 Markov Localization: Case Study 2 – Grid Map (10)
Fine fixed decomposition grids result in a huge state space: substantial processing power and a large amount of memory are required.
Reducing complexity: various approaches have been proposed; the main goal is to reduce the number of states that are updated in each step.
Randomized sampling / particle filter: approximate the belief state by representing only a 'representative' subset of all states (possible locations), e.g. update only 10% of all possible locations. The sampling process is typically weighted, e.g. put more samples around the local peaks in the probability density function. However, you have to ensure that some less likely locations are still tracked, otherwise the robot might get lost.
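The weighted sampling idea can be sketched with the low-variance (systematic) resampling step commonly used in particle filters. The particles and weights are toy values, not from the slides.

```python
import random

def systematic_resample(particles, weights):
    """Draw len(particles) samples with probability proportional to weight,
    using a single random offset (low-variance / systematic resampling)."""
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0, step)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:                 # advance to the particle covering u
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

random.seed(0)
particles = ['A', 'B', 'C', 'D']       # candidate robot locations
weights = [0.05, 0.80, 0.05, 0.10]     # 'B' sits near the peak of the pdf
resampled = systematic_resample(particles, weights)
print(resampled)                       # mostly 'B': samples cluster at the peak
```

Note how low-weight particles like 'A' can still survive a draw, which is the "keep tracking less likely locations" point made above.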

5.6.3 Kalman Filter Localization

5.6.3 Introduction to the Kalman Filter (1)
Two measurements
Weighted least squares
Finding the minimum error
After some calculation and rearrangement
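The two-measurement weighted least-squares result can be written out directly; this is the scalar form the rearrangement on the slide leads to (the measurement values are illustrative).

```python
def fuse(x1, var1, x2, var2):
    """Variance-weighted fusion of two measurements of the same quantity.
    Minimizes the weighted squared error; each weight is an inverse variance."""
    x_hat = (var2 * x1 + var1 * x2) / (var1 + var2)
    var_hat = 1.0 / (1.0 / var1 + 1.0 / var2)   # always <= min(var1, var2)
    return x_hat, var_hat

# The more uncertain measurement pulls the estimate only slightly:
x_hat, var_hat = fuse(10.0, 1.0, 12.0, 4.0)
print(x_hat, var_hat)   # 10.4 0.8
```

Fusing never increases uncertainty: the fused variance is below both input variances, which is the core property the Kalman filter inherits.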

5.6.3 Introduction to the Kalman Filter (2)
In Kalman filter notation

5.6.3 Introduction to the Kalman Filter (3)
Dynamic prediction (robot moving): u = velocity, w = noise.
Motion: combining fusion and dynamic prediction.
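A one-dimensional sketch of the cycle this slide describes: dynamic prediction with velocity u plus motion noise w, followed by the fusion (measurement) step. All numbers are illustrative.

```python
def predict(x, var, u, dt, var_w):
    """Dynamic prediction: the robot moves, uncertainty grows."""
    return x + u * dt, var + var_w * dt

def update(x, var, z, var_z):
    """Measurement fusion: the Kalman gain K blends prediction and observation."""
    K = var / (var + var_z)
    return x + K * (z - x), (1 - K) * var

x, var = 0.0, 0.01                                  # well-known start
x, var = predict(x, var, u=1.0, dt=1.0, var_w=0.1)  # drive 1 m: var grows to 0.11
x, var = update(x, var, z=1.2, var_z=0.1)           # sensor says 1.2
print(round(x, 3), round(var, 3))                   # estimate between 1.0 and 1.2
```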

5.6.3 Kalman Filter for Mobile Robot Localization: Robot Position Prediction
In a first step, the robot's position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k):
f: odometry function

5.6.3 Kalman Filter for Mobile Robot Localization: Robot Position Prediction Example (Odometry)

5.6.3 Kalman Filter for Mobile Robot Localization: Observation
The second step is to obtain the observation Z(k+1) (measurements) from the robot's sensors at the new location at time k+1.
The observation usually consists of a set of n₀ single observations z_j(k+1) extracted from the different sensor signals. It can represent raw data scans as well as features like lines, doors or any kind of landmarks.
The parameters of the targets are usually observed in the sensor frame {S}. Therefore the observations have to be transformed to the world frame {W}, or the measurement predictions have to be transformed to the sensor frame {S}. This transformation is specified in the function h_i (seen later).

5.6.3 Kalman Filter for Mobile Robot Localization: Observation Example
Raw data of the laser scanner; extracted lines; extracted lines in model space; sensor (robot) frame

5.6.3 Kalman Filter for Mobile Robot Localization: Measurement Prediction
In the next step we use the predicted robot position and the map M(k) to generate multiple predicted observations z_t. They have to be transformed into the sensor frame. We can now define the measurement prediction as the set containing all n_i predicted observations. The function h_i is mainly the coordinate transformation between the world frame and the sensor frame.

5.6.3 Kalman Filter for Mobile Robot Localization: Measurement Prediction Example
For the prediction, only the walls that are in the field of view of the robot are selected. This is done by linking the individual lines to the nodes of the path.

5.6.3 Kalman Filter for Mobile Robot Localization Measurement Prediction: Example The generated measurement predictions have to be transformed to the robot frame {R} According to the figure in previous slide the transformation is given by and its Jacobian by

5.6.3 Kalman Filter for Mobile Robot Localization: Matching
Assignment from observations z_j(k+1) (gained by the sensors) to the targets z_t (stored in the map).
For each measurement prediction for which a corresponding observation is found, we calculate the innovation and its innovation covariance, found by applying the error propagation law.
The validity of the correspondence between measurement and prediction can, for example, be evaluated through the Mahalanobis distance.
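The validation test can be sketched with numpy; the innovation covariance S and the gate g below are illustrative numbers, not values from the slides.

```python
import numpy as np

def mahalanobis_sq(z, z_pred, S):
    """Squared Mahalanobis distance of the innovation v = z - z_pred."""
    v = z - z_pred
    return float(v @ np.linalg.inv(S) @ v)

def match(z, z_pred, S, g=2.0):
    """Accept the pairing if the innovation lies inside the validation gate g."""
    return mahalanobis_sq(z, z_pred, S) <= g * g

S = np.diag([0.04, 0.01])            # innovation covariance of a 2D feature (illustrative)
print(match(np.array([0.50, 1.00]), np.array([0.52, 1.01]), S))   # True: close pair
print(match(np.array([0.50, 1.00]), np.array([1.20, 1.30]), S))   # False: far apart
```

Weighting by S⁻¹ makes the gate tighter along well-measured directions, which plain Euclidean distance cannot do.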

5.6.3 Kalman Filter for Mobile Robot Localization: Matching Example

5.6.3 Kalman Filter for Mobile Robot Localization: Matching Example
To find correspondences (pairs) of predicted and observed features, we use the Mahalanobis distance, with

5.6.3 Kalman Filter for Mobile Robot Localization: Estimation, Applying the Kalman Filter
Kalman filter gain:
Update of the robot's position estimate:
The associated variance:
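In matrix form, the three quantities on this slide fit together as sketched below. This is a generic EKF update, not the slides' specific line-feature model; H, R and all numbers are illustrative.

```python
import numpy as np

def ekf_update(x, P, v, S, H):
    """Kalman gain, position update, and associated covariance.
    v: innovation, S: innovation covariance = H P H^T + R."""
    K = P @ H.T @ np.linalg.inv(S)     # Kalman filter gain
    x_new = x + K @ v                  # updated position estimate
    P_new = P - K @ S @ K.T            # associated (reduced) covariance
    return x_new, P_new

x = np.array([1.0, 2.0, 0.1])          # predicted pose (x, y, theta)
P = np.diag([0.2, 0.2, 0.05])
H = np.array([[1.0, 0.0, 0.0]])        # illustrative: we directly observe x
R = np.array([[0.1]])
S = H @ P @ H.T + R
v = np.array([0.3])                    # observation was 1.3, prediction 1.0
x, P = ekf_update(x, P, v, S, H)
print(x[0], P[0, 0] < 0.2)             # pulled toward 1.3; variance shrank
```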

5.6.3 Kalman Filter for Mobile Robot Localization: Estimation, the 1D Case
For the one-dimensional case, we can show that the estimation corresponds to the one-dimensional Kalman filter presented earlier.

5.6.3 Kalman Filter for Mobile Robot Localization: Estimation Example
Kalman filter estimation of the new robot position: by fusing the prediction of the robot position (magenta) with the innovation gained by the measurements (green), we get the updated estimate of the robot position (red).

5.6.3 Autonomous Indoor Navigation (Pygmalion, EPFL)
Very robust on-the-fly localization; one of the first systems with probabilistic sensor fusion.
47 steps, 78 m length, realistic office environment, conducted 16 times, > 1 km overall distance.
Partially difficult surfaces (laser), partially few vertical edges (vision).

5.7.1 Other Localization Methods (not probabilistic) Localization Based on Artificial Landmarks

5.7.1 Other Localization Methods (not probabilistic) Localization Based on Artificial Landmarks

5.7.1 Other Localization Methods (not probabilistic) Localization Based on Artificial Landmarks

5.7.3 Other Localization Methods (not probabilistic) Positioning Beacon Systems: Triangulation

5.7.3 Other Localization Methods (not probabilistic) Positioning Beacon Systems: Triangulation

5.7.3 Other Localization Methods (not probabilistic) Positioning Beacon Systems: Triangulation

5.7.3 Other Localization Methods (not probabilistic) Positioning Beacon Systems: Docking

5.7.3 Other Localization Methods (not probabilistic) Positioning Beacon Systems: Bar-Code

5.7.3 Other Localization Methods (not probabilistic) Positioning Beacon Systems

5.8 Autonomous Map Building
Starting from an arbitrary initial point, a mobile robot should be able to autonomously explore the environment with its on-board sensors, gain knowledge about it, interpret the scene, build an appropriate map, and localize itself relative to this map.
SLAM: the Simultaneous Localization and Mapping problem

5.8 Map Building: How to Establish a Map
1. By hand
2. Automatically: map building. The robot learns its environment. Motivation: building by hand is hard and costly; environments change dynamically; the environment looks different through the robot's own perception.
3. Basic requirements of a map:
a way to incorporate newly sensed information into the existing world model
information and procedures for estimating the robot's position
information for path planning and other navigation tasks (e.g. obstacle avoidance)
Measures of quality of a map: topological correctness, metrical correctness.
But: most environments are a mixture of predictable and unpredictable features → hybrid approach, model-based vs. behavior-based depending on predictability.

5.8 Map Building: The Problems
1. Map maintenance: keeping track of changes in the environment (e.g. a cupboard disappears); e.g. maintain a measure of belief in each environment feature.
2. Representation and reduction of uncertainty: the position of the robot determines the position of the wall, and the position of the wall determines the position of the robot; probability densities for feature positions; additional exploration strategies.

5.8.1 General Map Building Schematics

5.8.1 Map Representation
M is a set of n probabilistic feature locations. Each feature is represented by the covariance matrix Σ_t and an associated credibility factor c_t. c_t lies between 0 and 1 and quantifies the belief in the existence of the feature in the environment. a and b define the learning and forgetting rates, and n_s and n_u are the numbers of matched and unobserved predictions up to time k, respectively.

5.8.1 Autonomous Map Building: Stochastic Map Technique
Stacked system state vector:
State covariance matrix:

5.8.1 Autonomous Map Building: Example of Feature-Based Mapping (EPFL)

5.8.2 Cyclic Environments
Small local errors accumulate to arbitrarily large global errors! This is usually irrelevant for navigation; however, when closing loops, the global error does matter.
Courtesy of Sebastian Thrun

5.8.2 Dynamic Environments
Dynamic changes require continuous mapping. If the extraction of high-level features were possible, mapping in dynamic environments would become significantly more straightforward (e.g. distinguishing between a human and a wall). Environment modeling is a key factor for robustness.

5.8 Map Building: Exploration and Graph Construction
1. Exploration: provides correct topology; must recognize already-visited locations; backtracking for unexplored openings.
2. Graph construction: where to put the nodes? Topology-based: at distinctive locations. Metric-based: where features disappear or become visible.