Simultaneous Localization and Mapping

Presentation transcript:

Simultaneous Localization and Mapping (SLAM)

Reinforcement learning to combine different map representations. Topics: occupancy grid, feature map, localization, particle filters, FastSLAM, reinforcement learning to combine different map representations.

Occupancy grid / grid map: a simple black-and-white picture of the environment. Good for dense places.

Feature map Good for sparse places

Localization: the map is known, and the sensor data and the robot's kinematics are known. Determine the robot's position.

Localization, discrete time. $\theta$ – landmark positions; $s_t$ – robot position; $u_t$ – control; $z_t$ – sensor information. $(s_1, u_1, z_1) \rightarrow s_2$; $(s_2, u_2, z_2) \rightarrow s_3$.

Particle filter requirements. Motion model: $p(s_t \mid u_t, s_{t-1})$. If the current position is $(x, y)$ and the robot's movement is $(dx, dy)$, the new coordinates are $(x+dx, y+dy)$ plus noise. Usually the noise is Gaussian.
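A minimal sketch of such a motion model in Python (the noise scale `sigma` is an assumed parameter, not given in the slides):

```python
import random

def sample_motion_model(x, y, dx, dy, sigma=0.05):
    """Propagate one particle: apply the commanded motion (dx, dy)
    and add zero-mean Gaussian noise to each coordinate."""
    return (x + dx + random.gauss(0.0, sigma),
            y + dy + random.gauss(0.0, sigma))
```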

Particle filter requirements. Measurement model: $p(z_t \mid s_t, \theta, n_t)$. $\theta$ – the collection of landmark positions $\theta_1, \theta_2, \ldots, \theta_K$; $n_t$ – the landmark observed at time $t$. In the simple case each landmark is uniquely identifiable.

Particle filter. We have N particles; each particle is simply a hypothesis of the current position. For each particle: update its position using the motion model, then assign a weight using the measurement model. Normalize the importance weights so that they sum to 1. Resample N particles with probabilities proportional to the weights.

Particle filter code
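A minimal sketch of one particle-filter update in Python; the helpers `motion_model(particle, u)` and `measurement_likelihood(z, particle)` are hypothetical names standing in for the models above:

```python
import random

def particle_filter_step(particles, u, z, motion_model, measurement_likelihood):
    """One particle-filter update: move, weight, normalize, resample."""
    # 1. Update each particle's position using the motion model.
    moved = [motion_model(p, u) for p in particles]
    # 2. Assign a weight to each particle using the measurement model.
    weights = [measurement_likelihood(z, p) for p in moved]
    # 3. Normalize the importance weights so that they sum to 1.
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 4. Resample N particles with probability proportional to weight.
    return random.choices(moved, weights=weights, k=len(moved))
```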

SLAM. In the SLAM problem we try to build the map as well. Most common methods: Kalman filters (a normal distribution in a high-dimensional space) and particle filters (what does a particle represent here?).

FastSLAM. We try to determine the robot and landmark locations based on the control and sensor data. N particles; each particle stores the robot position and a Gaussian distribution for each of the K landmarks. Time complexity: $O(N \log K)$. Space complexity – ?

FastSLAM. If we know the path $(s_1, \ldots, s_t)$, then the landmark estimates $\theta_1$ and $\theta_2$ are conditionally independent.

FastSLAM factorization: $p(s^t, \theta \mid z^t, u^t, n^t) = p(s^t \mid z^t, u^t, n^t) \prod_k p(\theta_k \mid s^t, z^t, u^t, n^t)$, where $s^t$ denotes the path $s_1, \ldots, s_t$. We have K+1 estimation problems: estimation of the path, and estimation of each landmark's location, the latter done with a Kalman filter.
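To make the factorization concrete, here is a minimal sketch of the per-particle data structure it implies: each particle carries its own pose hypothesis plus one small Gaussian (mean and covariance) per landmark, each updated by its own Kalman filter. The sketch is illustrative and assumes, for simplicity, that the landmark position is observed directly with Gaussian noise; FastSLAM proper uses an EKF with range-bearing measurements.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Landmark:
    mean: np.ndarray   # estimated 2D landmark position (theta_k)
    cov: np.ndarray    # 2x2 covariance of that estimate

@dataclass
class Particle:
    pose: np.ndarray                                # robot position hypothesis s_t
    landmarks: dict = field(default_factory=dict)   # landmark id n_t -> Landmark

def update_landmark(lm: Landmark, z: np.ndarray, R: np.ndarray) -> None:
    """Kalman update for one landmark, assuming the measurement observes
    the landmark position directly: z = theta_k + Gaussian noise with cov R."""
    S = lm.cov + R                     # innovation covariance
    K = lm.cov @ np.linalg.inv(S)      # Kalman gain
    lm.mean = lm.mean + K @ (z - lm.mean)
    lm.cov = (np.eye(2) - K) @ lm.cov
```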

FastSLAM weight calculation. The position of a landmark is modeled by a Gaussian. $w_t \sim \dfrac{p(s^t \mid z^t, u^t, n^t)}{p(s^t \mid z^{t-1}, u^t, n^{t-1})} \sim \int p(z_t \mid \theta_{n_t}, s_t, n_t)\, p(\theta_{n_t})\, d\theta_{n_t}$

FastSLAM stores the landmark positions in a balanced binary tree. The size of the tree is $O(K)$. A newly sampled particle differs from the previous one in only one leaf.

FastSLAM. We just create a new tree on top of the previous one, sharing the unchanged subtrees. Complexity: $O(\log K)$. (Video 2)
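A minimal, illustrative sketch of this path-copying idea (not the original implementation): landmarks live at the leaves of a balanced tree, and updating one landmark allocates only the nodes on the root-to-leaf path, so a resampled particle shares everything else with its parent.

```python
class Node:
    __slots__ = ("left", "right", "leaf")
    def __init__(self, left=None, right=None, leaf=None):
        self.left, self.right, self.leaf = left, right, leaf

def build(landmarks):
    """Build a balanced tree over a non-empty list of landmark estimates."""
    if len(landmarks) == 1:
        return Node(leaf=landmarks[0])
    mid = len(landmarks) // 2
    return Node(build(landmarks[:mid]), build(landmarks[mid:]))

def update(node, index, size, new_leaf):
    """Return a new tree that replaces only the leaf at `index` and shares
    all unchanged subtrees with `node`: O(log K) new nodes per update."""
    if size == 1:
        return Node(leaf=new_leaf)
    mid = size // 2
    if index < mid:
        return Node(update(node.left, index, mid, new_leaf), node.right)
    return Node(node.left, update(node.right, index - mid, size - mid, new_leaf))
```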

Combining different map representations. There are many ways to represent a map – how can we combine them? Grid map; feature map.

Model selection. Map quality parameters:
Observation likelihood: for a given particle we get the likelihood of the laser observation, then average over all particles: $I = \frac{1}{N} \sum p(z_t \mid \theta_t, s_t, n_t)$. It lies between 0 and 1; large values mean a good map.
$N_{eff}$ – effective sample size: $N_{eff} = \frac{1}{\sum w^2}$, where we assume $\sum w = 1$. It is a measure of the variance of the weights. Suppose all weights are the same – what is $N_{eff}$?
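Both quantities are one-liners given the per-particle observation likelihoods and normalized weights; a quick sketch with illustrative helper names:

```python
def map_quality(likelihoods):
    """I: average observation likelihood over the N particles (in [0, 1];
    larger values mean the map explains the laser scan better)."""
    return sum(likelihoods) / len(likelihoods)

def effective_sample_size(weights):
    """N_eff = 1 / sum(w^2) for normalized weights (sum(w) == 1).
    Equal weights give N_eff = N; one dominant weight gives N_eff close to 1."""
    return 1.0 / sum(w * w for w in weights)
```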

Reinforcement learning for model selection. SARSA (State-Action-Reward-State-Action). Actions: $\{a_g, a_f\}$ – use the grid map or the feature map. States: $S = I \times \{N^f_{eff} < N^g_{eff}\} \times \{\text{feature detected}\}$. $I$ is divided into 7 intervals (0, 0.15, 0.30, 0.45, 0.6, 0.75, 0.9, 1). "Feature detected" indicates whether a feature was detected on the current step. $7 \times 2 \times 2 = 28$ states.

Reinforcement learning for model selection. Reward: for simulations the correct robot position is known, and deviation from the correct position gives a negative reward. $\epsilon$-greedy policy with $\epsilon = 0.6$; learning rate $\alpha = 0.001$; discounting factor $\gamma = 0.9$.
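A tabular SARSA sketch with the stated hyperparameters; the state encoding, reward signal, and names below are illustrative assumptions, not code from the paper:

```python
import random
from collections import defaultdict

ACTIONS = ["a_g", "a_f"]              # use the grid map or the feature map
EPSILON, ALPHA, GAMMA = 0.6, 0.001, 0.9

Q = defaultdict(float)                # Q[(state, action)], initialized to 0

def choose_action(state):
    """Epsilon-greedy policy over the two map-selection actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, r, s_next, a_next):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
```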

The algorithm

The algorithm

Results

Multi-robot SLAM. If the environment is large, using only one robot is not enough. Centralized approach – the map is merged as the entire environment is explored. Decentralized approach – robots merge their maps when they meet each other.

Multi-robot SLAM. We need to transform between the robots' frames of reference.
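Merging maps means expressing one robot's map in the other's coordinate frame. A minimal 2D rigid-transform sketch; the rotation angle and translation would come from the relative pose estimated when the robots meet, and the names here are illustrative:

```python
import numpy as np

def transform_points(points, theta, tx, ty):
    """Apply a 2D rigid transform (rotate by theta, then translate by (tx, ty))
    to an (N, 2) array of map points given in the other robot's frame."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + np.array([tx, ty])
```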

Reinforcement learning for model selection. Two robots meet each other and decide how to share their information. Actions: $a_{dm}$ – don't merge the maps; $a_{mi}$ – merge using a simple transformation matrix; $a_{mg}$ – use a grid-based heuristic to improve the transformation matrix; $a_{mf}$ – use a feature-based heuristic.

Reinforcement learning for model selection. States: $(I_g, I_f, N^c_{eff} < \frac{N}{2}, N^o_{eff} < \frac{N}{2})$; $3 \times 3 \times 2 \times 2 = 36$ states. $I_g$ – confidence in the transformation matrix for the grid-based method, divided into 3 intervals (0, 1/3, 2/3, 1).

Reinforcement learning for model selection. Reward: for simulations the correct robot positions are known, so we can compute the cumulative position error $E_{cum}$. $r = \frac{E^{\mu}_{cum} - E_{cum}}{E^{\mu}_{cum}}$, where $E^{\mu}_{cum}$ is the average cumulative error over several runs in which the robots merge immediately. $\epsilon$-greedy policy with $\epsilon = 0.1$, $\alpha = 0.001$, $\gamma = 0.9$.

Results

References
http://www.sce.carleton.ca/faculty/schwartz/RCTI/Seminar%20Day/Autonomous%20vehicles/theses/dinnissen-thesis.pdf
http://www.cs.cmu.edu/~mmde/mmdeaaai2002.pdf
http://www.sciencedirect.com/science/article/pii/S0921889009001481?via=ihub
http://www-personal.acfr.usyd.edu.au/nebot/publications/slam/IJRR_slam.htm

Questions