ECGR4161/5196 – July 26, 2011
Read Chapter 5.
Exam 2 contents: Labs 0, 1, 2, 3, 4, 6; Homework 1, 2, 3, 4, 5; Book Chapters 1, 2, 3, 4, 5; all class notes.

Fundamentals of Map Representation

Challenges in Map Representation:
1. Map precision must match the precision with which the robot needs to achieve its goals
2. Map precision and object representation must match the precision of the sensor data
3. Map complexity has a direct impact on computational complexity

Major Issues with Map Representation:
- Memory constraints on large and precise map representations
- How do you define objects and nodes in a map? (indoor vs. outdoor)
- What is the object? (sensor shortcomings, the state of the art in vision research, etc.)
- The real world is dynamic (humans, nature, transient objects, etc.)
- The sensor-fusion problem is yet to be solved

Reference: Siegwart, Roland. Introduction to Autonomous Mobile Robots. Cambridge, MA: The MIT Press, 2011.
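The trade-off between map precision and computational complexity above can be illustrated with a short sketch; the map size and cell resolutions here are hypothetical, chosen only to show how quickly the cell count grows:

```python
# Hypothetical illustration of the precision/complexity trade-off:
# the number of cells in a square occupancy-grid map at several resolutions.

def grid_cells(side_m: float, cell_m: float) -> int:
    """Number of cells in a square occupancy grid with side length side_m
    meters and cell size cell_m meters."""
    per_axis = round(side_m / cell_m)
    return per_axis * per_axis

# A 100 m x 100 m building at 1 m, 10 cm, and 1 cm resolution:
for cell in (1.0, 0.1, 0.01):
    print(f"cell = {cell} m -> {grid_cells(100.0, cell):,} cells")
```

Going from 1 m to 1 cm cells multiplies memory (and any per-cell computation) by a factor of 10,000, which is exactly the memory-constraint issue the slide lists.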

Monte Carlo Localization for Mobile Robots

A probabilistic approach to localization problems.
- Global position estimation: the ability to determine the robot's position in a previously learned map, given no information other than that the robot is somewhere on the map
- Local tracking: keeping track of that position over time

Notation: Z^k denotes the position measurements up to time k; x_k is the 3D state vector (x, y, θ); u_{k-1} is a known control input.
- The motion model predicts the current position using motion only (obtained by integration)
- The sensor model gives the likelihood that the robot is at location x_k given that z_k was observed
- The posterior density over x_k follows from Bayes' theorem

Example iteration:
A: Uncertainty is represented by a cloud of particles
B: The robot moves 1 meter; it is now somewhere within a 1-meter radius
C: A landmark is observed 0.5 meters away in the top-right corner
D: Re-sampling from the weighted set gives the starting point for the next iteration

Jeffrey Skelnik

Monte Carlo Localization (Particle Filtering)

Based on a collection of samples (also known as particles). Each sample consists of a possible location together with an importance weight: the probability that the robot is at that location. More samples give more accuracy, at the expense of computation time.

Algorithm:
1. Inputs: distance u_t, sensor reading z_t, sample set S_t = { (x_t(i), w_t(i)) | i = 1, ..., n }
2. For i = 1 to n do:  // first update the current set of samples
   - x_t(i) = updateDist(x_t(i), u_t)  // compute new location
   - w_t(i) = prob(z_t | x_t(i))  // compute new probability
3. S_{t+1} = {}  // then resample to get the next generation of samples
4. For i = 1 to n do:
   - Sample an index j from the distribution given by the weights in S_t
   - Add (x_t(j), w_t(j)) to S_{t+1}  // add sample j to the set of new samples
5. Return S_{t+1}
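The algorithm above can be sketched concretely. This is a minimal 1-D illustration, not the exact implementation from the slides: the world, the range sensor (measuring distance to a wall at x = 0), and the noise values are all assumed for the example:

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def mcl_step(particles, u, z, motion_noise=0.1, sensor_noise=0.5):
    """One Monte Carlo Localization iteration on 1-D positions.
    particles: list of x positions; u: odometry distance traveled;
    z: range reading to a wall at x = 0 (hypothetical sensor model)."""
    # 1. Motion update: move every sample by u, adding noise.
    moved = [x + u + random.gauss(0.0, motion_noise) for x in particles]
    # 2. Sensor update: weight each sample by the measurement likelihood.
    def likelihood(x):
        expected = x  # predicted distance to the wall at x = 0
        return math.exp(-0.5 * ((z - expected) / sensor_noise) ** 2)
    weights = [likelihood(x) for x in moved]
    # 3. Resample with probability proportional to weight.
    return random.choices(moved, weights=weights, k=len(moved))

# Start globally uncertain: particles spread uniformly over a 10 m corridor.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for u, z in [(1.0, 3.0), (1.0, 4.0), (1.0, 5.0)]:
    particles = mcl_step(particles, u, z)
mean = sum(particles) / len(particles)  # should concentrate near x = 5
```

The three steps of the loop correspond to lines 2, and 3–4 of the pseudocode: motion update, weighting, and resampling.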

Localization Using Visual Cues and Vanishing Points

The base image is compared with best-fit lines.

Flaws within the system:
- No two images are ever the same
- Needs a preexisting definition of "landmarks" to search for
- Reflections brought about by surfaces

Reference: Saurer, Olivier, Marc Pollefeys, and Friedrich Fraundorfer. "Visual Localization Using Global Visual Features and Vanishing Points." Computer Vision and Geometry Group. Web. 25 July.
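A vanishing point is where the best-fit lines of parallel scene edges intersect in the image. As a minimal sketch (the line parameters are made up, and real systems fit many lines and handle near-parallel cases):

```python
# Hypothetical sketch: the vanishing point as the intersection of two image
# lines, each given as (slope, intercept) from a best fit on edge pixels.

def vanishing_point(l1, l2):
    """Intersect two lines y = m*x + b; lines must not be parallel."""
    m1, b1 = l1
    m2, b2 = l2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

vp = vanishing_point((0.5, 10.0), (-0.25, 40.0))
```

In practice many fitted lines vote for the vanishing point (e.g. with RANSAC), since any single pair is sensitive to the fitting noise the slide's "flaws" list describes.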

Markov Localization

A probabilistic localization algorithm.
Equation (1): the Markov Localization algorithm, from [1].
Fig. 1: the 3D array used in Markov Localization, from [2].

Reference: Chapter 5 of Introduction to Autonomous Mobile Robots.
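Markov Localization maintains a belief distribution over a discretized state space (the 3D array in the figure holds one probability per (x, y, θ) cell). A minimal 1-D sketch of the idea, with a hypothetical 10-cell world of doors and walls and assumed motion/sensor probabilities:

```python
# Sketch of grid-based Markov localization in 1-D (hypothetical world).
# The belief is a probability mass over cells; each step applies the
# motion model (shift with noise) and the sensor model (per-cell likelihood).

WORLD = ['door', 'wall', 'wall', 'door', 'wall',
         'wall', 'wall', 'door', 'wall', 'wall']

def normalize(belief):
    s = sum(belief)
    return [p / s for p in belief]

def motion_update(belief, step=1, p_exact=0.8, p_stay=0.2):
    """Shift belief right by `step` cells; some mass stays put (wheel slip)."""
    n = len(belief)
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[(i + step) % n] += p_exact * p
        new[i] += p_stay * p
    return new

def sensor_update(belief, z, p_hit=0.9, p_miss=0.1):
    """Weight each cell by how well it explains observation z, then renormalize."""
    weighted = [p * (p_hit if WORLD[i] == z else p_miss)
                for i, p in enumerate(belief)]
    return normalize(weighted)

belief = [1.0 / len(WORLD)] * len(WORLD)  # uniform prior: position unknown
belief = sensor_update(belief, 'door')    # see a door
belief = motion_update(belief)            # move one cell right
belief = sensor_update(belief, 'wall')    # see a wall
```

After these steps the belief peaks on the cells just past each door, i.e. the robot has narrowed its position to a few consistent hypotheses; this is the discrete counterpart of the particle-based approach on the other slides.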

Local Positioning: Passive Sonar Beacons

- Local positioning systems use intentionally placed local (as opposed to global) 'beacons' to provide a reference within a space
- Reference systems have the chief advantage of bounding error: incremental errors do not accumulate as in proprioceptive 'dead reckoning'
- Simple triangulation provides absolute localization data
- Passive references are the simplest to employ
- Sonar is just one example; lasers, light, or radio also work
- Not effective in unprepared locales

Video: light-reflector passive beacons

References:
1) Microcontroller-Based System for 2D Localization; Casanova, Quijada, Garcia-Bermejo, Gonzalez
2) Mobile Robot Localization by Tracking Geometric Beacons; Durrant-Whyte
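The "simple triangulation" point can be made concrete: with ranges to three known beacons, the robot's 2-D position follows from a small linear system (subtracting the first range equation from the other two cancels the quadratic terms). The beacon layout below is a made-up example:

```python
# Minimal 2-D trilateration sketch (hypothetical beacon positions).
# Given squared-range equations (x - xi)^2 + (y - yi)^2 = di^2, subtracting
# the first from the others yields a 2x2 linear system A [x, y]^T = b.

def trilaterate(beacons, dists):
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # beacons must not be collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Robot at (3, 4) in a 10 m workspace with beacons at three corners:
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = (3.0, 4.0)
dists = [((true[0] - bx)**2 + (true[1] - by)**2) ** 0.5 for bx, by in beacons]
pos = trilaterate(beacons, dists)
```

With noisy real sonar ranges the same system is usually solved in a least-squares sense over more than three beacons, but the absolute (non-accumulating) nature of the fix is the same.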

TurtleBot Localization Using the Kinect and iRobot Base

The TurtleBot uses several sensors to determine its position on a previously built map or model of the area. It has a single-axis gyro, edge sensors, a Kinect sensor (IR sensors and a camera), and wheel encoders to determine position, speed, and obstacles. It can be programmed simply to avoid obstacles, to identify object shapes, or to stay within map/model boundaries.

Reference (image and info):

Localization Methods for a Mobile Robot in an Urban Environment

Outdoor urban environments pose a unique set of challenges, different from both indoor and open outdoor environments.

Open-space localization:
- Odometry: minute errors accumulate over time; the error cannot be bounded without a sensor such as GPS
- Digital compass: helps to find orientation
- The GPS receiver keeps the position uncertainty from accumulating beyond acceptable limits

Visual localization:
- On-demand updates, used only when the open-space configuration fails
- On-demand operation allows more time for image-processing operations, which increases the robustness of the overall system

Reference: ieeexplore.ieee.org/iel5/8860/29531/ pdf

Monte Carlo Localization for Mobile Robots

Key advantages: the particle representation can capture multi-modal distributions, and can thus globally localize a robot, while drastically reducing the amount of memory required.

Prediction phase: use a motion model to predict the current position of the robot in the form of a predictive PDF p(x_k | Z^{k-1}). The current state x_k depends on the previous state x_{k-1} and the known control input u_{k-1}, so the motion model is specified as a conditional density p(x_k | x_{k-1}, u_{k-1}). The predictive density over x_k is then obtained by integration:

p(x_k | Z^{k-1}) = ∫ p(x_k | x_{k-1}, u_{k-1}) p(x_{k-1} | Z^{k-1}) dx_{k-1}

Update phase: use a measurement model to incorporate information from the sensors and obtain the posterior PDF p(x_k | Z^k). Assume that the measurement z_k is conditionally independent of Z^{k-1} given x_k, and that the measurement model is given in terms of a likelihood p(z_k | x_k). The posterior density over x_k is obtained using Bayes' theorem:

p(x_k | Z^k) = p(z_k | x_k) p(x_k | Z^{k-1}) / p(z_k | Z^{k-1})

[1] Frank Dellaert, Dieter Fox, Wolfram Burgard, and Sebastian Thrun, "Monte Carlo Localization for Mobile Robots," IEEE International Conference on Robotics and Automation (ICRA99), May 1999.

Kalman Filters

Definition: A Kalman Filter (KF) is a state estimator that works on a prediction-correction basis. It depends on the reference coordinate system (x, y, φ) [1].

Principle: The KF is used to control a dynamic system and to estimate its state. It provides information that cannot be measured directly by estimating the values of those variables from indirect and noisy measurements [1].

Figure 1: Position tracking with corrections. (a) True and estimated trajectory of a square-driving robot, with measurements. (b) Posterior estimation error (solid black) and 95% confidence interval (solid blue), for runs with and without measurements (solid green and solid red, respectively).

Sources: [1] Robot Localization and Kalman Filters
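The prediction-correction cycle can be sketched in one dimension. This is a minimal illustration, not the filter from the cited report: the process and measurement noise variances, controls, and measurements are assumed values:

```python
# Minimal 1-D Kalman filter sketch (hypothetical noise values), showing the
# prediction-correction basis: predict the state from the motion command,
# then correct it with a noisy position measurement.

def kf_step(x, p, u, z, q=0.1, r=0.5):
    """One predict/correct cycle.
    x, p : state estimate and its variance
    u    : control input (commanded displacement)
    z    : noisy position measurement
    q, r : process and measurement noise variances (assumed)"""
    # Prediction: motion propagates the state and adds uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Correction: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0  # initial estimate and variance
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    x, p = kf_step(x, p, u, z)  # variance shrinks as measurements arrive
```

This mirrors the behavior in Figure 1(b): with measurements the posterior variance (and hence the confidence interval) stays bounded, whereas prediction alone lets it grow by q every step.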