Where’s the Robot? Ross Mead, April 3rd, 2008

Where’s the Robot? Given an initial estimate P(0) of the robot’s location in the configuration space, maintain an ongoing estimate of the robot pose P(t) at time t with respect to the map.

Configuration Space (C-space) A set of “reachable” areas constructed from knowledge of both the robot and the world. How to create it… – abstract the robot as a point object – enlarge the obstacles to account for the robot’s footprint and degrees-of-freedom

Configuration Space (C-space) Footprint – the amount of space a robot occupies Degrees-of-Freedom (DoF) – number of variables necessary to fully describe a robot’s “pose” in space – How many DoF does the Create have?

Configuration Space (C-space) Obstacles, free space, and the robot (treated as a point object) with pose (x, y, θ)

“We don’t need no stinkin’ sensors!” Send a movement command to the robot. Assume command was successful… – set pose to expected pose following the command But… robot movement is not perfect… – imperfect hardware (yes… blame hardware… ) – wheel slippage – discrepancies in wheel circumferences – skid steering

Reasons for Motion Errors (figure: ideal case vs. a bump, different wheel diameters, carpet, and many more…)

What Do We Know About the World? Proprioception – sensing things about one’s own internal status. Common proprioceptive sensors are: – thermal – Hall effect – optical – contact Exteroception – sensing things about the environment. Common exteroceptive sensors are: – electromagnetic spectrum – sound – touch – smell/odor – temperature – range – attitude (inclination)

Overview of Localization

Locomotion Power of motion from place to place. Differential drive (Pioneer 2-DX, iRobot Create) Car drive (Ackerman steering) Synchronous drive (B21) Mecanum wheels, XR4000

ICC For rolling motion to occur, each wheel has to move along its y-axis. Instantaneous Center of Curvature (ICC)

Differential Drive Differences in the velocities of wheels determines the turning angle of the robot. Forward Kinematics – Given the wheels’ velocities (or positions), what is the robot’s velocity/position ?

Motion Model Kinematics – the effect of a robot’s geometry on its motion: if the motors move this much, where will the robot be? Two common types of motion models: – Odometry-based – Velocity-based (“ded reckoning”) Odometry-based models are used when systems are equipped with wheel encoders. Velocity-based models have to be applied when no wheel encoders are available… – calculate the new pose based on velocities and time elapsed – in the case of the Creates, we focus on this model

Ded Reckoning “Do you mean, ‘dead reckoning’”?

Ded Reckoning That’s not a typo… – “ded reckoning” = deduced reckoning reckon → to determine by reference to a fixed basis Keep track of the current position by noting how far the robot has traveled on a specific heading… – used for maritime navigation – uses proprioceptive sensors – in the case of the Creates, we utilize velocity control

Ded Reckoning
1. Specify system measurements… consider possible coordinate systems.
2. Determine the point (radius) about which the robot is turning… to minimize wheel slippage, this point (the ICC) must lie at the intersection of the wheels’ axles.
3. Determine the angular velocity ω at which the robot is turning, to obtain the robot velocity v… each wheel must be traveling at the same ω about the ICC.
4. Integrate to find the position P(t)… the ICC changes over time t.

Ded Reckoning (figure: a differential-drive robot with pose (x, y, θ), wheel velocities v_l and v_r, and turn radius R about the ICC) Of these five, what’s known and what’s not? Thus, with b the distance between the wheels: ω = (v_r − v_l) / b and R = (b/2)(v_r + v_l) / (v_r − v_l) (v_r and v_l in mm/sec)

Ded Reckoning (figure: rotating the pose P(t) about the ICC, at radius R, through angle ωδt to obtain P(t+δt))

Ded Reckoning (figure: the same rotation of P(t) about the ICC to obtain P(t+δt)) This is kinematics… Sucks,… don’t it…?

Σ (Adding It All Up) Update the wheel velocities and, thus, the robot velocity information at each sensor update… How large can/should a single segment be?

Example Move Function
// move x centimeters (x > 0),
// vel in mm/sec (-500 to 500)
void move(float x, int vel)  // move is approximate
{
    int dist = (int)(x * 10.0);  // change cm to mm
    create_distance();           // update Create internal distance
    gc_distance = 0;             // and initialize IC's distance global
    msleep(50L);                 // pause before next signal to Create
    if (dist != 0) {
        create_drive_straight(vel);
        if (vel > 0)
            while (gc_distance < dist) create_distance();
        else
            while (gc_distance > -dist) create_distance();
        msleep(50L);             // pause between distance checks
    }
    create_stop();               // stop
}  // move(float, int)

Example Turn Function
// deg > 0 turn left (CCW),
// deg < 0 turn right (CW),
// vel in mm/sec (0 to 500)
void turn(int deg, int vel)  // turn is approximate
{
    create_angle();
    msleep(50L);             // initialize angle
    gc_total_angle = 0;      // and update IC's angle global
    if (deg > 0) {
        create_spin_CCW(vel);
        while (gc_total_angle < deg) {
            create_angle();
            msleep(50L);
        }
    } else {
        create_spin_CW(vel);
        while (gc_total_angle > deg) {
            create_angle();
            msleep(50L);
        }
    }
    create_stop();           // stop
}  // turn(int, int)

Putting It All Together How can we modify move(..) and turn(..) to implement ded reckoning to maintain the robot’s pose at all times? – I leave this to you as an exercise…

Types of Mobile Robot Bases Holonomic A robot is holonomic if it can instantaneously move in all available directions to change its pose. Non-holonomic A robot is non-holonomic if it cannot instantaneously move in all available directions to change its pose.

Types of Mobile Robot Bases Ackerman Drive – typical car steering – non-holonomic

Types of Mobile Robot Bases Omni Drive – wheel capable of rolling in any direction – robot can change direction without rotating base Synchro Drive

Types of Mobile Robot Bases

Dead Reckoning Ded reckoning makes hefty assumptions… – perfect traction with ground (no slippage) – identical wheel circumferences – ignores surface area of wheels (no skid steering) – sensor error and uncertainty…

Dead Reckoning

What’s the Problem? Sensors are the fundamental input for the process of perception… – therefore, the degree to which sensors can discriminate the world state is critical Sensor Aliasing – a many-to-one mapping from environmental states to the robot’s perceptual inputs – the amount of information is generally insufficient to identify the robot’s position from a single reading

What’s the Problem? Sensor Noise – adds a limitation on the consistency of sensor readings – often the source of noise is that some environmental features are not captured by the robot’s representation Dynamic Environments Unanticipated Events Obstacle Avoidance

Localization Where am I? Robot tracking is a local problem; robot kidnapping is a global problem. Even perfect sensing gives only local data, and direct map-matching can be overwhelming.

Monte Carlo Localization (MCL) Key idea: keep track of a probability distribution for where the robot might be in the known map. (Figures: the distribution evolves from an initial uniform distribution, through intermediate stages, to the final distribution; legend: black → blue → red → cyan.) But how?

Deriving MCL Bag o’ tricks:
Bayes’ rule: p(A | B) = p(B | A) p(A) / p(B)
Definition of marginal probability: p(A) = Σ_B p(A | B) p(B), equivalently p(A) = Σ_B p(A ∧ B)
Definition of conditional probability: p(A ∧ B) = p(A | B) p(B)
What are these saying?
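A tiny numeric check of how these identities fit together; the scenario (a binary hypothesis B and evidence A) and the function name are hypothetical, not from the slides:

```c
#include <math.h>

/* Bayes' rule for a binary hypothesis B given evidence A:
 * p(B|A) = p(A|B) p(B) / p(A), where the marginal is
 * p(A) = p(A|B) p(B) + p(A|~B) p(~B). */
double posterior(double p_B, double p_A_given_B, double p_A_given_notB) {
    double p_A = p_A_given_B * p_B + p_A_given_notB * (1.0 - p_B); /* marginal */
    return p_A_given_B * p_B / p_A;                                /* Bayes' rule */
}
```

This is exactly the pattern MCL uses: a prior distribution over poses, reweighted by the sensor likelihood and then renormalized.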

Setting Up the Problem The robot does (or can be modeled to) alternate between sensing -- getting range observations (“local maps”) o_1, o_2, o_3, …, o_(t-1), o_t -- and acting -- driving around: a_1, a_2, a_3, …, a_(t-1). We do know (or will know): m -- the map of the environment; p(o | r, m) -- the sensor model; p(r_new | r_old, a, m) -- the accuracy of performing action a, i.e., the motion model. We want to know P(t) -- the pose of the robot at time t -- but we’ll settle for p(P(t)) -- a probability distribution for P(t)! What kind of thing is p(P(t))?

Sensor Model p(o | r, m): given the map m and location r, the probability of each potential observation o (e.g., p(o | r, m) = 0.95 for the expected reading and 0.05 for an unlikely one). Action model p(r_new | r_old, a, m): “probabilistic kinematics” -- encoder uncertainty; red lines indicate the commanded action, and the cloud indicates the likelihood of various final states.

Probabilistic Kinematics Key question: we may know where our robot is supposed to be, but in reality it might be somewhere else… (figure: wheel velocities V_L(t) and V_R(t), a starting position, the supposed final pose, and lots of possibilities for the actual final pose) What should we do?

Robot Models: How-To For p(o | r, m) -- the sensor model -- and p(r_new | r_old, a, m) -- the action model: (0) Model the physics of the sensors/actuators (with error estimates): theoretical modeling. (1) Measure lots of sensing/action results and create a model from them: empirical modeling. Take N measurements, find the mean (m) and standard deviation (s), and then use a Gaussian model, or some other easily-manipulated (probability?) model, e.g. the uniform model p(x) = 1 if |x − m| ≤ s, 0 otherwise, or the triangular model p(x) = 1 − |x − m|/s if |x − m| ≤ s, 0 otherwise.

Running Around in Squares
1. Create a program that will run your robot in a square (~2 m to a side), pausing after each side before turning and proceeding.
2. For 10 runs, collect both the odometric estimates of where the robot thinks it is and where the robot actually is after each side. You should end up with two sets of 30 angle measurements and 40 length measurements: one set from odometry and one from “ground truth.”
3. Find the mean and the standard deviation of the differences between odometry and ground truth, for the angles and for the lengths -- this is the robot’s motion uncertainty model.
(figure: the commanded square, with “start” and “end” poses that need not coincide) This provides a probabilistic kinematic model. MODEL the error in order to reason about it!

Monte Carlo Localization (MCL) The algorithm, built up step by step (a “particle filter” representation of a probability distribution):
Start by assuming p(r_0) is the uniform distribution: take K samples of r_0 and weight each with an importance factor (“probability”) of 1/K. (Dimensionality?!)
Get the current sensor observation, o_1.
For each sample point r_0, multiply its importance factor by p(o_1 | r_0, m).
Normalize (make sure the importance factors add to 1). You now have an approximation of p(r_1 | o_1, …, m), and the distribution is no longer uniform. (How did this change?)
Create new samples by dividing up large clumps: each point spawns new ones in proportion to its importance factor.
The robot moves, a_1. For each sample r_1, move it according to the model p(r_2 | a_1, r_1, m). (Where do the purple ones go?)
Increase all the indices by 1 and keep going!

“Monte Carlo” Localization -- refers to the resampling of the distribution each time a new observation is integrated Rhino Minerva Monte Carlo Localization (MCL)


Taking a step back… Plusses: – simple algorithm – well-motivated via probabilistic reasoning – naturally fuses data from very disparate sensors! – doesn’t require control of the robot: passive localization – it’s an any-time algorithm – it has worked well in practice! Drawbacks: – any-time may not be enough! – empty distributions – doesn’t use the robot control available: active localization

Questions? Thanks!