Planning Strategies RSS II Lecture 6 September 21, 2005

Today
Strategies
– What do we want from the planner?
Tools
– Robust motion planning
– Exploration algorithms

Planner Capabilities
Motion Planning
– How do we get from the hangar to a brick and back?
Robust Motion Planning
– How do we avoid getting lost?
Localization Recovery
– We're lost. How do we recover?
Exploration
– Our map is incomplete/wrong. How do we get more data to build a better one?

Numerical Potential Functions
We can compute the "true" potential at each point x by integrating the forces along the desired path from the goal to x. Let's use Dijkstra's algorithm, with the cost of an action given by C(x) = F(x) = ∇U_att(x) - ∇U_rep(x).

Numerical Potential Functions
1. Initialize all states with value ∞
2. Label the goal with value 0
3. Update all states so that f(x) = min(c(x,y) + f(y))
4. Repeat until all states labelled
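A minimal sketch of this labelling procedure on a 4-connected occupancy grid, assuming boolean obstacle cells and unit cost per move (the grid representation and cost function are illustrative, not part of the lecture):

```python
import heapq

def numerical_potential(occupied, goal):
    """Dijkstra-style wavefront from the goal.

    occupied: 2D list of booleans, True where the cell is an obstacle
    goal: (row, col) tuple on a free cell
    Returns f, a dict mapping each free cell to its cost-to-goal.
    """
    rows, cols = len(occupied), len(occupied[0])
    f = {(r, c): float('inf')                      # 1. initialize all states with inf
         for r in range(rows) for c in range(cols)
         if not occupied[r][c]}
    f[goal] = 0.0                                  # 2. label the goal with 0
    frontier = [(0.0, goal)]
    while frontier:                                # 3-4. relax until all states labelled
        cost, (r, c) = heapq.heappop(frontier)
        if cost > f[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (r + dr, c + dc)
            if nbr in f and cost + 1.0 < f[nbr]:   # c(x, y) = 1 per move (assumed)
                f[nbr] = cost + 1.0
                heapq.heappush(frontier, (f[nbr], nbr))
    return f
```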

Uniform Cost Regression
1. Initialize all states with value ∞
2. Label the goal with value 0
3. Update all states so that f(x) = min(c(x,y) + f(y))
4. Repeat
After planning, for each state, just look at the neighbours and move to the cheapest one, i.e., just roll downhill.
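A sketch of the "roll downhill" step, reusing the f computed by the potential-function sketch above (the 4-connected neighbourhood is an assumption):

```python
def next_step(f, state):
    """Move to the cheapest labelled neighbour of the current cell."""
    r, c = state
    nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    nbrs = [n for n in nbrs if n in f]
    return min(nbrs, key=lambda n: f[n])   # just roll downhill
```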

The Output Value Function

Motion is Uncertain
Motion model is Gaussian about translation and rotation.
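One common way to realize such a model is odometry-style sampling: perturb the commanded translation and rotation with zero-mean Gaussian noise. A minimal sketch, where the noise standard deviations are illustrative values, not the lecture's:

```python
import math, random

def sample_motion(pose, delta_trans, delta_rot,
                  sigma_trans=0.05, sigma_rot=0.02):
    """Propagate a pose (x, y, theta) through a noisy translation + rotation."""
    x, y, theta = pose
    trans = delta_trans + random.gauss(0.0, sigma_trans)       # noisy translation
    rot = delta_rot + random.gauss(0.0, sigma_rot)              # noisy rotation
    x += trans * math.cos(theta)
    y += trans * math.sin(theta)
    theta = (theta + rot + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi)
    return (x, y, theta)
```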

Brittle Navigation
(figure: estimated path vs. actual path)

So what happened?

Sensors are Uncertain
(figure: landmarks L1, L2, L3)

Sensors can be blind
(figure: landmarks L1, L2, L3; maximum sensor radius)

GPS and Uncertainty
(figure panels: 8 satellites, 3 satellites, 1 satellite; gray level intensity corresponds to pose certainty)

Coastal Navigation
Represent state using entropy: a probabilistic measure of uncertainty.
1. Initialize all states with value -∞
2. Label the goal states with value f(y) = E_y[r]
3. Update all states so that f(x) = max(c(x,y) + f(y))
4. Repeat
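For reference, the entropy of a discrete belief b is H(b) = -Σ_x b(x) log b(x). A minimal sketch, assuming the belief is given as an iterable of probabilities:

```python
import math

def entropy(belief):
    """Entropy of a discrete belief: a probabilistic measure of uncertainty."""
    return -sum(p * math.log(p) for p in belief if p > 0.0)
```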

Model Parameters: Reward Function R(x̃)
(figure: low-dimensional beliefs x̃_1, x̃_2, x̃_3 and full belief p(x))
Back-project x̃ to the high-dimensional belief p(x), then compute the expected reward from that belief.

Model Parameters: Transition Function
What is f(x̃_i, a)?
(figure: in the full dimension, belief p(x) becomes p'(x) under action a and observation z_1; in the low dimension, x̃_i becomes x̃_j; the figure contrasts deterministic and stochastic processes)

Model Parameters: Use Forward Model
1. For each belief x̃_i and action a
2. Generate full belief p(x)
3. For each observation z
   1. Compute p(z|x, a)
   2. Compute posterior p'(x) and projection x̃_j
   3. Set T(x̃_i, a, x̃_j) to p(z|x̃_i, a)
(figure: x̃_i transitions under action a and observations z_1, z_2 to beliefs x̃_j, x̃_k, x̃_q)
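A sketch of this forward-model loop. The helpers back_project, predict, obs_likelihood, bayes_update, and project are hypothetical callables standing in for the belief representation's operations, and the low-dimensional beliefs are assumed to be hashable identifiers:

```python
from collections import defaultdict

def build_transition_model(low_dim_beliefs, actions, observations,
                           back_project, predict, obs_likelihood,
                           bayes_update, project):
    """T[(x_i, a)][x_j] = probability of landing in low-dimensional belief x_j."""
    T = defaultdict(lambda: defaultdict(float))
    for x_i in low_dim_beliefs:
        for a in actions:
            p_x = predict(back_project(x_i), a)     # full belief p(x) after action a
            for z in observations:
                p_z = obs_likelihood(z, p_x)        # p(z | x_i, a)
                posterior = bayes_update(p_x, z)    # posterior p'(x)
                x_j = project(posterior)            # projection back to low dimension
                T[(x_i, a)][x_j] += p_z             # accumulate transition probability
    return T
```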

A Better Trajectory

Localization Recovery
Something horrible has happened. Where is the robot?

Localization Recovery

If uncertainty is too large:
– Hypothesize an action a
– Change each possible position p to p' as if the robot were at p and took action a
– Hypothesize what measurement you might get
– Compute a new set of possible positions
– Choose the action with the most certain posterior set of positions
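A sketch of this active-relocalization loop over a set of position hypotheses (particles). The callables simulate, expected_measurement, measurement_model, resample, and spread are hypothetical stand-ins for the robot's motion model, sensor model, and uncertainty measure:

```python
def best_recovery_action(particles, actions, simulate, expected_measurement,
                         measurement_model, resample, spread):
    """Pick the action whose hypothesized posterior particle set is most certain."""
    best_action, best_spread = None, float('inf')
    for a in actions:
        moved = [simulate(p, a) for p in particles]       # p -> p' under action a
        z = expected_measurement(moved)                   # hypothesized measurement
        weights = [measurement_model(z, p) for p in moved]
        posterior = resample(moved, weights)              # new set of possible positions
        s = spread(posterior)                             # e.g. entropy or covariance trace
        if s < best_spread:
            best_action, best_spread = a, s
    return best_action
```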

Localization Recovery

Exploration
(figure: frontier of unexplored space)

Exploration
If pose uncertainty is small:
– Choose to visit nearest frontier
Else:
– Use relocalization algorithm to reduce uncertainty
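A sketch of the nearest-frontier choice on an occupancy grid, where a frontier cell is a free cell adjacent to unknown space. The cell labels and the Manhattan distance metric are illustrative assumptions:

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def nearest_frontier(grid, robot_cell):
    """Return the frontier cell closest to the robot, or None if fully explored."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN
                   for nr, nc in nbrs):
                frontiers.append((r, c))            # free cell bordering unknown space
    if not frontiers:
        return None
    rr, rc = robot_cell
    return min(frontiers, key=lambda f: abs(f[0] - rr) + abs(f[1] - rc))
```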

Exploration
If there is a frontier that we
– can visit,
– and, even if we get no useful measurements in the unexplored space,
– can still return to the hangar,
Then:
– Visit that frontier
Else:
– Use relocalization algorithm to first reduce uncertainty

Carmen Module APIs
Who should decide what the planner should do?
– Get blocks? Relocalize? Explore?
– Who should be in charge of the robot?
What knowledge should the module require?
– Features? Distances? Landmarks? Directions? Maps?
– Remaining power? GPS strength? Wireless network strength?
What questions should the module answer?
– Can we get to a brick? How far? Will we get lost?
– Can we find the hangar? With what confidence?
– How many bricks can we collect?
– Where are the unexplored regions?
How should the planner integrate multiple sensor streams?
– How should the planner handle vision data? GPS? Laser data? WiFi data?
Spiral development
– Put simple APIs in place, even if performance is stubbed
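In the spirit of "put simple APIs in place, even if performance is stubbed", here is a hypothetical sketch of what such a planner module interface could look like. All class, method, and field names are assumptions for illustration, not the actual Carmen API:

```python
class PlannerModule:
    """Stubbed planner interface: answers the questions above with placeholders."""

    def __init__(self, occupancy_map, pose_estimator):
        self.map = occupancy_map
        self.pose_estimator = pose_estimator

    def can_reach_brick(self):
        """Can we get to a brick? How far? Will we get lost?"""
        return {"reachable": False, "distance_m": None, "confidence": 0.0}  # stub

    def can_find_hangar(self):
        """Can we find the hangar, and with what confidence?"""
        return {"reachable": False, "confidence": 0.0}  # stub

    def unexplored_regions(self):
        """Where are the unexplored regions?"""
        return []  # stub: list of frontier regions

    def decide(self):
        """Top-level decision: get blocks, relocalize, or explore."""
        return "explore"  # stub
```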

Conclusion
– Always use the full position estimate from the estimation algorithms
– Make decisions that respect current uncertainty
– Incorporate future uncertainty into plans