AI Robotics Concepts Chapter 25

Contents: Tasks, Effectors, Sensors, Agent Architectures, Actions in continuous space

Robot Definitions
–Robot Institute of America (broad enough to fit a conveyor belt): a robot is a programmable, multifunction manipulator designed to move material, parts, tools, or specific devices through variable programmed motions for the performance of a variety of tasks.
–Russell & Norvig (excludes softbots): a robot is an active, artificial agent whose environment is the physical world.

What is 'Artificial Intelligence'? (Jim Hendler, CNN, 1999): what computers cannot do yet. E.g., NLP started as an AI field; once successful (an objective mathematical model was found), it became a field by itself. Getting out of AI (conventional wisdom):
–Artificial Neural Networks are the 2nd-best solution to any problem
–Genetic Algorithms are the 3rd-best solution to any problem
–The (non-AI) mathematical solution, once found, is the best solution to any problem
AI is about generalization to unseen and unexpected situations; once a situation is explicitly taken into account, we can no longer speak of AI.

The real world as an environment
–Inaccessible (sensors do not tell all): need to maintain a model of the world
–Nondeterministic (wheels slip, parts break): need to deal with uncertainty
–Nonepisodic (effects of an action change over time): need to handle sequential decision problems
–Dynamic: need to know when to plan and when to act reflexively
–Continuous: cannot enumerate possible actions

Tasks
–Manufacturing and material handling (not autonomous): simple machines are the best solution; they need accuracy and power, and parts are put in standard cradles and shapes
–Gofer robots (with autonomy): mobile robots (mobots) such as couriers and under-water spies
–Hazardous environments: toxic or deep-sea environments
–Telepresence and virtual reality: teleoperated robots, e.g., for bomb scares in New York
–Augmentation of human abilities: artificial fingers, eyes, exoskeletons

Components
–Rigid links and rigid joints
–End effectors (e.g., screwdrivers), connected to “actuators” (which convert electricity to movement); an actuator = a degree of freedom
–Uses: locomotion, manipulation, processing

Degrees of Freedom (DOF)
Holonomic robots have the same number of effective and controllable DOF:
–a car has 3 effective but only 2 controllable DOF
–robotic arms are holonomic; mobile robots are not (more complex mechanics)
(Figures: the Stanford manipulator, 6 DOF with 5 revolute joints + 1 prismatic joint; a non-holonomic 4-wheeled vehicle.)

Sensors
–Force, torque, tactile, range (sonar, laser scanners and range scans), image, odometry and shaft decoders (proprioceptive sensors), inertial, GPS

Locomotion
Types:
–statically stable (wheels, tracks, slow legs)
–dynamically stable (legs, hopping): less practical for most robots (expensive)
(Figures: the 4-legged “Big Dog”; robots from DFKI Bremen and Marc Raibert.)

Perception
–Maintain a belief state with a dynamic Bayesian network
–Update the belief state with the filtering equations, which combine a transition model (motion model) with a sensor model (a minimal sketch follows)
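Below is a minimal sketch of this filtering cycle on a discrete (histogram) belief; the 1D ring world with a door/no-door map and the motion and sensor noise probabilities are illustrative assumptions, not from the slides.

```python
# A minimal discrete Bayes (histogram) filter sketch for 1D robot
# localization on a ring of cells.
import numpy as np

def predict(belief, move, p_correct=0.8):
    """Transition/motion model: the robot moves `move` cells, but with
    probability 1 - p_correct it under- or over-shoots by one cell."""
    n = len(belief)
    new = np.zeros(n)
    for i in range(n):
        new[(i + move) % n]     += p_correct * belief[i]
        new[(i + move - 1) % n] += 0.1 * belief[i]
        new[(i + move + 1) % n] += 0.1 * belief[i]
    return new

def update(belief, world, z, p_hit=0.9):
    """Sensor model: measurement z matches the true cell label with
    probability p_hit. Apply Bayes rule, then normalize."""
    likelihood = np.where(world == z, p_hit, 1.0 - p_hit)
    posterior = likelihood * belief
    return posterior / posterior.sum()

world = np.array([1, 0, 0, 1, 0])    # door / no-door map (assumed)
belief = np.full(5, 0.2)             # uniform prior
belief = update(belief, world, z=1)  # sense a door
belief = predict(belief, move=1)     # move one cell right
print(belief)
```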

Localization
Problems:
–Tracking
–Global localization
–The kidnapping problem
Techniques:
–Probabilistic prediction
–Landmark models or range-scan models

Localization
Monte Carlo Localization (MCL, particle filtering; sketched below):
–starts with a uniform distribution of particles
–computes/updates each particle's probability (weight)
–generates another set of samples from the new distribution
Kalman filters:
–need local linearization, e.g., using a Taylor expansion
–uncertainty grows until landmarks are spotted
–uncertainty about the identity of landmarks leads to the data-association problem
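A minimal MCL sketch for a robot on a 1D line; the known beacon position, Gaussian motion/sensor noise, and all parameters are illustrative assumptions.

```python
# Monte Carlo Localization (particle filter) sketch: move, weight, resample.
import numpy as np

rng = np.random.default_rng(0)
BEACON = 5.0                      # known landmark position (assumed)

def mcl_step(particles, control, measurement,
             motion_noise=0.1, sensor_noise=0.5):
    # 1. Motion update: move every particle, adding noise.
    particles = particles + control + rng.normal(0, motion_noise, len(particles))
    # 2. Weight by the sensor model: Gaussian likelihood of the measured
    #    distance to the beacon given each particle's pose.
    expected = np.abs(BEACON - particles)
    w = np.exp(-0.5 * ((measurement - expected) / sensor_noise) ** 2)
    w /= w.sum()
    # 3. Resample particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.uniform(0, 10, 1000)      # global uncertainty: uniform prior
particles = mcl_step(particles, control=1.0, measurement=3.0)
print(particles.mean(), particles.std())  # belief mean and spread
```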

Monte Carlo Localization

MCL run (figures): initial global uncertainty → bimodal uncertainty (due to symmetry of the environment) → unimodal uncertainty.

Mapping
Simultaneous Localization and Mapping (SLAM):
–similar to the localization (filtering) problem, but with the map included in the posterior
–often uses Kalman filters plus landmarks (the basic Kalman predict/update cycle is sketched below)
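A minimal 1D Kalman filter predict/update sketch, i.e., the filtering core that EKF-based SLAM extends to a joint robot-pose-plus-landmarks state; all numbers are illustrative assumptions.

```python
# One Kalman filter cycle on a scalar state with Gaussian belief N(mu, var).
def kalman_step(mu, var, control, z, motion_var=0.2, sensor_var=0.4):
    # Predict: apply the motion command; uncertainty grows.
    mu_bar = mu + control
    var_bar = var + motion_var
    # Update: blend prediction and measurement z via the Kalman gain.
    k = var_bar / (var_bar + sensor_var)
    mu_new = mu_bar + k * (z - mu_bar)
    var_new = (1 - k) * var_bar
    return mu_new, var_new

mu, var = 0.0, 1.0                 # initial belief N(0, 1)
mu, var = kalman_step(mu, var, control=1.0, z=1.2)
print(mu, var)                     # posterior mean moves toward z, variance shrinks
```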

Usual mapping: other approaches are less probabilistic and more ad hoc, and they represent a trend!

Planning to Move
Two types of motion:
–point-to-point (free) motion
–compliant motion (in contact with a surface, e.g., a wall or a screw)
Two representations of planning problems:
–configuration space
–workspace
Planning algorithms:
–cell decomposition
–skeletonization

Configuration Space
The workspace is the 3D world around the robot; typically a position is needed for each joint.
The configuration space is the n-dimensional space of possible robot configurations:
–sometimes of smaller size than the workspace description, as it integrates linkage constraints
–divided into occupied space vs. free space
Sometimes mappings in both directions are required (a 2-link example is sketched below):
–direct kinematics: configuration → workspace
–inverse kinematics: workspace → configuration
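A minimal sketch of the two mappings for a planar 2-link arm; the link lengths are illustrative assumptions.

```python
# Forward and inverse kinematics for a planar 2-link arm:
# the configuration-space <-> workspace mappings.
from math import sin, cos, atan2, acos

L1, L2 = 1.0, 0.8                  # link lengths (assumed)

def forward(theta1, theta2):
    """Configuration (joint angles) -> workspace (end-effector x, y)."""
    x = L1 * cos(theta1) + L2 * cos(theta1 + theta2)
    y = L1 * sin(theta1) + L2 * sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Workspace -> configuration; returns the 'elbow-down' solution
    (the mirror 'elbow-up' solution uses -theta2)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = acos(max(-1.0, min(1.0, c2)))          # clamp for safety
    theta1 = atan2(y, x) - atan2(L2 * sin(theta2), L1 + L2 * cos(theta2))
    return theta1, theta2

t1, t2 = inverse(*forward(0.3, 0.9))
print(t1, t2)   # recovers (0.3, 0.9) up to the elbow ambiguity
```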

Workspace vs. configuration space (figures): the workspace of a robot with 2 DOF in a box with a flat hanging obstacle, and the configuration space of the same robot; white regions are free of collisions, and the dot marks the current configuration.

Robot configurations in workspace and configuration space.

Cell decomposition methods
Continuous space is hard to work with; discretization uses cell decomposition:
–Grids: easy to scan with dynamic programming (sketched below), but “gray” (partially occupied) boxes are difficult to handle, causing incompleteness or unsoundness; they may need further subdivision, which is exponential in the number of dimensions, and it is difficult to prove that a box is fully available
–Quad-trees
–Exact cell decomposition (complex cell shapes)
–Potential fields: optimization via value iteration
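A minimal sketch of dynamic programming (value iteration) over a grid decomposition, computing each free cell's distance-to-goal value, which doubles as a navigation function; the toy map is an illustrative assumption.

```python
# Value iteration on a grid cell decomposition (1 = obstacle, 0 = free).
GRID = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
GOAL = (2, 3)
INF = float("inf")

def value_iteration(grid, goal):
    rows, cols = len(grid), len(grid[0])
    value = [[INF] * cols for _ in range(rows)]
    value[goal[0]][goal[1]] = 0.0
    changed = True
    while changed:                       # sweep until values converge
        changed = False
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 1 or (r, c) == goal:
                    continue
                # Best neighbor value plus unit step cost.
                best = min((value[nr][nc] for nr, nc in
                            [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                            if 0 <= nr < rows and 0 <= nc < cols),
                           default=INF) + 1.0
                if best < value[r][c]:
                    value[r][c] = best
                    changed = True
    return value

# Greedily descending this value function yields a shortest grid path.
print(value_iteration(GRID, GOAL))
```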

Figures: the same path in workspace coordinates (the robot bends its elbow to avoid a collision); the value function and path for a discrete cell approximation of the configuration space.

Potential fields (figures): a repelling potential field pushes the robot away from obstacles; the path is found by minimizing path length plus the potential.

Skeletonization methods
Skeleton: a lower-dimensional representation of the configuration space.
–Voronoi graph (points equally distant from two or more obstacles)
–Probabilistic roadmap

Voronoi (figures): the Voronoi graph is the set of points equidistant to two or more obstacles in configuration space; a probabilistic roadmap with 400 randomly chosen points in free space.

Skeletonization methods
Skeleton: a lower-dimensional representation of the configuration space.
–Voronoi graph (points equally distant from two or more obstacles): reduces the dimension of path planning, but is difficult to compute in the configuration space, difficult for many dimensions, and leads to large/inefficient detours
–Probabilistic roadmap (PRM, sketched below): randomly generate many points in the configuration space (discarding the occupied ones) and draw lines between close points if the lines lie in free space; the distribution of points may be biased by need and promise; scales best
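A minimal PRM construction sketch in a 2D configuration space; the circular obstacle, sample count, connection radius, and segment-sampling collision check are illustrative assumptions.

```python
# Probabilistic roadmap: sample free configurations, connect near ones.
import math, random

random.seed(1)
OBSTACLES = [(0.5, 0.5, 0.2)]            # (cx, cy, radius), assumed

def free(p):
    return all(math.dist(p, (cx, cy)) > r for cx, cy, r in OBSTACLES)

def segment_free(a, b, steps=20):
    """Approximate collision check: sample points along the segment."""
    return all(free(((1 - t) * a[0] + t * b[0],
                     (1 - t) * a[1] + t * b[1]))
               for t in (i / steps for i in range(steps + 1)))

def build_prm(n=200, radius=0.15):
    # 1. Sample n collision-free configurations in the unit square.
    nodes = []
    while len(nodes) < n:
        p = (random.random(), random.random())
        if free(p):
            nodes.append(p)
    # 2. Connect pairs of nearby nodes whose joining segment is free.
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(nodes[i], nodes[j]) < radius
             and segment_free(nodes[i], nodes[j])]
    return nodes, edges

nodes, edges = build_prm()
print(len(nodes), "nodes,", len(edges), "edges")
# A graph search (e.g., A*) over this roadmap then yields a path.
```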

Planning Uncertain Movements
Errors come:
–from the stochastic model
–from the approximation algorithms (e.g., particle filtering)
Common simplification: assume the “most likely state”; this “works” when uncertainty is limited.
Solutions:
–with observable states: Markov Decision Processes, which return an optimal policy (an action in each state), called a “navigation function” (cf. the value-iteration sketch above)
–with partially observable states: POMDPs, which create policies based on distributions of state probabilities; not practical

Robust Methods
Assume a bounded amount of uncertainty:
–an interval of possibilities
–the plan must work for every error within that interval
–“conformant planning” is the state-independent (sensorless) planning of Chapter 12
Fine Motion Planning (FMP):
–assumes motion in the proximity of static objects, based on sensor feedback, where the motion is too fine (sensitive) for the robot's accuracy
–uses “guarded motions” (a guard is a predicate stating when to end the motion), chained with “compliant motions” (which slide along a surface if needed)
–constructing an FMP uses the configuration space, the cone of uncertainty in motion directions, and the sensors
–generates a set of steps guaranteeing success in the worst case, though it may not be the most efficient

Fine Motion Planning (figure): a 2-dimensional environment, the velocity uncertainty cone, and the envelope of possible robot motions.

Fine motion plan (figures): after the first motion command, no matter the error, the final configuration is known to be left of the hole; after the 2nd motion command, even with error, the peg gets into the hole.

Moving
Computing forces and inertia:
–the dynamic state = position + velocities
–kinematics and velocities are related via differential equations
A controller keeps the robot on track:
–a reference controller keeps the robot on a reference path
–controllers optimizing a cost function are optimal controllers (e.g., MDP policies)

Control (figures): the robot tries to follow the path shown in gray under
a) proportional control with gain factor 1
b) proportional control with gain 0.1
c) proportional-derivative control with gain 0.3 for the proportional term and 0.8 for the derivative term

Controllers
Control that is proportional to displacement: P-controllers
–a_t = K_P (y(t) − x_t), where y(t) is the desired location and K_P is the gain parameter
–assume small perturbations
–a stable controller keeps the error y(t) − x_t bounded (e.g., P-controllers)
–a strictly stable controller can return to the reference path

P(I)D-Controllers
Proportional-derivative (PD) controllers:
–a_t = K_P (y(t) − x_t) + K_D d(y(t) − x_t)/dt
Proportional-integral-derivative (PID) controllers:
–a_t = K_P (y(t) − x_t) + K_I ∫(y(t) − x_t) dt + K_D d(y(t) − x_t)/dt
The integral term corrects systematic errors (a discrete-time sketch follows).
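A minimal discrete-time PID sketch, accumulating the integral and approximating the derivative by finite differences; the gains and the toy first-order plant are illustrative assumptions, not the figure's values.

```python
# Discrete-time PID controller: a_t = Kp*e + Ki*sum(e)*dt + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired, measured):
        error = desired - measured            # y(t) - x_t
        self.integral += error * self.dt      # accumulate the integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.1)
x = 0.0                                       # toy plant state (assumed)
for _ in range(50):
    a = pid.step(desired=1.0, measured=x)     # control action a_t
    x += a * 0.1                              # plant integrates the action
print(round(x, 3))                            # approaches the setpoint 1.0
```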

Potential fields for control
–have many local minima and do not depend on velocities
–the robot descends a potential field composed of repelling forces exerted by the obstacles and an attracting force toward the goal configuration (sketched below)
(Figures: a successful path; a path stuck in a local optimum.)
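A minimal potential-field control sketch in 2D, with a quadratic attractive potential toward the goal and a standard repulsive potential active within an influence radius; the obstacle, gains, and radius are illustrative assumptions.

```python
# Gradient descent on an attractive + repulsive potential field.
import numpy as np

GOAL = np.array([5.0, 5.0])
OBSTACLES = [np.array([2.5, 2.5])]
K_ATT, K_REP, RADIUS = 1.0, 0.5, 1.5        # assumed gains / influence radius

def gradient(q):
    g = K_ATT * (q - GOAL)                  # attractive: pulls toward goal
    for o in OBSTACLES:
        d = np.linalg.norm(q - o)
        if 1e-9 < d < RADIUS:               # repulsive only inside RADIUS
            g += K_REP * (1 / RADIUS - 1 / d) / d**2 * (q - o) / d
    return g

q = np.array([0.0, 0.5])                    # start configuration
for _ in range(300):                        # descend the potential field
    q = q - 0.05 * gradient(q)
print(q)                                    # near GOAL unless trapped in a local minimum
```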

Reactive Control
Reflex agent design = reactive control:
–for intractably many DOFs or too few sensors
–simple rules (e.g., move 3 legs at a time) yield simple control
–emergent behavior (no explicit environment model)
(Figures: Genghis, a hexapod robot; an augmented finite state machine (AFSM) for the control of a single leg: if the leg is stuck, it is moved increasingly higher.)

Augmented Finite State Machines
Finite state machines augmented with timers; timers enable state changes after preprogrammed periods of time.
AFSMs let behaviors override each other (sketched below):
–a suppression signal overrides the normal input signal
–an inhibition signal causes the output to be completely inhibited
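A minimal AFSM sketch showing the timer augmentation plus suppression and inhibition wiring; the leg states, timeout, and signals are illustrative assumptions.

```python
# Finite state machine augmented with a timer, plus suppression/inhibition.
import time

class AFSM:
    def __init__(self, states, timeout):
        self.states = states              # cyclic list, e.g. leg phases
        self.i = 0
        self.timeout = timeout            # seconds before auto-transition
        self.t0 = time.monotonic()

    def step(self, input_signal, suppress=None, inhibit=False):
        # Suppression: a higher behavior's signal replaces the input.
        signal = suppress if suppress is not None else input_signal
        # Timer augmentation: change state after `timeout` seconds.
        if time.monotonic() - self.t0 >= self.timeout:
            self.i = (self.i + 1) % len(self.states)
            self.t0 = time.monotonic()
        output = (self.states[self.i], signal)
        # Inhibition: the output is completely cut off.
        return None if inhibit else output

leg = AFSM(states=["up", "forward", "down", "back"], timeout=0.2)
print(leg.step("walk"))                       # ('up', 'walk')
print(leg.step("walk", suppress="retreat"))   # suppressed: input overridden
print(leg.step("walk", inhibit=True))         # None: output inhibited
```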

Robotic Software Architectures
A software architecture is a methodology for structuring algorithms:
–combining reactive with deliberative control; reactive control is sensor-driven and suited to low-level decisions
–this combination leads to hybrid architectures
Reactive architectures:
–Subsumption architecture (Brooks, 1986): each layer's goal subsumes that of the underlying layers; bottom-up design (e.g., avoid objects → wander → explore); built from composable augmented finite state machines (AFSMs, see the hexapod above); assumes good sensors, focuses on one task, and grows complex (a layering sketch follows)
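A minimal subsumption-style layering sketch: the safety reflex suppresses everything, and otherwise the highest deliberate layer that produces an action wins; the behaviors and sensor fields are illustrative assumptions.

```python
# Layered behaviors: each produces an action or defers to a lower layer.
def avoid(sensors):
    """Lowest-level competence: safety reflex."""
    if sensors.get("obstacle_near"):
        return "turn_away"
    return None                       # defer

def explore(sensors):
    """Higher layer: head toward a goal, if one is known."""
    if sensors.get("goal_visible"):
        return "go_to_goal"
    return None

def wander(sensors):
    """Default layer: random walking when nothing better to do."""
    return "wander_randomly"

def arbitrate(sensors, layers=(avoid, explore, wander)):
    """The first layer to produce an action suppresses the rest."""
    for behavior in layers:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"obstacle_near": False, "goal_visible": True}))  # go_to_goal
print(arbitrate({"obstacle_near": True}))                         # turn_away
```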

Three-layer architecture
The most popular hybrid architecture:
–reactive layer: low-level control, millisecond cycle times
–executive layer: selects which reactive control to invoke, following waypoints proposed by the deliberative layer
–deliberative layer: planning, minutes per cycle
Other possible layers:
–user-interface layer
–inter-robot interface layer

Summary
Chapters to read:
–11.4 GraphPlan
–12 Planning and Acting in the Real World
–14 Bayesian Networks
–15 Filtering, HMMs, Kalman filters, DBNs
–17 Making Decisions: 17.1 to 17.4