Lecture 2: Reactive Systems
Gal A. Kaminka
Introduction to Robots and Multi-Robot Systems: Agents in Physical and Virtual Environments

Previously, on Robots…
- Robots are agents in physical or virtual environments: persistent, situated, responsive
- Environments have different characteristics: static/dynamic, accessible?, deterministic?, …
- Since we are lazy, we want robots to do things for us
- Robots must consider the task when deciding what to do
- Action-selection problem: what to do now in service of the task?

Physical Environments
- Dynamic
- Non-deterministic
- Inaccessible
- Continuous

A Crash Course in AI Planning
- Planning: an approach to the action-selection problem
- Very long history, since the very beginning of AI
- 1971, seminal paper: STRIPS (Fikes and Nilsson)
- Still cited and taught today, despite much progress
- STRIPS was originally developed for SRI's robot, Shakey
- This is ironic: later on, STRIPS planning was rejected for robot control

AI Planning Definition: State
A state is a set of ground predicates describing the world, for example:
- { On(A,table), On(B,table), On(C,table), In-hand(nil) }: blocks A, B, C on the table, hand empty
- { On(A,B), On(B,table), On(C,table), In-hand(D) }: A stacked on B, block D held in the hand
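Such a state can be written down directly as a set of ground predicates. Below is a minimal Python sketch; the tuple encoding and the satisfies helper are my own illustration, not notation from the lecture:

```python
# A STRIPS state: a set of ground predicates, each encoded as a tuple
# ("PredicateName", arg1, arg2, ...).
initial_state = frozenset({
    ("On", "A", "table"),
    ("On", "B", "table"),
    ("On", "C", "table"),
    ("In-hand", "nil"),
})

goal_state = frozenset({
    ("On", "C", "B"),
    ("On", "B", "A"),
})

def satisfies(state, goal):
    # A goal holds when every goal predicate appears in the state.
    return goal <= state

print(satisfies(initial_state, goal_state))  # False: nothing is stacked yet
```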

AI Planning Definition: Operators
- Operators change the state of the world
- Each operator has preconditions, an add-list, and a delete-list

Pick-A(?x)
  Preconditions: On(A, ?x), In-hand(nil)
  Add: In-hand(A)
  Delete: In-hand(nil), On(A, ?x)

Put-A(?y)
  Preconditions: In-hand(A), not On(?x, ?y)
  Add: On(A, ?y), In-hand(nil)
  Delete: In-hand(A)
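The precondition/add/delete semantics boil down to one set operation: an applicable operator produces new_state = (state - delete-list) ∪ add-list. A hedged Python sketch of that rule; the Operator class and the instantiated Pick-A(table) below are illustrations, not code from the lecture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset  # predicates that must hold in the current state
    add_list: frozenset       # predicates that become true
    delete_list: frozenset    # predicates that become false

def applicable(op, state):
    return op.preconditions <= state

def apply_operator(op, state):
    # STRIPS semantics: remove the delete-list, then add the add-list.
    assert applicable(op, state)
    return (state - op.delete_list) | op.add_list

# Pick-A(?x) instantiated with ?x = table:
pick_A_table = Operator(
    name="Pick-A(table)",
    preconditions=frozenset({("On", "A", "table"), ("In-hand", "nil")}),
    add_list=frozenset({("In-hand", "A")}),
    delete_list=frozenset({("In-hand", "nil"), ("On", "A", "table")}),
)
```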

STRIPS Planning
Given:
- Initial state of the world
- Operators
- Goal state
Produce:
- Plan: an ordered list of instantiated operators that will change the world from the initial state to the goal state
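The simplest possible realization of this contract is a blind breadth-first search over states. The sketch below assumes the Operator representation from the previous snippet and only illustrates the input/output relationship; real STRIPS-style planners use heuristics and much smarter search:

```python
from collections import deque

def strips_plan(initial_state, operators, goal):
    """Return an ordered list of operators leading from initial_state to a
    state satisfying goal, or None if no such plan exists."""
    start = frozenset(initial_state)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                        # all goal predicates hold
            return plan
        for op in operators:
            if op.preconditions <= state:        # operator is applicable here
                nxt = (state - op.delete_list) | op.add_list
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [op]))
    return None
```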

Planning Example
[Figure: blocks-world panels showing the Initial State, then the states After Pick-B(table), After Put-B(A), After Pick-C(table), After Put-C(B), and the Goal State.]

Planning on robots
- Sense the initial state using sensors
- Create a full plan given the goal state (given task)
- Feed the plan, step by step, to the motors
- No need to sense again
- What's wrong with this? (Hint: think about Schoppers' paper)
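Written out as code, this scheme is open-loop execution. The sketch below uses hypothetical sense_world and execute primitives plus the strips_plan sketch above, and it makes the weakness visible: nothing is sensed after the first step.

```python
def plan_then_execute(goal, operators):
    state = sense_world()                      # sense the initial state once
    plan = strips_plan(state, operators, goal)
    if plan is None:
        return                                 # no plan found for this goal
    for op in plan:
        execute(op)                            # feed each operator to the motors
        # No sensing here: if the world drifts away from the operator model,
        # the remaining steps are applied to a state that no longer exists.
```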

Deliberative Control
- Deliberative: has internal state (typically a model of the world)
- Uses this internal state to make decisions
- Decisions made between alternatives
- Control cycle: Sense → Model → Think → Act
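One way to picture the Sense-Model-Think-Act cycle is as a closed loop around a persistent world model. This is only a schematic sketch; every function in it (sense_world, update_model, choose_action, execute) is a placeholder, not an API from the lecture:

```python
def deliberative_loop(goal):
    world_model = {}                                       # internal state: the robot's model of the world
    while True:
        percept = sense_world()                            # Sense
        world_model = update_model(world_model, percept)   # Model
        action = choose_action(world_model, goal)          # Think: compare alternatives
        if action is None:
            break                                          # nothing left to do
        execute(action)                                    # Act
```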

When plans go wrong
- Dynamic environment: the state changes even if no operator is applied
- Non-deterministic: the state changes not according to the operator specs
- Inaccessible: cannot sense the entire state of the world
- Continuous: the predicate-based description of the world is discrete

Reactive control
- Reactive: no internal state
- Direct connection from sensors to actions
- S-R (stimulus-response) systems
- No choices, no alternatives
- Control cycle: Sense → Hard Wiring → Act
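A purely reactive controller keeps no model and makes no comparisons: each cycle maps the current stimulus straight to a response. A toy sketch, with the sensor and motor functions as hypothetical placeholders:

```python
def reactive_loop():
    while True:
        distance = read_front_sonar()       # stimulus: a raw sensor reading
        # Hard-wired stimulus-response rules: no memory, no alternatives weighed.
        if distance < 0.3:
            set_wheel_speeds(-0.2, 0.2)     # obstacle close: spin in place
        else:
            set_wheel_speeds(0.5, 0.5)      # otherwise: drive forward
```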

Universal Planning
- Have a plan ready for any possible contingency (Scouts: be prepared!)
- From any initial state, know how to get to the goal state
- Input: operators, goal state (no need to give an initial state)
- Output: a decision tree specifying what operator to take, depending on the environment state
- Not a single ordered list of operators
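One way to make this concrete is to search backward from the goal over the whole state space, recording for every state an operator that lies on a shortest path to the goal. The sketch below builds the universal plan as a plain lookup table rather than Schoppers' decision tree, and it takes the (in general exponentially large) set of all states as an explicit argument; both choices are my own simplifications.

```python
from collections import deque

def universal_plan(all_states, operators, goal):
    """Map every state (a frozenset of predicates) that can reach the goal
    to the first operator of a shortest route to the goal."""
    # Forward transition graph: state -> list of (operator, successor state).
    successors = {
        s: [(op, (s - op.delete_list) | op.add_list)
            for op in operators if op.preconditions <= s]
        for s in all_states
    }
    policy = {}                                      # the universal plan
    frontier = deque(s for s in all_states if goal <= s)
    solved = set(frontier)                           # states with a known route to the goal
    while frontier:
        target = frontier.popleft()
        # Any unsolved state with an edge into `target` reaches the goal in one more step.
        for s, edges in successors.items():
            if s in solved:
                continue
            for op, nxt in edges:
                if nxt == target:
                    policy[s] = op                   # first operator of a shortest route
                    solved.add(s)
                    frontier.append(s)
                    break
    return policy
```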

Universal planning algorithm
[Figure: blocks-world state graph with panels for the Initial State, After Pick-B(table), After Put-B(A), After Pick-C(table), and the Goal State.]

Robot Control Algorithm Using Universal Planning
- Robot is given a task (goal, operators)
- Uses the universal planner to create a universal plan
- Robot senses the environment
- Goal state reached?
  - No: execute the operator given by the decision tree
  - Yes: keep going (persistency)
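The executor that goes with a universal plan is deliberately simple: sense, look up, act. A sketch of that loop, with sense_state and execute as hypothetical robot primitives and `policy` being the state-to-operator table from the previous sketch:

```python
def universal_plan_control(policy, goal):
    while True:
        state = sense_state()          # sense the environment on every cycle
        if goal <= state:
            continue                   # goal reached: keep watching (persistency)
        op = policy[state]             # consult the universal plan (table / decision tree)
        execute(op)                    # apply the chosen operator
```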

Advantages of Universal Planning
- Guaranteed to use the optimal (shortest) plan to the goal: a very good thing, an optimal solution to the action-selection problem
- Robust to failures
- Robust in dynamic and non-deterministic domains

Problems with Universal Planning
- Assumes accessibility
- Assumes perfect sensors
- Assumes discrete actions (operators)

Universal plan as a mapping from sensors to actions
- A universal plan can be viewed as a function from sensor readings to actions: u: S → A
- Essentially a table: for each state, give an action
- Schoppers uses a decision-tree representation

Problems: Planning Time
- What is the planning time? It grows with the number of states, since we have to enumerate an operator for every state
- What is the number of states in an environment? Worst case: all possible combinations of sensor readings (state predicates)
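To make the growth concrete (a back-of-the-envelope illustration, not a figure from the slides): with n independent boolean predicates the worst case is every combination, |S| = 2^n. Twenty predicates already give about 10^6 states, and forty give about 10^12. Since a universal plan must record an action for every state, both planning time and plan size inherit this exponential blow-up.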

Problems: Universal Plan Size
- Plan size grows with the number of possible states: the "curse of dimensionality"
[Figure: decision-tree fragments branching on predicates X1 and X2, each leaf selecting Pick or Put.]

Problems: Stupid executioner
- Schoppers: a baby goes around knocking blocks around?
- Ginsberg: what if the baby repeatedly knocks down the same block?
- Universal plans may get into cycles, because no deliberation is done
- The universal planner relies on a simple executioner: sense, consult table, act
- Same as a regular planner, except for the sensing

Questions?

Brooks' Subsumption Architecture
Multiple levels of control (behaviors):
- Avoid Object
- Wander
- Explore
- Map
- Monitor Change
- Identify Objects
- Plan Changes

Why does this work?
- It breaks the ideal universal plan into behaviors, avoiding the curse of dimensionality
- Behaviors (levels) interact to generate the overall behavior
- Note that the programmer is responsible for the task-oriented design
- Goes both below and above universal plans: hand-programmed (an approximate plan), not automatically generated

Subsuming Layers
- How to make sure the overall output is coherent? E.g., avoid object is in conflict with explore
- Subsumption hierarchy: higher levels modify lower levels
- Layers (bottom to top): Avoid Object, Wander, Explore, Map
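A toy sketch of the suppression mechanism: a higher layer, when it has something to say, overrides the output of the layer beneath it. The two layers and their trigger conditions below are my own illustration, not Brooks' actual module wiring:

```python
def avoid_object(sonar):
    """Lower layer: always produces some motor command."""
    if sonar < 0.3:
        return ("turn", 1.0)                  # veer away from a nearby obstacle
    return ("forward", 0.5)                   # default: drive ahead

def explore(frontier_heading, sonar):
    """Higher layer: steers toward unexplored space when it is safe to do so."""
    if frontier_heading is not None and sonar >= 0.3:
        return ("heading", frontier_heading)  # wants control of the robot
    return None                               # nothing to contribute: do not suppress

def subsumption_step(sonar, frontier_heading):
    # The higher layer may suppress (replace) the lower layer's output;
    # otherwise the lower layer's command passes through unchanged.
    command = explore(frontier_heading, sonar)
    if command is None:
        command = avoid_object(sonar)
    return command
```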

Coherence using subsumption
- Key principle: higher layers control the inputs/outputs of lower layers
- In practice, can be difficult to design: a lot depends on the programmer
- A single module per layer can be restrictive
- Increasingly challenging to control at higher levels

Irony
- Ginsberg's article pretty much killed universal planning (though occasional papers are still published)
- Reactive control is very popular in practice
- But because of the theoretical problems, no more automated planners!
- So we get lots of reactive plans, but no planning

Irony again
- Ginsberg was right: approximating a universal plan is possible, but it tends to be useful only in fairly low-level locomotion control
- Approximation is what Brooks had done, which is why he often gets the credit for the revolution

© Gal Kaminka 29 Starting next week Behavior-Based Control: Expanding on Subsumption