Planning & Acting
Chapter 12: Scheduling (12.1), Planning and Acting in Nondeterministic Domains (12.3), Conditional Planning (12.4), Replanning (12.5), and Continuous Planning (12.6).

Time in planning: Scheduling
- Planning so far does not specify how long an action takes or when it occurs, only that it comes before or after another action.
- When planning is used in the real world, such as scheduling Hubble Space Telescope observations, time is also a resource/constraint.
- Job-shop scheduling: time is essential.
- An example, following Figures 12.1 and 12.2:
  - A partial-order plan (with durations)
  - Critical path (the "weakest link")
  - Slack = LS (latest start) - ES (earliest start)
- Schedule = plan + time (durations for actions)
- Scheduling with resource constraints:
  - When certain parts are not available, waiting time should be minimized
  - Which job to complete first can differ, with possible changes as in Fig 12.4
(An ES/LS/slack computation is sketched below.)
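
A minimal sketch, in Python, of the ES/LS/slack computation named above. The helper, action names, and durations are illustrative assumptions, not the example of Figures 12.1 and 12.2; an action with zero slack lies on the critical path.

from collections import defaultdict

def critical_path(durations, orderings):
    """durations: {action: duration}; orderings: iterable of (before, after) pairs."""
    succ, pred = defaultdict(list), defaultdict(list)
    for a, b in orderings:
        succ[a].append(b)
        pred[b].append(a)

    # Topological order of the partial-order plan (Kahn's algorithm).
    indeg = {a: len(pred[a]) for a in durations}
    ready = [a for a in durations if indeg[a] == 0]
    topo = []
    while ready:
        a = ready.pop()
        topo.append(a)
        for b in succ[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)

    # Earliest start (ES): an action starts after all its predecessors finish.
    es = {a: 0 for a in durations}
    for a in topo:
        for b in succ[a]:
            es[b] = max(es[b], es[a] + durations[a])
    makespan = max(es[a] + durations[a] for a in durations)

    # Latest start (LS): work backwards from the makespan.
    ls = {a: makespan - durations[a] for a in durations}
    for a in reversed(topo):
        for b in succ[a]:
            ls[a] = min(ls[a], ls[b] - durations[a])

    slack = {a: ls[a] - es[a] for a in durations}   # Slack = LS - ES
    return es, ls, slack, makespan

# Hypothetical partial-order plan with durations.
durations = {"AddEngine": 30, "AddWheels": 15, "Inspect": 10}
orderings = [("AddEngine", "Inspect"), ("AddWheels", "Inspect")]
print(critical_path(durations, orderings))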

Some assumptions in classical planning
- The world is accessible, static, and deterministic.
- Action descriptions are correct and complete, with exactly the stated consequences.
- However, the real world is not that perfect. How can we handle a partially accessible, dynamic, nondeterministic world with incomplete information? What do we usually do?

Anticipating the possible contingencies
- To deal with incorrect or incomplete information:
- Bounded indeterminacy: the unexpected effects can be enumerated; conditional POP (partial-order planning) can handle it
- Unbounded indeterminacy: in complicated cases, no complete enumeration is possible; plan for some contingencies and replan for the rest

Non-classical planning
- Contingency: conditional planning with sensing actions
- Execution monitoring: watching what happens while the plan is executed; telling when things go wrong
- Replanning: finding a way to achieve the goals from the new situation (something went wrong with the old plan)
- Continuous planning: persists over a lifetime (e.g., Mars rovers)

Example: painting the chair and the table
- Init: a chair, a table, and cans of paint of unknown color
- Goal: the chair and the table have the same color
- The same problem under different types of planning:
  - Classical planning: fully observable?
  - Sensorless planning: coercion
  - Conditional planning with sensing: (1) they are already the same color, (2) one is painted with the available color, (3) paint both
  - Replanning: paint, check the effect, replan for any missed spot
  - Continuous planning: paint, can stop for unexpected events, then continue

Conditional planning (1)
- CP in fully observable environments (FOE)
- Vacuum world with actions Left, Right, and Suck
- Disjunctive effects: if Left sometimes fails, then
  Action(Left, Precond: AtR, Effect: AtL v AtR)
- Conditional effects:
  Action(Suck, Precond: , Effect: (when AtL: CleanL) ^ (when AtR: CleanR))
  Action(Left, Precond: AtR, Effect: AtL v (AtL ^ when CleanL: !CleanL))
- Conditional steps for creating conditional plans: if test then planA else planB
  e.g., if AtL ^ CleanL then Right else Suck
- The search tree for the vacuum world (Fig 12.9): state nodes (squares) and chance nodes (circles); a sketch of the underlying AND-OR search follows below.
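
The conditional plan behind a tree like Fig 12.9 can be built by an AND-OR search: OR over the agent's choice of action, AND over the nondeterministic outcomes of that action. Below is a minimal Python sketch under an assumed "erratic Suck" model (Suck on a dirty square may also clean the other square; Suck on a clean square may deposit dirt); it is not the exact model of the figure.

def results(state, action):
    """Return the set of possible successor states (nondeterministic outcomes)."""
    loc, dirt = state                 # loc in {"L", "R"}; dirt: frozenset of dirty squares
    if action == "Left":
        return {("L", dirt)}
    if action == "Right":
        return {("R", dirt)}
    if loc in dirt:                   # Suck on a dirty square: may also clean the other one
        return {(loc, dirt - {loc}), (loc, frozenset())}
    return {(loc, dirt), (loc, dirt | {loc})}   # Suck on a clean square: may deposit dirt

def or_search(state, path):
    """Return a conditional plan as nested [action, {outcome: subplan}] lists, or None."""
    if not state[1]:                  # goal test: no dirty squares
        return []                     # empty plan: already at the goal
    if state in path:
        return None                   # give up on cycles
    for action in ("Suck", "Left", "Right"):      # OR: pick an action
        subplans = {}
        for outcome in results(state, action):    # AND: cover every possible outcome
            plan = or_search(outcome, path + [state])
            if plan is None:
                break
            subplans[outcome] = plan
        else:
            return [action, subplans]             # an "if outcome then subplan" step
    return None

# Usage: start AtR with both squares dirty.
print(or_search(("R", frozenset({"L", "R"})), []))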

Conditional planning (2)
- CP in partially observable environments (POE)
- The initial state is a set of states: a belief state (Fig 3.21, p. 85)
- Determining "both squares are clean" with local dirt sensing:
  - The vacuum agent is AtR and knows about R; how about L?
  - Dirt can sometimes be left behind when the agent leaves a clean square
- A graph representation over belief states (Figure 12.12, p. 438)
- How does planning differ in FOE and in POE? Which one is a special case of the other?
(A belief-state update is sketched below.)
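
A minimal belief-state update sketch in Python, under the assumptions on this slide: predict the belief state through a nondeterministic move (dirt may be left behind on the square being vacated if it was clean), then filter it with a local dirt percept. The state encoding is an illustrative assumption.

def move_results(state, action):
    """Possible successors; leaving a clean square may deposit dirt on it."""
    loc, dirt = state                       # loc in {"L", "R"}; dirt: frozenset of dirty squares
    if action == "Suck":
        return {(loc, dirt - {loc})}
    dest = "L" if action == "Left" else "R"
    outcomes = {(dest, dirt)}
    if loc not in dirt:
        outcomes.add((dest, dirt | {loc}))  # dirt left behind on the vacated clean square
    return outcomes

def predict(belief, action):
    return {s2 for s in belief for s2 in move_results(s, action)}

def local_percept(state):
    """The agent senses only its location and whether that square is dirty."""
    loc, dirt = state
    return (loc, loc in dirt)

def update(belief, percept):
    return {s for s in belief if local_percept(s) == percept}

# Usage: the agent is AtR and knows R is clean, but knows nothing about L.
belief = {("R", frozenset()), ("R", frozenset({"L"}))}
belief = predict(belief, "Left")            # move Left
belief = update(belief, ("L", False))       # percept: at L, and L is clean
print(belief)                               # L known clean; R is now uncertain again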

Sensing
- Automatic sensing: at every step, the agent gets all the available percepts
- Active sensing: percepts are obtained only by executing specific sensory actions
  - Precond and when conditions are plain propositions, not knowledge propositions
  - K(P) is defined as "knows that P is true", and !K(P) as "does not know P"; what does (12.2, p. 440) mean?
  - CheckDirt (12.3, p. 440) and CheckLocation actions (a toy sketch follows below)
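
A toy sketch of an active-sensing action in the K(P) style above: executing CheckDirt changes nothing physical; it only adds a knowledge literal about the agent's current square. The dict encoding is a hypothetical simplification, not the exact schema (12.3).

def check_dirt(state, knowledge):
    """Active sensing: yields K(CleanX) or K(!CleanX) for the square the agent is in."""
    square = "L" if state["AtL"] else "R"
    prop = f"Clean{square}"
    knowledge.add(f"K({prop})" if state[prop] else f"K(!{prop})")
    return knowledge

# Usage: the agent is AtL; L is actually dirty, but the agent does not know that yet.
state = {"AtL": True, "CleanL": False, "CleanR": True}
print(check_dirt(state, {"K(AtL)"}))        # {'K(AtL)', 'K(!CleanL)'}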

Replanning via monitoring
- In reality, something can go wrong. How can a replanning agent know that?
  1. Annotate the plan at each step with the preconditions required for successful completion of the remaining steps
  2. Detect a potential failure by comparing those preconditions with the state description obtained from percepts
- Sensing and monitoring
- Execution monitoring: seeing what happens while executing a plan
  - Action monitoring
  - Plan monitoring
(An annotate-and-check sketch is given below.)
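
A minimal Python sketch of steps 1 and 2 above: annotate each plan step, by regression from the goal, with the conditions required for the remaining steps to succeed, then compare them with the state reported by percepts. The STRIPS-style painting actions are illustrative.

def annotate(plan, goal):
    """required[i] = conditions that must hold before executing plan[i]."""
    required = [set(goal)]
    for action in reversed(plan):
        required.insert(0, (required[0] - action["add"]) | action["pre"])
    return required

def detect_failure(required, step, observed_state):
    """Non-empty result => the remaining plan is doomed from the observed state."""
    return required[step] - observed_state

# Hypothetical annotated painting plan.
open_can = {"name": "Open(BC)", "pre": {"Have(BC)"}, "add": {"Open(BC)"}}
paint    = {"name": "Paint(Table,Blue)", "pre": {"Open(BC)"}, "add": {"Color(Table,Blue)"}}
required = annotate([open_can, paint], goal={"Color(Table,Blue)", "Color(Chair,Blue)"})

# After Open(BC), percepts reveal the chair is green, so the remaining plan cannot succeed:
print(detect_failure(required, 1, {"Open(BC)", "Color(Chair,Green)"}))   # {'Color(Chair,Blue)'}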

Replanning: action monitoring
- Before carrying out the next action of a plan, check the preconditions of that action as it is executed, rather than checking the preconditions of the entire remaining plan
- A schematic illustration (Fig 12.14); works well for realistic systems (action failures)
- Return to the chair-table painting problem (p. 443)
  - Plan: [Start; Open(BC); Paint(Table,Blue); Finish]
  - What if it missed a spot of green on the table?
- The loop is created by plan-execute-replan; there is no explicit loop in the plan itself
- Failure is only detected after an action is performed
(An execute-monitor-replan loop is sketched below.)
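
A minimal plan-execute-replan loop with action monitoring, as described above: only the next action's preconditions are checked before executing it, and the agent replans from the current state when they do not hold. The planner, observe, and execute hooks are assumed to be supplied (any classical planner will do); actions are STRIPS-style dicts as in the previous sketch.

def action_monitoring_agent(goal, planner, observe, execute):
    """goal: set of literals; planner(state, goal) -> list of actions; observe() -> current state."""
    steps = []
    while True:
        state = observe()                      # see what the world actually looks like
        if goal <= state:
            return "goal achieved"
        if not steps:
            steps = planner(state, goal)       # (re)plan from the current situation
            if not steps:
                return "no plan found"
        action = steps.pop(0)
        if not (action["pre"] <= state):       # action monitoring: check only this action
            steps = planner(state, goal)       # preconditions violated, so replan
            continue
        execute(action)                        # a failure only shows up after acting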

Plan monitoring
- Detect failure by checking the preconditions for success of the entire remaining plan
- Useful when a goal is serendipitously achieved
  - While you are painting the chair, someone comes and paints the table the same color
- Cuts off execution of a doomed plan rather than continuing until the failure actually occurs
  - While you are painting the chair, someone comes and paints the table a different color
- If one insists on checking every precondition, the agent might never get around to actually doing anything. Why?

Difference between CP and RP
- An unpainted area makes the agent repaint until the chair is fully painted. Is this different from the loop of repainting in conditional planning?
- The difference lies in the time at which the computation is done and in the information available to the computation process:
  - CP anticipates uneven paint at planning time
  - RP monitors for it during execution

Combining planning and execution
- A continuous planning agent:
  - executes steps that are ready to be executed
  - refines the plan to resolve standard deficiencies
  - refines the plan with additional information
  - fixes the plan according to unexpected changes
    - recovers from execution errors
    - removes steps that have been made redundant
- Goal -> partial plan -> some actions -> monitoring the world -> new goal
(A skeleton of this cycle is sketched below.)
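
A skeleton of such an agent cycle. The refinement hooks (open_conditions, resolve, ready_steps, redundant_steps) are assumed, hypothetical functions; the point is only the interleaving of monitoring, refinement, and execution, not a specific algorithm.

def continuous_planning_agent(goal, plan, observe, execute,
                              open_conditions, resolve, ready_steps, redundant_steps):
    while True:
        state = observe()                           # monitor the world on every cycle
        if goal <= state and not open_conditions(plan, state):
            return plan                             # nothing left to do (until a new goal arrives)
        for step in redundant_steps(plan, state):
            plan.remove(step)                       # drop steps another agent made unnecessary
        flaws = open_conditions(plan, state)
        if flaws:
            plan = resolve(plan, flaws, state)      # refine: fix deficiencies, recover from errors
            continue
        for step in ready_steps(plan, state):
            execute(step)                           # execute steps that are ready
            plan.remove(step)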

Continuous planning: revisiting the blocks world
- Goal: On(C,D) ^ On(D,B)
- Action(Move(x,y), Precond: Clear(x) ^ Clear(y) ^ On(x,z), Effect: On(x,y) ^ Clear(z) ^ !Clear(y) ^ !On(x,z))
- Fig: Start is used as the label for the current state.
(A STRIPS-style sketch of Move and its application follows below.)
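
A minimal STRIPS-style grounding of the Move(x,y) schema above, applied to a set of literals. The initial configuration (C on A, with A, B, and D on the table) is an illustrative assumption.

def move(x, y, z):
    """Move block x from z onto y (z may be the Table)."""
    return {
        "name": f"Move({x},{y})",
        "pre": {f"Clear({x})", f"Clear({y})", f"On({x},{z})"},
        "add": {f"On({x},{y})", f"Clear({z})"},
        "del": {f"Clear({y})", f"On({x},{z})"},
    }

def apply_action(state, action):
    if not (action["pre"] <= state):
        raise ValueError(f"preconditions of {action['name']} do not hold")
    return (state - action["del"]) | action["add"]

# Usage: achieve On(C,D) ^ On(D,B) from a state where C sits on A.
state = {"On(C,A)", "On(A,Table)", "On(B,Table)", "On(D,Table)",
         "Clear(C)", "Clear(B)", "Clear(D)"}
state = apply_action(state, move("D", "B", "Table"))   # first achieve On(D,B)
state = apply_action(state, move("C", "D", "A"))       # then On(C,D)
print("On(C,D)" in state and "On(D,B)" in state)       # True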

Plan and execution
- Steps in the execution:
  - Ordering: Move(D,B), then Move(C,D)
  - Another agent did Move(D,B): change the plan and remove the redundant step
  - The agent makes a mistake, so On(C,A) still holds: one open condition remains
  - Plan one more time: Move(C,D)
  - Final state: Start -> Finish

Conditional planning and replanning
- Conditional planning
  - The number of possible conditions vs. the number of steps in the plan
  - Only one set of conditions will actually occur
- Replanning
  - Fixes problems as they arise during execution
  - Plans can be fragile if everything is left to replanning
- Intermediate planning between CP and RP
  - The most likely contingencies are handled by CP
  - The rest are handled by RP

Some general methods to deal with uncertainty: coercion and abstraction
- Coercion: forcing a state with unknowns into a known state to reduce uncertainty
  - Paint the table and the chair together
  - How about the job-interview problem?
- Abstraction: ignore details until they are necessary; another tool for least commitment
  - A travel case: Fly(Phoenix, NY); after arrival, look for accommodation
- Aggregation: a form of abstraction, or summarization
  - For dealing with a large number of objects

Summary
- The unexpected or unknown occurs; to overcome it, we need CP or RP
- Because there can be incorrectness or incompleteness, we need to monitor the results of planning: execution, action, or plan monitoring
- CP and RP are different and have different strengths
- Uncertainty can be reduced via coercion, abstraction, and aggregation