Planning & Acting, Chapter 12: Scheduling (12.1), Planning and Acting in Nondeterministic Domains (12.3), Conditional Planning (12.4), Replanning (12.5), and Continuous Planning (12.6)

Time in Planning: Scheduling (CS 471/598, H. Liu)
- Planning so far does not specify how long an action takes or when it occurs, only that it comes before or after another action.
- Job-shop scheduling: time is essential. An example is given in Figs 12.1 and 12.2.
- A partial-order plan; critical path; Slack = LS (latest start) - ES (earliest start).
- Schedule = plan + time (durations for actions).
- Scheduling with resource constraints: when certain parts are not available, waiting time should be minimized.
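The slack computation above (Slack = LS - ES) can be sketched in code. This is an illustrative toy, not the textbook's Fig 12.1 job-shop example: the four actions, their durations, and the ordering constraints are invented.

```python
# Critical-path scheduling for a partial-order plan (illustrative sketch).
# Actions have durations; edges are ordering constraints (A, B): A before B.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}          # hypothetical actions
orderings = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

preds = {a: [] for a in durations}
succs = {a: [] for a in durations}
for u, v in orderings:
    preds[v].append(u)
    succs[u].append(v)

# Topological order: list actions so that predecessors come first.
order = []
pending = {a: len(preds[a]) for a in durations}
ready = [a for a, n in pending.items() if n == 0]
while ready:
    a = ready.pop()
    order.append(a)
    for b in succs[a]:
        pending[b] -= 1
        if pending[b] == 0:
            ready.append(b)

# Forward pass: earliest start ES; backward pass: latest start LS.
ES = {}
for a in order:
    ES[a] = max((ES[p] + durations[p] for p in preds[a]), default=0)
makespan = max(ES[a] + durations[a] for a in durations)
LS = {}
for a in reversed(order):
    LS[a] = min((LS[s] - durations[a] for s in succs[a]),
                default=makespan - durations[a])

slack = {a: LS[a] - ES[a] for a in durations}
critical = [a for a in order if slack[a] == 0]  # zero slack = critical path
```

Actions with zero slack cannot be delayed without delaying the whole schedule; together they form the critical path.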

Some Assumptions in Planning
- The world is accessible, static, and deterministic.
- Action descriptions are correct and complete, with exactly the stated consequences.
- The real world is not that perfect. So how can we handle a partially accessible, dynamic, nondeterministic world with incomplete information? What do we usually do?

Anticipating Possible Contingencies
- Bounded indeterminacy: the unexpected effects can be enumerated, and conditional POP can handle them.
- Unbounded indeterminacy: in complicated cases no complete enumeration is possible; plan for some contingencies and replan for the rest.

Handling Contingencies
- Conditional planning: sensing actions.
- Execution monitoring: watching what happens while the plan executes, to tell when things go wrong.
- Replanning: finding a way to achieve the goals from the new situation (when something went wrong with the old plan).
- Continuous planning: persists over the agent's lifetime (e.g., Mars rovers).

Painting a Chair and a Table
- Initial state: a chair, a table, and cans of paint of unknown color.
- Goal: the chair and the table have the same color.
- Different kinds of planning:
  - Classical planning: assumes full observability.
  - Sensorless planning: coercion.
  - Conditional planning with sensing: (1) they are already the same color, (2) paint one to match the available color, (3) paint both.
  - Replanning: paint, check the effect, replan for any missed spot.
  - Continuous planning: paint, stop for unexpected events, continue.
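The sensorless (coercion) case can be made concrete: with no sensing at all, painting both objects from the same can forces every state the agent might be in into a goal state. The encoding below is an invented sketch, not the book's formulation.

```python
# Sensorless (conformant) planning sketch for the painting problem.
# A belief state is the set of world states the agent might be in.
# A state is (chair_color, table_color); the can's color is also unknown.
colors = ["red", "green", "blue"]
belief = {(ch, t) for ch in colors for t in colors}   # anything is possible

def paint_both(state, can_color):
    """Painting chair and table from one can makes both that color,
    regardless of what the state was before (coercion)."""
    return (can_color, can_color)

# Whatever the can's (unknown) color, painting both objects from it
# maps every state in the belief into a goal state (chair == table).
new_belief = {paint_both(s, c) for s in belief for c in colors}
assert all(chair == table for chair, table in new_belief)
```

The resulting belief state contains only goal states, even though the agent never learns which color it actually painted.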

Conditional Planning (1)
- CP in fully observable environments (FOE): the vacuum world with actions Left, Right, and Suck.
- Disjunctive effects: Action(Left, Precond: AtR, Effect: AtL v AtR).
- Conditional effects: Action(Suck, Precond: , Effect: (when AtL: CleanL) ^ (when AtR: CleanR)); Action(Left, Precond: AtR, Effect: AtL v (AtL ^ when CleanL: !CleanL)).
- Conditional steps for creating conditional plans: if AtL ^ CleanL then Right else Suck.
- The search tree for the vacuum world (Fig 12.9).
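A conditional plan for a nondeterministic domain can be found by AND-OR search: OR nodes choose an action, AND nodes must cover every possible outcome of that action. The sketch below uses an invented nondeterministic model (Suck may also clean the other square) rather than the exact book example.

```python
# AND-OR search sketch for conditional planning in a nondeterministic
# vacuum world. A state is (location, dirtL, dirtR).
def results(state, action):
    loc, dl, dr = state
    if action == "Left":
        return {("L", dl, dr)}
    if action == "Right":
        return {("R", dl, dr)}
    # Suck: cleans the current square; may also clean the other (invented).
    if loc == "L":
        return {("L", False, dr), ("L", False, False)}
    return {("R", dl, False), ("R", False, False)}

def goal(state):
    return not state[1] and not state[2]          # both squares clean

def or_search(state, path):
    if goal(state):
        return []                                 # empty plan: done
    if state in path:
        return None                               # cycle: fail this branch
    for action in ("Suck", "Left", "Right"):      # OR node: pick an action
        subplans = {}
        for s in results(state, action):          # AND node: every outcome
            p = or_search(s, path + [state])
            if p is None:
                break
            subplans[s] = p
        else:
            return [action, subplans]             # action + a case per outcome
    return None

plan = or_search(("L", True, True), [])           # agent at L, both dirty
```

The returned plan is a nested [action, {outcome: subplan}] structure, i.e., an if-then-else tree keyed on the observed outcome.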

Conditional Planning (2)
- CP in partially observable environments (POE): the initial state is a state set, i.e., a belief state (Fig 3.21).
- Determining "both squares are clean" with local dirt sensing: the vacuum agent is AtR and knows about R, but how about L? Dirt can sometimes be left behind when the agent leaves a clean square.
- A graph representation (Fig 12.12).
- How different are FOE and POE? One is just a special case of the other.
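Belief-state maintenance in the partially observable case can be sketched as predict-then-observe: apply the action to every state in the belief, then keep only the states consistent with the local dirt percept. The tiny two-square model below is an invented illustration (deterministic movement).

```python
# Belief-state update sketch for a partially observable vacuum world.
# A world state is (location, dirtL, dirtR); the agent senses only
# its own location and the dirt in its current square.
def predict(belief, action):
    def move(state):
        loc, dl, dr = state
        return ("R" if action == "Right" else "L", dl, dr)
    return {move(s) for s in belief}

def observe(belief, percept):
    # percept = (location, dirt_here): keep states consistent with it
    def local(state):
        loc, dl, dr = state
        return (loc, dl if loc == "L" else dr)
    return {s for s in belief if local(s) == percept}

# Agent at L, L known clean, R unknown: two possible worlds.
belief = {("L", False, True), ("L", False, False)}
# Move Right, then sense dirt there: the belief collapses to one state.
belief = observe(predict(belief, "Right"), ("R", True))
```

Sensing shrinks the belief state; without the percept filter, uncertainty about R would persist after the move.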

Sensing
- Automatic sensing: at every step, the agent gets all the available percepts.
- Active sensing: percepts are obtained only by executing specific sensory actions, e.g., a CheckDirt action.

Replanning via Monitoring
- In reality, something can go wrong:
  - annotate each step of a plan with the preconditions required for successful completion of the remaining steps;
  - detect a potential failure by comparing the current preconditions with the state description obtained from percepts.
- Sensing and monitoring: execution monitoring (seeing what happens when executing a plan) comes in two forms, action monitoring and plan monitoring.

Replanning: Action Monitoring
- Before carrying out the next action of a plan, check the preconditions of that action as it is executed, rather than the preconditions of the entire remaining plan.
- A schematic illustration (Fig 12.14); this works well in realistic systems (action failures).
- Return to the chair-table painting problem (page 443). Plan: [Start; Open(BC); Paint(Table, Blue); Finish]. What if it missed a spot of green on the table?
- A loop is created by plan-execute-replan.
- Failure is detected only after an action is performed.
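The plan-execute-replan loop with action monitoring can be sketched as follows. The toy paint domain, the greedy planner, and the "can tips over" disturbance are all invented for illustration; the point is only the monitoring check before each action.

```python
# Action monitoring sketch: before each action, check its preconditions
# against the current state; if they fail, replan from the current state.
PRE = {"Open": set(), "Paint": {"CanOpen"}}       # toy STRIPS-ish domain
EFF = {"Open": {"CanOpen"}, "Paint": {"Painted"}}

def make_plan(state, goal):
    plan, s = [], set(state)                      # greedy toy planner
    while not goal <= s:
        for a in ("Paint", "Open"):
            if PRE[a] <= s and not EFF[a] <= s:
                plan.append(a)
                s |= EFF[a]
                break
    return plan

disturbed = [False]
def execute(state, action):
    state = set(state) | EFF[action]
    if action == "Open" and not disturbed[0]:
        disturbed[0] = True
        state -= {"CanOpen"}                      # exogenous event: can tips over
    return state

state, goal = set(), {"Painted"}
plan, replans = make_plan(state, goal), 0
while not goal <= state:
    if not plan or not PRE[plan[0]] <= state:     # monitoring detects failure
        plan = make_plan(state, goal)             # replan from current state
        replans += 1
        continue
    state = execute(state, plan.pop(0))
```

The disturbance clobbers Paint's precondition; the monitor catches this before executing Paint and replans once, after which the goal is reached.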

Plan Monitoring
- Cuts off execution of a doomed plan instead of continuing until the failure actually occurs.
- Detects failure by checking the preconditions for success of the entire remaining plan.
- Also useful when a goal is serendipitously achieved.
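Plan monitoring can be sketched by regressing the goal back through the remaining, unexecuted actions to obtain the preconditions the current state must satisfy; if they do not hold, the remaining plan is doomed and the agent can replan immediately rather than wait for an action to fail. The toy domain names below are invented.

```python
# Plan monitoring sketch: regress the goal through the unexecuted actions
# to get the preconditions for the whole remaining plan, then test them
# against the current state (toy paint domain).
PRE = {"Open": set(), "Paint": {"CanOpen"}}
ADD = {"Open": {"CanOpen"}, "Paint": {"Painted"}}

def remaining_preconditions(plan, goal):
    needed = set(goal)
    for action in reversed(plan):
        # An action supplies its add effects and demands its preconditions.
        needed = (needed - ADD[action]) | PRE[action]
    return needed

# Mid-execution: Open succeeded, but the can was then knocked over,
# so CanOpen no longer holds. The remaining plan [Paint] is doomed:
state = set()
doomed = not remaining_preconditions(["Paint"], {"Painted"}) <= state
```

Action monitoring alone would only notice this when Paint is about to run; regression over the whole tail detects doom (or serendipitous success, when the regressed set is already satisfied) as early as possible.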

Difference between CP and RP
- An unpainted area will make the agent repaint until the chair is fully painted. Is this different from the repainting loop in conditional planning?
- The difference lies in when the computation is done and what information is available to the computation process: CP anticipates uneven paint in advance; RP detects it by monitoring during execution.

Combining Planning and Execution
- A continuous planning agent:
  - executes the steps that are ready to be executed;
  - refines the plan to resolve standard deficiencies;
  - refines the plan with additional information;
  - fixes the plan according to unexpected changes: recovering from execution errors and removing steps that have been made redundant.
- Goal -> partial plan -> some actions -> monitoring the world -> new goal.

Continuous Planning: Revisiting the Blocks World
- Goal: On(C,D) ^ On(D,B)
- Action(Move(x,y), Precond: Clear(x) ^ Clear(y) ^ On(x,z), Effect: On(x,y) ^ Clear(z) ^ !Clear(y) ^ !On(x,z))
- Fig 12.21
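The Move(x,y) schema above can be applied STRIPS-style to ground states. The sketch below starts from an invented all-on-table configuration (not necessarily the one in Fig 12.21) and reaches the goal On(C,D) ^ On(D,B) with two moves.

```python
# Applying the Move(x, y) action schema to ground blocks-world states.
# A state is a set of fluent strings. Table only ever appears as a source
# here, so the spurious Clear(Table) fluent the schema adds is harmless.
def move(state, x, y):
    on = next(f for f in state if f.startswith(f"On({x},"))
    z = on[len(f"On({x},"):-1]                    # what x currently sits on
    pre = {f"Clear({x})", f"Clear({y})"}
    if not pre <= state:
        return None                               # preconditions unmet
    return (set(state) - {on, f"Clear({y})"}) | {f"On({x},{y})", f"Clear({z})"}

state = {"On(A,Table)", "On(B,Table)", "On(C,Table)", "On(D,Table)",
         "Clear(A)", "Clear(B)", "Clear(C)", "Clear(D)"}
state = move(state, "D", "B")                     # achieves On(D,B)
state = move(state, "C", "D")                     # achieves On(C,D)
goal_met = {"On(C,D)", "On(D,B)"} <= state
```

The ordering matters: Move(D,B) must precede Move(C,D), since stacking C on D first would destroy Clear(D).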

Plan and Execution
- Steps in execution:
  - Ordering: Move(D,B), then Move(C,D).
  - Another agent did Move(D,B): change the plan and remove the redundant step.
  - A mistake is made, so On(C,A): still one open condition.
  - Plan one more time: Move(C,D).
- Final state: start -> finish.

CP and RP
- Conditional planning: the number of possible conditions grows relative to the number of steps in the plan, yet only one set of conditions will actually occur.
- Replanning: fixes problems as they arise during execution, but plans can be fragile because of replanning.
- Intermediate approaches between CP and RP: handle the most likely contingencies with CP and the rest with RP.

Coercion and Abstraction
- Coercion: forcing a state with unknowns into a known state to reduce uncertainty, e.g., painting the table and chair together.
- Abstraction: ignoring details until necessary; another tool for least commitment. A travel example: Fly(Phoenix, NY).
- Aggregation: a form of abstraction, or summarization, for dealing with a large number of objects.

Summary
- The unexpected or unknown occurs; to overcome it, we need CP or RP.
- When there is incorrectness or incompleteness, we need to monitor the results of planning: execution (action or plan) monitoring.
- CP and RP are different and have different strengths.
- Uncertainty can be reduced via coercion, abstraction, and aggregation.