Dana Nau: Lecture slides for Automated Planning. Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License:

Presentation transcript:

1 Chapter 4: State-Space Planning
Dana S. Nau, CMSC 722, AI Planning, University of Maryland, Fall 2004
Lecture slides for Automated Planning: Theory and Practice

2 Motivation
- Nearly all planning procedures are search procedures
- Different planning procedures have different search spaces. Two examples:
  - State-space planning: each node represents a state of the world; a plan is a path through the space
  - Plan-space planning: each node is a set of partially instantiated operators plus some constraints; impose more and more constraints until we get a plan

3 Outline
- State-space planning
  - Forward search
  - Backward search
  - Lifting
  - STRIPS
  - Block stacking

4 Forward Search
[Figure: a forward-search tree branching on actions such as take c3, move r1, take c2, …]

5 Properties
- Forward-search is sound: any plan returned by any of its nondeterministic traces is guaranteed to be a solution
- Forward-search is also complete: if a solution exists, then at least one of Forward-search's nondeterministic traces will return a solution

6 Deterministic Implementations
- Some deterministic implementations of forward search: breadth-first search, best-first search, depth-first search, greedy search
- Breadth-first and best-first search are sound and complete
  - But they usually aren't practical, because they require too much memory: the memory requirement is exponential in the length of the solution
- In practice, one is more likely to use depth-first or greedy search
  - Worst-case memory requirement is linear in the length of the solution
  - Sound but not complete
    - But classical planning has only finitely many states, so depth-first search can be made complete by loop checking
[Figure: a search path s0 → s1 → s2 → … → sg via actions a1, a2, …]
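The deterministic implementations above can be made concrete. Below is a minimal breadth-first forward search in Python, assuming a classical problem given as an initial state (a frozenset of ground atoms) and actions with precondition/add/delete sets; the representation and all names are illustrative, not the book's pseudocode. The visited set implements the loop checking that makes the search complete as well as sound.

```python
from collections import deque

def gamma(state, action):
    """State transition: gamma(s, a) = (s - del(a)) | add(a)."""
    return (state - action["del"]) | action["add"]

def forward_search(s0, goal, actions):
    """Breadth-first forward search with loop checking (sound, and
    complete on finite classical domains)."""
    frontier = deque([(s0, [])])        # (state, plan so far)
    visited = {s0}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:               # every goal atom holds in state
            return plan
        for a in actions:
            if a["pre"] <= state:       # a is applicable in state
                s2 = gamma(state, a)
                if s2 not in visited:   # loop checking
                    visited.add(s2)
                    frontier.append((s2, plan + [a["name"]]))
    return None                         # no solution exists
```

For example, with a single illustrative action move(r1,loc1,loc2) whose precondition is at(r1,loc1), the search returns the one-step plan ["move(r1,loc1,loc2)"].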

7 Branching Factor of Forward Search
- Forward search can have a very large branching factor (see the example in the figure)
- Why this is bad: deterministic implementations can waste time trying lots of irrelevant actions
- Need a good heuristic function and/or pruning procedure
  - See Section 4.5 (Domain-Specific State-Space Planning) and Part III (Heuristics and Control Strategies)
[Figure: an initial state and goal with actions a1, a2, …, a50 all applicable in the initial state]

8 Backward Search
- For forward search, we started at the initial state and computed state transitions: new state = γ(s, a)
- For backward search, we start at the goal and compute inverse state transitions: new set of subgoals = γ⁻¹(g, a)

9 Inverse State Transitions
- What do we mean by γ⁻¹(g, a)?
- First we need to define relevance. An action a is relevant for a goal g if:
  - a makes at least one of g's literals true: g ∩ effects(a) ≠ ∅
  - a does not make any of g's literals false: g⁺ ∩ effects⁻(a) = ∅ and g⁻ ∩ effects⁺(a) = ∅
- If a is relevant for g, then γ⁻¹(g, a) = (g − effects(a)) ∪ precond(a)
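The relevance test and γ⁻¹ translate directly into code. In the sketch below, a goal is a pair of sets of positive and negative literals and an action carries precondition, positive-effect, and negative-effect sets; the representation and key names (effp for effects⁺, effn for effects⁻) are assumptions made for illustration.

```python
def relevant(goal, a):
    """a is relevant for the goal: it makes at least one of the goal's
    literals true and makes none of them false."""
    makes_true = (goal["pos"] & a["effp"]) or (goal["neg"] & a["effn"])
    makes_false = (goal["pos"] & a["effn"]) or (goal["neg"] & a["effp"])
    return bool(makes_true) and not makes_false

def regress(goal, a):
    """gamma^-1(g, a) = (g - effects(a)) | precond(a), assuming the
    preconditions are positive literals."""
    return {"pos": (goal["pos"] - a["effp"]) | a["pre"],
            "neg": goal["neg"] - a["effn"]}
```

For instance, regressing the goal on(a,b) through the blocks-world action stack(a,b) yields the subgoals holding(a) and clear(b), i.e. exactly stack's preconditions.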

10 [Figure: backward search from the goal g0, producing subgoal sets g1, …, g5 via actions a1, …, a5, working back toward s0]

11 Efficiency of Backward Search
- Backward search's branching factor is small in our earlier example
- But there are cases where it can still be very large: many more operator instances than needed
[Figure: goal q(a); operator foo(x,y) with precond p(x,y) and effect q(x); every ground instance foo(a,a), foo(a,b), foo(a,c), … is relevant, producing subgoals p(a,a), p(a,b), p(a,c), …]

12 Lifting
- We can reduce the branching factor if we partially instantiate the operators; this is called lifting
[Figure: the same goal q(a) now produces the single partially instantiated operator foo(a,y) with subgoal p(a,y), in place of all the ground instances foo(a,a), foo(a,b), foo(a,c), …]
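Lifting hinges on unifying the goal atom with an operator's effect instead of enumerating ground instances. Here is a minimal sketch, assuming atoms are tuples like ("q", "a") and variables are strings marked with a leading "?"; this representation is an illustrative choice, not the book's.

```python
def unify(ground_atom, atom, subst=None):
    """Match a ground goal atom against an (effect) atom that may contain
    variables; return the extended substitution, or None on mismatch."""
    subst = dict(subst or {})
    if ground_atom[0] != atom[0] or len(ground_atom) != len(atom):
        return None
    for g_term, term in zip(ground_atom[1:], atom[1:]):
        term = subst.get(term, term)     # apply the substitution so far
        if term.startswith("?"):
            subst[term] = g_term         # bind the variable
        elif term != g_term:
            return None                  # constant clash
    return subst

def substitute(atom, subst):
    """Partially instantiate an atom under a substitution."""
    return (atom[0],) + tuple(subst.get(t, t) for t in atom[1:])
```

Unifying the goal q(a) with foo's effect q(x) yields the binding {x ↦ a}; applying it to the precondition p(x,y) gives the single lifted subgoal p(a,y) instead of p(a,a), p(a,b), p(a,c), …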

13 Lifted Backward Search
- More complicated than Backward-search: we have to keep track of which substitutions were performed
- But it has a much smaller branching factor

14 The Search Space is Still Too Large
- Lifted-backward-search generates a smaller search space than Backward-search, but it can still be quite large
  - If some subproblems are independent and something else causes a problem elsewhere, we'll try all possible orderings of the independent subproblems before realizing there is no solution
  - More about this in Chapter 5 (Plan-Space Planning)
[Figure: a blocks-world search tree in which the same configurations of blocks a, b, c are regenerated under many different orderings]

15 Other Ways to Reduce the Search
- Search-control strategies
  - I'll say a lot about this later (Part III of the book)
  - For now, just two examples: STRIPS and block stacking

16 STRIPS
- π ← the empty plan
- Do a modified backward search from g
  - Instead of γ⁻¹(s, a), each new set of subgoals is just precond(a)
  - Whenever you find an action that's executable in the current state, go forward on the current search path as far as possible, executing actions and appending them to π
  - Repeat until all goals are satisfied
[Figure: a search path from g through subgoal sets g1, …, g6; once a6's preconditions are satisfied in s0, the search goes forward: π = ⟨a6, a4⟩, s = γ(γ(s0, a6), a4)]
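A much-simplified sketch of this loop in Python (not the historical STRIPS implementation): pick an unsatisfied goal, pick an action that adds it, recursively achieve that action's preconditions, then execute it and append it to the plan. States and goals are frozensets of ground atoms; the data structures are illustrative. Note that achieved goals are not protected, which is exactly why STRIPS can return redundant plans on the next slide's problem.

```python
def strips(state, goals, actions, limit=50):
    """Simplified STRIPS-style backward goal chaining (illustrative)."""
    plan = []

    def achieve(subgoals):
        nonlocal state
        while not subgoals <= state:
            if len(plan) > limit:
                raise RuntimeError("gave up: plan longer than limit")
            g = sorted(subgoals - state)[0]                 # an unsatisfied goal
            a = next(a for a in actions if g in a["add"])   # an action adding g
            achieve(a["pre"])                 # make a executable, then...
            state = (state - a["del"]) | a["add"]           # ...execute it
            plan.append(a["name"])

    achieve(goals)
    return plan
```

On a simple chain of subgoals (a1 achieves a2's precondition, which achieves a3's), this returns the plan [a1, a2, a3].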

17 Quick Review of Blocks World
unstack(x,y)
  Pre: on(x,y), clear(x), handempty
  Eff: ~on(x,y), ~clear(x), ~handempty, holding(x), clear(y)
stack(x,y)
  Pre: holding(x), clear(y)
  Eff: ~holding(x), ~clear(y), on(x,y), clear(x), handempty
pickup(x)
  Pre: ontable(x), clear(x), handempty
  Eff: ~ontable(x), ~clear(x), ~handempty, holding(x)
putdown(x)
  Pre: holding(x)
  Eff: ~holding(x), ontable(x), clear(x), handempty
[Figure: the states reachable among three blocks a, b, c]

18 The Sussman Anomaly
[Figure: initial state (c on a; a and b on the table) and goal (a on b, b on c)]
- On this problem, STRIPS can't produce an irredundant solution
  - Try it and see

19 The Register Assignment Problem
- State-variable formulation:
  Initial state: {value(r1)=3, value(r2)=5, value(r3)=0}
  Goal: {value(r1)=5, value(r2)=3}
  Operator: assign(r,v,r',v')
    precond: value(r)=v, value(r')=v'
    effects: value(r)=v'
- STRIPS cannot solve this problem at all

20 How to Fix?
- Several ways:
  - Do something other than state-space search (e.g., Chapters 5–8)
  - Use forward or backward state-space search, with domain-specific knowledge to prune the search space
    - Can solve both problems quite easily this way
    - Example: block stacking using forward search

21 Domain-Specific Knowledge
- A blocks-world planning problem P = (O, s0, g) is solvable if s0 and g satisfy some simple consistency conditions:
  - g should not mention any blocks not mentioned in s0
  - a block cannot be on two other blocks at once
  - etc.
  These conditions can be checked in time O(n log n)
- If P is solvable, we can easily construct a solution of length O(2m), where m is the number of blocks: move all blocks to the table, then build up the stacks from the bottom
  - Can do this in time O(n)
- With additional domain-specific knowledge we can do even better …
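The "all blocks to the table, then build the stacks from the bottom" construction can be sketched directly. Below, a state maps each block to what it sits on ("table" or another block) and a goal is a partial map of the same kind; this representation is an assumption made for illustration. The resulting plan has at most 2m moves.

```python
def naive_solution(s, g):
    """Move every stacked block to the table (top-down), then build the
    goal stacks bottom-up. Plan length is at most 2m for m blocks."""
    s, plan = dict(s), []
    # Phase 1: clear everything off onto the table, top-down.
    while any(below != "table" for below in s.values()):
        x = next(x for x in s
                 if s[x] != "table" and x not in s.values())  # clear, stacked
        s[x] = "table"
        plan.append(("move", x, "table"))
    # Phase 2: place each block once its goal destination is finished.
    def done(x):
        return x not in g or (s[x] == g[x]
                              and (g[x] == "table" or done(g[x])))
    while not all(done(x) for x in s):
        x = next(x for x in s
                 if not done(x) and (g[x] == "table" or done(g[x])))
        s[x] = g[x]
        plan.append(("move", x, g[x]))
    return plan
```

On the Sussman anomaly this produces the three-move plan c → table, b → c, a → b, which happens to be optimal; in general the construction only guarantees O(2m) length, not optimality.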

22 Additional Domain-Specific Knowledge
- A block x needs to be moved if any of the following is true:
  - s contains ontable(x) and g contains on(x,y)
  - s contains on(x,y) and g contains ontable(x)
  - s contains on(x,y) and g contains on(x,z) for some y ≠ z
  - s contains on(x,y) and y needs to be moved
[Figure: a five-block example (blocks a–e) showing an initial state and a goal]

23 Domain-Specific Algorithm
loop
  if there is a clear block x such that x needs to be moved
      and x can be moved to a place where it won't need to be moved
    then move x to that place
  else if there is a clear block x such that x needs to be moved
    then move x to the table
  else if the goal is satisfied
    then return the plan
  else return failure
repeat
[Figure: the same five-block initial state and goal as on the previous slide]
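This loop, together with the "needs to be moved" conditions from the previous slide, can be sketched in Python. As in the sketch above, states and goals map each block to what it sits on ("table" or another block); the representation and helper names are illustrative, not the book's.

```python
def needs_to_move(x, s, g):
    """The four conditions from the previous slide, collapsed into two:
    x's position disagrees with the goal, or x sits on a block that
    itself needs to be moved."""
    if x in g and s[x] != g[x]:
        return True
    return s[x] != "table" and needs_to_move(s[x], s, g)

def block_stack(s, g):
    """The loop on this slide: prefer a constructive move (to a place
    where x won't need to move again); otherwise move a misplaced clear
    block to the table; stop when the goal holds."""
    s, plan = dict(s), []
    while True:
        if all(s[x] == y for x, y in g.items()):
            return plan                                   # goal satisfied
        clear = [x for x in s if x not in s.values()]     # nothing on x
        movers = [x for x in clear if needs_to_move(x, s, g)]
        for x in movers:                                  # constructive move?
            dest = g.get(x)
            if dest == "table" or (dest in clear
                                   and not needs_to_move(dest, s, g)):
                break
        else:
            if not movers:
                return None                               # failure
            x, dest = movers[0], "table"                  # fall back to table
        s[x] = dest
        plan.append(("move", x, dest))
```

On the Sussman anomaly it finds the optimal plan c → table, b → c, a → b: the first iteration has no constructive move, so c goes to the table, after which every remaining move is constructive.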

24 Easily Solves the Sussman Anomaly
(same algorithm as on the previous slide)
[Figure: the Sussman anomaly's initial state (c on a; a and b on the table) and goal (a on b, b on c)]

25 Properties
- The block-stacking algorithm is sound, complete, and guaranteed to terminate
- Runs in time O(n³)
  - Can be modified to run in time O(n)
- Often finds optimal (shortest) solutions
- But sometimes only near-optimal ones (Exercise 4.22 in the book)
  - Recall that PLAN LENGTH is NP-complete