Classical Planning via State-space search COMP3431 Malcolm Ryan.


What is planning?
Planning is the AI approach to control: deliberation about action.
Key ideas:
– We have a model of the world
– The model describes states and actions
– Give the planner a goal and it outputs a plan
– Aim for domain independence
Planning is search.

Some History
1961 GPS, Newell and Simon: state-space search, means-ends analysis
1971 STRIPS, Fikes and Nilsson: introduced the STRIPS notation for actions
1975 NOAH (Sacerdoti) and NONLIN (Tate): the first plan-space search planners
1989 ADL, Pednault: an extension of the STRIPS action notation

Further History
1991 SNLP, Soderland and Weld (based on McAllester and Rosenblitt): plan-space search made easy
1995 SATPlan, Kautz and Selman: planning as a satisfiability problem
1995 GraphPlan, Blum and Furst: a return to state-space search, using planning graphs
1998 PDDL, McDermott et al.: extending ADL to include… ?

Classical planning
“Classical planning” is the name given to early planning systems (before about 1995). Most of these systems are based on Fikes and Nilsson’s STRIPS notation for actions. It includes both “state-space” and “plan-space” planning algorithms.

The Model
Planning is performed based on a given model of the world. A model Σ includes:
– A set of states, S
– A set of actions, A
– A transition function, γ : S × A → S
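To make the abstract triple (S, A, γ) concrete, here is a minimal toy sketch in Python (not from the lecture); the state and action labels, and the dictionary encoding of γ, are invented purely for illustration:
S = {"s0", "s1", "s2"}                     # states (made-up labels)
A = {"a1", "a2"}                           # actions (made-up labels)
gamma = {                                  # gamma : S x A -> S, as a lookup table
    ("s0", "a1"): "s1",
    ("s1", "a2"): "s2",
}
def step(state, action):
    """Apply gamma; returns None where no transition is defined."""
    return gamma.get((state, action))
print(step("s0", "a1"))                    # -> s1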

Restrictions on the Model
1. S is finite
2. Σ is fully observable
3. Σ is deterministic
4. Σ is static (no external events)
5. S has a factored representation
6. Goals are restricted to reachability
7. Plans are ordered sequences of actions
8. Actions have no duration
9. Planning is done offline

Example: Blocks World
(Diagram: red, blue and green blocks on a table.)

Example: Blocks World
S = the set of all different configurations of the blocks
A = the set of “move” actions
γ describes the outcomes of actions, e.g. move(red, blue, green)

States, actions and goals
States, actions and goals are described in the language of symbolic logic. Predicates denote particular features of the world, e.g. in the blocks world:
– on(block1, block2)
– on_table(block)
– clear(block)

Representing States
States are described by conjunctions of ground predicates (possibly negated), e.g.:
on(blue, red) ∧ ¬on(green, red)
The closed world assumption (CWA) is employed to remove negative literals:
on(blue, red)
The state description is complete.

Representing Goals
The goal is the specification of the task. A goal is usually a conjunction of predicates:
on(red, green) ∧ on_table(green)
The CWA does not apply, so the above goal could be satisfied by:
on(red, green) ∧ on_table(green) ∧ on(blue, red) ∧ clear(blue) ∧ …
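A minimal sketch of this representation in Python, assuming the set-based view of states under the CWA (the string atoms and the satisfies helper are illustrative, not the course's code):
state = frozenset({
    "on(red,blue)", "on_table(blue)", "clear(red)",
    "on_table(green)", "clear(green)",
})
goal = frozenset({"on(red,green)", "on_table(green)"})
def satisfies(state, goal):
    """Under the CWA a goal of positive atoms holds iff it is a subset of the state."""
    return goal <= state
print(satisfies(state, goal))              # False: on(red,green) does not hold yet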

Representing Actions
Actions are described in terms of preconditions and effects. Preconditions are predicates that must be true before the action can be applied. Effects are predicates that are made true (or false) after the action has executed. Sets of similar actions can be expressed as a schema.

STRIPS operators
An early but still widely used form of action description is the “STRIPS operator”. It has three parts:
– Precondition: a conjunction of predicates
– Add-list: the set of predicates made true
– Delete-list: the set of predicates made false

Blocks World Action Schema
move(block, from, to)
Pre: on(block, from), clear(block), clear(to)
Add: on(block, to), clear(from)
Del: on(block, from), clear(to)
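One possible rendering of a ground STRIPS operator as a small Python data structure; the Action class and string atoms are assumptions for illustration, not a real PDDL encoding:
from dataclasses import dataclass
@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset       # predicates that must hold before execution
    add: frozenset       # predicates made true
    delete: frozenset    # predicates made false
# One ground instance of move(block, from, to):
move_red_blue_green = Action(
    name="move(red,blue,green)",
    pre=frozenset({"on(red,blue)", "clear(red)", "clear(green)"}),
    add=frozenset({"on(red,green)", "clear(blue)"}),
    delete=frozenset({"on(red,blue)", "clear(green)"}),
)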

Blocks World Actions
Note that this action schema defines many actions:
move(red, blue, green)
move(red, green, blue)
etc…
We also need to define:
move_to_table(block, from)
move_from_table(block, to)

Representing Plans
A plan is simply a sequence of actions, e.g.:
π = move_from_table(red, blue), move(red, blue, green), move_to_table(red, green)
We require that every action in the sequence is applicable, i.e. its precondition is true before it is executed.

Reasoning with STRIPS
An action a is applicable in state s if its precondition is satisfied, i.e.:
pre⁺(a) ⊆ s
pre⁻(a) ∩ s = ∅
The result of executing a in s is given by:
γ(s, a) = (s – del(a)) ∪ add(a)
This is called progressing s through a.
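A sketch of applicability and progression matching these definitions, assuming the frozenset states and a simple Action tuple as in the earlier sketches (negative preconditions are omitted for simplicity); the example reproduces the worked progression that follows:
from collections import namedtuple
Action = namedtuple("Action", "name pre add delete")
def applicable(state, action):
    """pre(a) ⊆ s (positive preconditions only)."""
    return action.pre <= state
def progress(state, action):
    """gamma(s, a) = (s - del(a)) ∪ add(a); assumes applicability has been checked."""
    return (state - action.delete) | action.add
s = frozenset({"on(red,blue)", "on_table(blue)", "clear(red)",
               "on_table(green)", "clear(green)"})
a = Action("move(red,blue,green)",
           pre=frozenset({"on(red,blue)", "clear(red)", "clear(green)"}),
           add=frozenset({"on(red,green)", "clear(blue)"}),
           delete=frozenset({"on(red,blue)", "clear(green)"}))
if applicable(s, a):
    print(sorted(progress(s, a)))          # the progressed state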

Progression example
Taking the earlier example:
s = on(red, blue), on_table(blue), clear(red), on_table(green), clear(green)
a = move(red, blue, green)

Progression example
1. Check the action is applicable: on(red, blue), clear(red) and clear(green) all hold in s.
2. Delete the predicates in the delete-list, on(red, blue) and clear(green), leaving:
on_table(blue), clear(red), on_table(green)
3. Add the predicates in the add-list, giving:
on_table(blue), clear(red), on_table(green), on(red, green), clear(blue)

Progression example 2
Consider instead the action a = move_from_table(blue, green). This has precondition:
pre(a) = on_table(blue), clear(blue), clear(green)
This action cannot be executed, as clear(blue) is not in s, i.e. it is not applicable.

Reasoning with STRIPS
We can also regress states. If we want to achieve goal g using action a, what needs to be true beforehand?
An action a is relevant for g if:
g ∩ add(a) ≠ ∅
g ∩ del(a) = ∅
The result of regressing g through a is:
γ⁻¹(g, a) = (g – add(a)) ∪ pre(a)
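The symmetric sketch for relevance and regression, under the same assumptions (frozenset goals, simple Action tuples, positive goals only); the printed set matches the regression example that follows:
from collections import namedtuple
Action = namedtuple("Action", "name pre add delete")
def relevant(goal, action):
    """a is relevant for g: it adds some goal atom and deletes none."""
    return bool(goal & action.add) and not (goal & action.delete)
def regress(goal, action):
    """gamma^-1(g, a) = (g - add(a)) ∪ pre(a)."""
    return (goal - action.add) | action.pre
g = frozenset({"on(red,green)", "on_table(blue)"})
a = Action("move(red,blue,green)",
           pre=frozenset({"on(red,blue)", "clear(red)", "clear(green)"}),
           add=frozenset({"on(red,green)", "clear(blue)"}),
           delete=frozenset({"on(red,blue)", "clear(green)"}))
if relevant(g, a):
    print(sorted(regress(g, a)))           # the regressed goal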

Regression Example
Take the goal:
g = on(red, green), on_table(blue)
Regress through the action:
a = move(red, blue, green)

Regression Example
1. Check the action is relevant:
g ∩ add(a) = { on(red, green) } ≠ ∅
g ∩ del(a) = ∅
2. Remove the predicates in the add-list, on(red, green), leaving:
on_table(blue)
3. Add the preconditions, giving:
on_table(blue), on(red, blue), clear(red), clear(green)

Regression example 2
Consider instead the action a = move_to_table(red, blue). This has effects:
add(a) = on_table(red), clear(blue)
This action is not relevant, as it does not achieve any of the goal predicates, i.e.:
g ∩ add(a) = ∅

Regression Example 3
Consider instead the goal g = clear(blue), clear(green). Now a = move(red, blue, green) achieves clear(blue) but is not relevant, as it conflicts with the goal:
g ∩ del(a) = { clear(green) } ≠ ∅

Planning as state-space search
Imagine a directed graph in which nodes represent states and edges represent actions. An edge joins two nodes if there is an action that takes you from one state to the other.

Graph of state space
(Diagram: the state-space graph, with the START and GOAL states marked.)

Forward/Backward chaining
Planning can be done as forward or backward chaining. Forward chaining starts at the initial state and searches for a path to the goal using progression. Backward chaining starts at the goal and searches for a path to the initial state using regression.

Forward Search
Forward-search(s, g)
  if s satisfies g then return the empty plan
  applicable = { a | a is applicable in s }
  if applicable = ∅ then fail
  choose an action a ∈ applicable
  s′ = γ(s, a)
  π′ = Forward-search(s′, g)
  return a.π′
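A runnable Python rendering of this pseudocode, with the nondeterministic choice realised as backtracking over the applicable actions; the depth bound is an addition (not in the pseudocode) so the naive search cannot loop forever, and the set-based state/Action representation follows the earlier sketches:
from collections import namedtuple
Action = namedtuple("Action", "name pre add delete")
def progress(s, a):
    return (s - a.delete) | a.add
def forward_search(s, goal, actions, bound=10):
    if goal <= s:                          # s satisfies g
        return []                          # the empty plan
    if bound == 0:
        return None                        # fail on this branch
    for a in actions:                      # backtrack over the applicable actions
        if a.pre <= s:
            plan = forward_search(progress(s, a), goal, actions, bound - 1)
            if plan is not None:
                return [a.name] + plan     # a . pi'
    return None                            # fail
Called with the ground blocks-world actions, forward_search returns a list of action names, or None if no plan is found within the bound.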

Backward Search
Backward-search(s, g)
  if s satisfies g then return the empty plan
  relevant = { a | a is relevant to g }
  if relevant = ∅ then fail
  choose an action a ∈ relevant
  g′ = γ⁻¹(g, a)
  π′ = Backward-search(s, g′)
  return π′.a
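A matching sketch of backward search, regressing the goal until the initial state satisfies it (same assumptions and the same added depth bound as the forward-search sketch):
from collections import namedtuple
Action = namedtuple("Action", "name pre add delete")
def regress(g, a):
    return (g - a.add) | a.pre
def backward_search(s0, g, actions, bound=10):
    if g <= s0:                            # s0 satisfies g
        return []
    if bound == 0:
        return None
    for a in actions:                      # relevant: adds a goal atom, deletes none
        if (g & a.add) and not (g & a.delete):
            plan = backward_search(s0, regress(g, a), actions, bound - 1)
            if plan is not None:
                return plan + [a.name]     # pi' . a
    return None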

Pruning Search
As in any search problem, an important element is to prune the search. Both of these algorithms have the potential to waste time exploring loops. A record should be kept of already-visited states, and actions that return to these states should be pruned. Backward search can also produce inconsistent goals (these are usually not pruned).
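One way to realise this pruning is a breadth-first forward search that records visited states and never expands a state twice (a sketch under the same representation assumptions); breadth-first order also returns a shortest plan:
from collections import deque, namedtuple
Action = namedtuple("Action", "name pre add delete")
def progress(s, a):
    return (s - a.delete) | a.add
def forward_search_pruned(s0, goal, actions):
    frontier = deque([(s0, [])])
    visited = {s0}                         # states already generated
    while frontier:
        s, plan = frontier.popleft()
        if goal <= s:
            return plan
        for a in actions:
            if a.pre <= s:
                s2 = progress(s, a)
                if s2 not in visited:      # prune actions that revisit a state
                    visited.add(s2)
                    frontier.append((s2, plan + [a.name]))
    return None                            # no plan exists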

Instantiating Schema
When using action schemas it is often more efficient to instantiate schema variables on the fly, by unification, rather than generating and testing all ground instances.
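A minimal sketch of the idea: match the schema's precondition atoms against the state one at a time, accumulating variable bindings, instead of enumerating every ground instance up front. Atoms are tuples, variables are strings starting with "?", and all names here are illustrative:
def match_atom(pattern, fact, binding):
    """Try to unify one precondition atom with one state fact."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    b = dict(binding)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None                # clashes with an earlier binding
            b[p] = f
        elif p != f:
            return None
    return b
def match_preconditions(pres, state, binding=None):
    """Yield every binding under which all precondition atoms hold in the state."""
    binding = binding or {}
    if not pres:
        yield binding
        return
    first, rest = pres[0], pres[1:]
    for fact in state:
        b = match_atom(first, fact, binding)
        if b is not None:
            yield from match_preconditions(rest, state, b)
state = {("on", "red", "blue"), ("clear", "red"), ("clear", "green"),
         ("on_table", "blue"), ("on_table", "green")}
move_pre = [("on", "?b", "?from"), ("clear", "?b"), ("clear", "?to")]
for b in match_preconditions(move_pre, state):
    print(b)                               # e.g. {'?b': 'red', '?from': 'blue', '?to': 'green'}
A real planner would also enforce inequality constraints (for example, the block being moved must differ from its destination), which this toy matcher does not.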

Planning as Search
Forward-search, backward-search and most other planning algorithms can be described with a similar structure:
1. Generate possible branches
2. Prune those that are no good
3. Select one remaining branch
4. Recurse

Bi-directional search
We can apply bidirectional search to state-space planning, doing both progression from the start state and regression from the goal. A typical heuristic:
– try both, then repeat whichever expansion took less time (as it is likely to again)

Heuristics for planning
“Cost to goal” = the number of plan steps. A relaxed measure: the number of plan steps disregarding delete effects.
h₁(s, g) = max { h₁(s, p) | p ∈ g }
h₁(s, p) = 0, if p ∈ s
h₁(s, p) = ∞, if p is not in add(a) for any a
h₁(s, p) = min { 1 + h₁(s, pre(a)) | p ∈ add(a) }, otherwise
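A sketch of how h₁ might be computed, by iterating the equations above to a fixed point over the delete-relaxed problem (same frozenset/Action assumptions as the earlier sketches; illustrative only):
import math
from collections import namedtuple
Action = namedtuple("Action", "name pre add delete")
def h1(state, goal, actions):
    cost = {p: 0 for p in state}           # h1(s, p) = 0 if p ∈ s
    changed = True
    while changed:                         # relax until no estimate improves
        changed = False
        for a in actions:
            if all(p in cost for p in a.pre):
                c = 1 + max((cost[p] for p in a.pre), default=0)
                for q in a.add:
                    if c < cost.get(q, math.inf):
                        cost[q] = c
                        changed = True
    # h1(s, g) = max over the goal atoms; unreachable atoms give infinity
    return max((cost.get(p, math.inf) for p in goal), default=0)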

Heuristics for planning
h₁ can be extended to h₂, h₃, … by considering pairs of propositions, triples, etc.
– All hₖ are admissible
– hₖ₊₁ dominates hₖ
– Computing hₖ is O(nᵏ), with n propositions
– Generally k ≤ 2 is used