CPS 570: Artificial Intelligence Planning

CPS 570: Artificial Intelligence Planning Instructor: Vincent Conitzer

Planning

We studied how to take actions in the world (search). We studied how to represent objects, relations, etc. (logic). Now we will combine the two!

State of the world (STRIPS language)

State of the world = conjunction of positive, ground, function-free literals, e.g.:
At(Home) AND IsAt(Umbrella, Home) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND HandEmpty AND Dry

Not OK as part of the state:
- NOT(At(Home)) (negative)
- At(x) (not ground)
- At(Bedroom(Home)) (uses the function Bedroom)

Any literal not mentioned is assumed false. Other languages make different assumptions, e.g., negative literals may be part of the state, or unmentioned literals may be unknown.
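To make this concrete, here is a minimal sketch (my own encoding, not from the slides) of a STRIPS state as a Python set of positive, ground, function-free literal strings, with the closed-world assumption that anything not listed is false:

    # A STRIPS state: a set of positive, ground, function-free literals.
    state = frozenset({
        "At(Home)", "IsAt(Umbrella, Home)", "CanBeCarried(Umbrella)",
        "IsUmbrella(Umbrella)", "HandEmpty", "Dry",
    })

    def holds(literal, state):
        # Closed-world assumption: a literal holds iff it is listed in the state.
        return literal in state

    print(holds("At(Home)", state))   # True
    print(holds("At(Work)", state))   # False: not mentioned, so assumed false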

An action: TakeObject

TakeObject(location, x)
Preconditions: HandEmpty, CanBeCarried(x), At(location), IsAt(x, location)
Effects ("NOT something" means that that something should be removed from the state): Holding(x), NOT(HandEmpty), NOT(IsAt(x, location))

Another action

WalkWithUmbrella(location1, location2, umbr)
Preconditions: At(location1), Holding(umbr), IsUmbrella(umbr)
Effects: At(location2), NOT(At(location1))

Yet another action

WalkWithoutUmbrella(location1, location2)
Preconditions: At(location1)
Effects: At(location2), NOT(At(location1)), NOT(Dry)
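As a rough illustration of how such action schemas can be encoded and grounded (a sketch in my own notation; the helper function and variable names are mine), each schema maps parameter values to a (preconditions, add list, delete list) triple of ground literals, where every NOT(...) effect goes into the delete list:

    from itertools import product

    def walk_with_umbrella(loc1, loc2, umbr):
        # Returns (preconditions, add list, delete list) for one ground instance.
        pre = {f"At({loc1})", f"Holding({umbr})", f"IsUmbrella({umbr})"}
        add = {f"At({loc2})"}
        delete = {f"At({loc1})"}
        return pre, add, delete

    locations = ["Home", "Work"]
    umbrellas = ["Umbrella"]

    # Brute-force grounding: one action instance per choice of parameter values.
    for loc1, loc2, umbr in product(locations, locations, umbrellas):
        if loc1 != loc2:
            print(f"WalkWithUmbrella({loc1}, {loc2}, {umbr}):",
                  walk_with_umbrella(loc1, loc2, umbr))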

A goal and a plan

Goal: At(Work) AND Dry
Recall the initial state: At(Home) AND IsAt(Umbrella, Home) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND HandEmpty AND Dry

TakeObject(Home, Umbrella) gives:
At(Home) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND Dry AND Holding(Umbrella)

WalkWithUmbrella(Home, Work, Umbrella) then gives:
At(Work) AND CanBeCarried(Umbrella) AND IsUmbrella(Umbrella) AND Dry AND Holding(Umbrella)
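This plan can be checked mechanically: an action is applicable when its preconditions are a subset of the current state, and applying it removes the delete list and adds the add list. A minimal, self-contained sketch (my own encoding of the two ground actions used in the plan):

    actions = {
        "TakeObject(Home, Umbrella)": (
            {"HandEmpty", "CanBeCarried(Umbrella)", "At(Home)", "IsAt(Umbrella, Home)"},  # preconditions
            {"Holding(Umbrella)"},                                                        # add list
            {"HandEmpty", "IsAt(Umbrella, Home)"},                                        # delete list
        ),
        "WalkWithUmbrella(Home, Work, Umbrella)": (
            {"At(Home)", "Holding(Umbrella)", "IsUmbrella(Umbrella)"},
            {"At(Work)"},
            {"At(Home)"},
        ),
    }

    state = {"At(Home)", "IsAt(Umbrella, Home)", "CanBeCarried(Umbrella)",
             "IsUmbrella(Umbrella)", "HandEmpty", "Dry"}
    goal = {"At(Work)", "Dry"}

    for name in ["TakeObject(Home, Umbrella)", "WalkWithUmbrella(Home, Work, Umbrella)"]:
        pre, add, delete = actions[name]
        assert pre <= state, f"precondition of {name} not satisfied"
        state = (state - delete) | add
        print(name, "->", sorted(state))

    print("Goal achieved:", goal <= state)   # True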

Planning to write a paper

Suppose your goal is to be a co-author on an AI paper with both theorems and experiments, within a year.

ProveTheorems(x)
Preconditions: Knows(x,AI), Knows(x,Math), Idea
Effects: Theorems, Contributed(x)

LearnAbout(x,y)
Preconditions: HasTimeForStudy(x)
Effects: Knows(x,y), NOT(HasTimeForStudy(x))

PerformExperiments(x)
Preconditions: Knows(x,AI), Knows(x,Coding), Idea
Effects: Experiments, Contributed(x)

HaveNewIdea(x)
Preconditions: Knows(x,AI), Creative(x)
Effects: Idea, Contributed(x)

WritePaper(x)
Preconditions: Knows(x,AI), Knows(x,Writing), Idea, Theorems, Experiments
Effects: Paper, Contributed(x)

FindExistingOpenProblem(x)
Preconditions: Knows(x,AI)
Effects: Idea

Goal: Paper AND Contributed(You)

Name a few things that are missing/unrealistic…

Some start states

Start1: HasTimeForStudy(You) AND Knows(You,Math) AND Knows(You,Coding) AND Knows(You,Writing)

Start2: HasTimeForStudy(You) AND Creative(You) AND Knows(Advisor,AI) AND Knows(Advisor,Math) AND Knows(Advisor,Coding) AND Knows(Advisor,Writing) (Good luck with that plan…)

Start3: Knows(You,AI) AND Knows(You,Coding) AND Knows(OfficeMate,Math) AND HasTimeForStudy(OfficeMate) AND Knows(Advisor,AI) AND Knows(Advisor,Writing)

Start4: HasTimeForStudy(You) AND Knows(Advisor,AI) AND Knows(Advisor,Math) AND Knows(Advisor,Coding) AND Knows(Advisor,Writing)

Forward state-space search (progression planning)

Successors: all states that can be reached with an action whose preconditions are satisfied in the current state.

[Diagram: the search tree from the initial state At(Home), IsAt(Umbrella, Home), CanBeCarried(Umbrella), IsUmbrella(Umbrella), HandEmpty, Dry. The branch TakeObject(Home, Umbrella) followed by WalkWithUmbrella(Home, Work, Umbrella) reaches the goal. Other branches include WalkWithoutUmbrella(Home, Work), which loses Dry, then WalkWithoutUmbrella(Work, Home), and even WalkWithoutUmbrella(Home, Umbrella) (!)]
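A minimal sketch of progression search over this example (my own encoding; the slides do not specify a search algorithm, so breadth-first search is used here, and only the few ground action instances relevant to the example are listed):

    from collections import deque

    ACTIONS = {
        "TakeObject(Home, Umbrella)": (
            frozenset({"HandEmpty", "CanBeCarried(Umbrella)", "At(Home)", "IsAt(Umbrella, Home)"}),
            frozenset({"Holding(Umbrella)"}),
            frozenset({"HandEmpty", "IsAt(Umbrella, Home)"}),
        ),
        "WalkWithUmbrella(Home, Work, Umbrella)": (
            frozenset({"At(Home)", "Holding(Umbrella)", "IsUmbrella(Umbrella)"}),
            frozenset({"At(Work)"}),
            frozenset({"At(Home)"}),
        ),
        "WalkWithoutUmbrella(Home, Work)": (
            frozenset({"At(Home)"}),
            frozenset({"At(Work)"}),
            frozenset({"At(Home)", "Dry"}),
        ),
    }

    def forward_search(init, goal):
        # Breadth-first search over world states reached by applicable actions.
        start = frozenset(init)
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, plan = frontier.popleft()
            if goal <= state:
                return plan
            for name, (pre, add, delete) in ACTIONS.items():
                if pre <= state:                    # preconditions satisfied in current state
                    succ = (state - delete) | add   # successor state
                    if succ not in visited:
                        visited.add(succ)
                        frontier.append((succ, plan + [name]))
        return None

    init = {"At(Home)", "IsAt(Umbrella, Home)", "CanBeCarried(Umbrella)",
            "IsUmbrella(Umbrella)", "HandEmpty", "Dry"}
    print(forward_search(init, frozenset({"At(Work)", "Dry"})))
    # ['TakeObject(Home, Umbrella)', 'WalkWithUmbrella(Home, Work, Umbrella)']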

Backward state-space search (regression planning)

Predecessors: for every action that accomplishes one of the goal literals (and does not undo another goal literal), remove that literal and add all of the action's preconditions.

[Diagram: regressing the goal At(Work) AND Dry through WalkWithUmbrella(location1, Work, umbr) yields the subgoal At(location1), Holding(umbr), IsUmbrella(umbr), Dry; regressing that through TakeObject(location2, umbr) yields At(location1), At(location2), IsAt(umbr, location2), CanBeCarried(umbr), IsUmbrella(umbr), HandEmpty, Dry.]

This is accomplished in the start state by substituting location1=location2=Home, umbr=Umbrella. WalkWithoutUmbrella can never be used, because it undoes Dry (this is good).
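A minimal sketch of one regression step (my own encoding): an action is relevant if it achieves at least one goal literal and deletes none of them, and the regressed subgoal replaces the achieved literals with the action's preconditions. The two walking actions below are shown already grounded with the slide's substitution:

    def regress(goal, pre, add, delete):
        # Regress a conjunction of goal literals through one ground action.
        if not (add & goal):       # must accomplish at least one goal literal
            return None
        if delete & goal:          # must not undo another goal literal
            return None
        return (goal - add) | pre  # drop achieved literals, add the preconditions

    goal = frozenset({"At(Work)", "Dry"})

    walk_with_umbrella = (frozenset({"At(Home)", "Holding(Umbrella)", "IsUmbrella(Umbrella)"}),
                          frozenset({"At(Work)"}),
                          frozenset({"At(Home)"}))
    walk_without_umbrella = (frozenset({"At(Home)"}),
                             frozenset({"At(Work)"}),
                             frozenset({"At(Home)", "Dry"}))

    print(regress(goal, *walk_with_umbrella))
    # subgoal containing At(Home), Holding(Umbrella), IsUmbrella(Umbrella), Dry
    print(regress(goal, *walk_without_umbrella))
    # None: it would undo Dry, so it is never considered (this is good)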

Heuristics for state-space search

Cost of a plan: (usually) the number of actions.

Heuristic 1: plan for each subgoal (literal) separately, and sum the costs of the plans. Does this ever underestimate? Overestimate?

Heuristic 2: solve a relaxed planning problem in which actions never delete literals (the empty-delete-list heuristic). Very effective, even though it requires solving an (easy) planning problem. Progression planners with the empty-delete-list heuristic perform well.
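A minimal sketch of the idea behind Heuristic 2 (my own simplification, not the exact heuristic used in planners such as FF): ignore the delete lists and greedily count action applications until all goal literals are reached. This greedy count can include actions the goal does not need, so it is only a rough stand-in for extracting an actual relaxed plan.

    def empty_delete_list_estimate(state, goal, actions):
        # Greedy relaxed-reachability estimate: delete lists are ignored,
        # so applying an action can only ever add literals.
        reached = set(state)
        cost = 0
        while not goal <= reached:
            progress = False
            for pre, add, _delete in actions:        # _delete is ignored (the relaxation)
                if pre <= reached and not add <= reached:
                    reached |= add
                    cost += 1
                    progress = True
            if not progress:
                return float("inf")                  # goal unreachable even in the relaxation
        return cost

    actions = [
        (frozenset({"HandEmpty", "CanBeCarried(Umbrella)", "At(Home)", "IsAt(Umbrella, Home)"}),
         frozenset({"Holding(Umbrella)"}), frozenset({"HandEmpty", "IsAt(Umbrella, Home)"})),
        (frozenset({"At(Home)", "Holding(Umbrella)", "IsUmbrella(Umbrella)"}),
         frozenset({"At(Work)"}), frozenset({"At(Home)"})),
        (frozenset({"At(Home)"}), frozenset({"At(Work)"}), frozenset({"At(Home)", "Dry"})),
    ]
    init = {"At(Home)", "IsAt(Umbrella, Home)", "CanBeCarried(Umbrella)",
            "IsUmbrella(Umbrella)", "HandEmpty", "Dry"}
    print(empty_delete_list_estimate(init, frozenset({"At(Work)", "Dry"}), actions))   # 2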

Blocks world

[Diagram: two stacks on the table, B on A and D on C.]

On(B, A), On(A, Table), On(D, C), On(C, Table), Clear(B), Clear(D)

Blocks world: Move action

Move(x,y,z)
Preconditions: On(x,y), Clear(x), Clear(z)
Effects: On(x,z), Clear(y), NOT(On(x,y)), NOT(Clear(z))

Blocks world: MoveToTable action

MoveToTable(x,y)
Preconditions: On(x,y), Clear(x)
Effects: On(x,Table), Clear(y), NOT(On(x,y))

Blocks world example

[Diagram: the same start configuration, B on A and D on C.]

Goal: On(A,B) AND Clear(A) AND On(C,D) AND Clear(C)

A plan: MoveToTable(B, A), MoveToTable(D, C), Move(C, Table, D), Move(A, Table, B)

This is really two separate problem instances.
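A quick mechanical check of this plan (my own encoding of the ground action instances; the stray Clear(Table) literal produced by instantiating Move with y = Table is spurious but harmless here):

    PLAN = [
        ("MoveToTable(B,A)",
         {"On(B,A)", "Clear(B)"},                    # preconditions
         {"On(B,Table)", "Clear(A)"},                # add list
         {"On(B,A)"}),                               # delete list
        ("MoveToTable(D,C)",
         {"On(D,C)", "Clear(D)"},
         {"On(D,Table)", "Clear(C)"},
         {"On(D,C)"}),
        ("Move(C,Table,D)",
         {"On(C,Table)", "Clear(C)", "Clear(D)"},
         {"On(C,D)", "Clear(Table)"},
         {"On(C,Table)", "Clear(D)"}),
        ("Move(A,Table,B)",
         {"On(A,Table)", "Clear(A)", "Clear(B)"},
         {"On(A,B)", "Clear(Table)"},
         {"On(A,Table)", "Clear(B)"}),
    ]

    state = {"On(B,A)", "On(A,Table)", "On(D,C)", "On(C,Table)", "Clear(B)", "Clear(D)"}
    goal = {"On(A,B)", "Clear(A)", "On(C,D)", "Clear(C)"}

    for name, pre, add, delete in PLAN:
        assert pre <= state, f"precondition of {name} not satisfied"
        state = (state - delete) | add

    print("Goal achieved:", goal <= state)   # True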

A partial-order plan

Goal: On(A,B) AND Clear(A) AND On(C,D) AND Clear(C)

[Diagram: a partial-order plan from Start to Finish with two independent chains: MoveToTable(B,A) followed by Move(A,Table,B), and MoveToTable(D,C) followed by Move(C,Table,D).]

Any total order on the actions consistent with this partial order will work.

A partial-order plan (with more detail)

[Diagram: the same partial-order plan annotated with preconditions and effects. Start supplies On(B,A), Clear(B), On(A, Table), On(C, Table), Clear(D), On(D,C). MoveToTable(B,A) needs On(B,A) and Clear(B); MoveToTable(D,C) needs On(D,C) and Clear(D). Move(A,Table,B) needs Clear(A), Clear(B), On(A, Table); Move(C,Table,D) needs On(C, Table), Clear(D), Clear(C). Finish needs On(A, B), Clear(A), Clear(C), On(C, D).]

Not everything decomposes into multiple problems: the Sussman anomaly

Goal: On(A,B) AND On(B,C)
Focusing on one of these two individually first does not work.
Optimal plan: MoveToTable(C,A), Move(B,Table,C), Move(A,Table,B)

An incorrect partial-order plan for the Sussman anomaly

[Diagram: Start supplies Clear(C), On(C, A), On(A, Table), On(B, Table), Clear(B). MoveToTable(C,A) needs Clear(C) and On(C, A); Move(B,Table,C) needs Clear(C), On(B, Table), Clear(B); Move(A,Table,B) needs Clear(A), On(A, Table), Clear(B); Finish needs On(A, B) and On(B, C).]

Move(B,Table,C) must be after MoveToTable(C,A), otherwise it will ruin Clear(C). Move(A,Table,B) must be after Move(B,Table,C), otherwise it will ruin Clear(B).

Corrected partial-order plan for the Sussman anomaly

[Diagram: the same plan with the ordering constraints added: MoveToTable(C,A) before Move(B,Table,C) before Move(A,Table,B).]

No more flexibility in the order remains, due to the protection of causal links.

Searching for a partial-order plan

[Diagram: Start supplies At(Home), IsAt(Umbrella,Home), CanBeCarried(Umbrella), IsUmbrella(Umbrella), HandEmpty, Dry. TakeObject(Home, Umbrella) needs At(Home), IsAt(Umbrella,Home), CanBeCarried(Umbrella), HandEmpty. WalkWithUmbrella(Home, Work, Umbrella) needs At(Home), Holding(Umbrella), IsUmbrella(Umbrella). Finish needs At(Work) and Dry. Trying to support At(Work) with WalkWithoutUmbrella(Home, Work) instead threatens the causal link for Dry: no way to resolve the conflict!]

Searching for partial-order plans

This is somewhat similar to constraint satisfaction. A search state is a partially completed partial-order plan (not to be confused with a state of the world): it contains actions, ordering constraints on actions, causal links, and some open preconditions.

Search works as follows:
- Choose one open precondition p.
- Consider all actions that achieve p (including ones already in the plan).
- For each such action, consider each way of resolving conflicts using ordering constraints.

Why do we need to consider only one open precondition (instead of all)? Is this true for backward state-space search?

It is tricky to resolve conflicts if we leave variables unbound, e.g., if we use WalkWithUmbrella(location1, Work, umbr) without specifying what location1 or umbr is.
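A minimal sketch of this search state (the class and field names are my own, loosely following the description above), plus one refinement step for the Sussman-anomaly goal:

    from dataclasses import dataclass

    @dataclass
    class PartialPlan:
        actions: set              # action instances, e.g. {"Start", "Finish"}
        orderings: set            # pairs (a, b): a must come before b
        causal_links: set         # triples (producer, literal, consumer)
        open_preconditions: set   # pairs (literal, consumer) not yet supported

    plan = PartialPlan(
        actions={"Start", "Finish"},
        orderings={("Start", "Finish")},
        causal_links=set(),
        open_preconditions={("On(A,B)", "Finish"), ("On(B,C)", "Finish")},
    )

    # One refinement step: pick one open precondition, pick an action that
    # achieves it, add a causal link, and add ordering constraints to protect it.
    literal, consumer = "On(B,C)", "Finish"
    producer = "Move(B,Table,C)"
    plan.actions.add(producer)
    plan.orderings |= {("Start", producer), (producer, consumer)}
    plan.causal_links.add((producer, literal, consumer))
    plan.open_preconditions.remove((literal, consumer))
    plan.open_preconditions |= {("On(B,Table)", producer),
                                ("Clear(B)", producer),
                                ("Clear(C)", producer)}
    print(plan.open_preconditions)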

Planning graphs

Each level has literals that "could be true" at that level. Mutex (mutual exclusion) relations indicate incompatible actions/literals.

[Diagram: the first level of a planning graph for the Sussman-anomaly start state. The level-0 literals On(C, A), On(A, Table), Clear(C), On(B, Table), Clear(B) persist to level 1; the actions Move(C,A,B), MoveToTable(C,A), and Move(B,Table,C) add the level-1 literals Clear(A), On(C, B), On(C, Table), On(B, C). … continued on board]

Reasons for mutex relations…

… between actions:
- Inconsistent effects: one action negates an effect of the other.
- Interference: one action negates a precondition of the other.
- Competing needs: the actions have preconditions that are mutex.

… between literals:
- Inconsistent support: every pair of actions that can achieve these literals is mutex.
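A minimal sketch of the action-level mutex tests (function and variable names are my own), applied to two of the blocks-world actions from the planning-graph example above:

    from itertools import product

    def actions_mutex(a1, a2, literal_mutexes=frozenset()):
        # a1, a2: (name, preconditions, add list, delete list) of ground actions.
        _, pre1, add1, del1 = a1
        _, pre2, add2, del2 = a2
        if (add1 & del2) or (add2 & del1):      # inconsistent effects
            return True
        if (pre1 & del2) or (pre2 & del1):      # interference
            return True
        for p, q in product(pre1, pre2):        # competing needs
            if frozenset({p, q}) in literal_mutexes:
                return True
        return False

    move_C_A_B = ("Move(C,A,B)",
                  frozenset({"On(C,A)", "Clear(C)", "Clear(B)"}),
                  frozenset({"On(C,B)", "Clear(A)"}),
                  frozenset({"On(C,A)", "Clear(B)"}))
    movetotable_C_A = ("MoveToTable(C,A)",
                       frozenset({"On(C,A)", "Clear(C)"}),
                       frozenset({"On(C,Table)", "Clear(A)"}),
                       frozenset({"On(C,A)"}))

    print(actions_mutex(move_C_A_B, movetotable_C_A))
    # True: MoveToTable(C,A) deletes On(C,A), a precondition of Move(C,A,B) (interference)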

A problematic case for planning graphs

FeedWith(x, y)
Preconditions: Edible(y)
Effects: NOT(Edible(y)), Fed(x)

Start: Edible(Bread1), Edible(Bread2)
Goal: Fed(Person1), Fed(Person2), Fed(Person3)

Planning graph for feeding

[Diagram: level 0 has Edible(Bread1) and Edible(Bread2); the six actions FeedWith(Person1, Bread1), FeedWith(Person2, Bread1), FeedWith(Person3, Bread1), FeedWith(Person1, Bread2), FeedWith(Person2, Bread2), FeedWith(Person3, Bread2) produce the level-1 literals Fed(Person1), Fed(Person2), Fed(Person3), alongside the persisting Edible literals.]

Any two of these could simultaneously be true at time 1, so there are no mutex relations. We would really need 3-way mutex relations, but experimentally this is computationally not worthwhile.

Uses of planning graphs

If the goal literals do not all appear at a level (or have mutex relations among them), then we know we need another level. The converse does not hold.

Useful heuristic: the first time that all goal literals appear at a level, with no mutex relations.

Graphplan algorithm: once all goal literals appear, try to extract a solution from the graph. We can use CSP techniques by labeling each action as "in the plan" or "out of the plan". In case of failure, generate another level.
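A minimal sketch of the level-based idea (my own simplification: delete lists, and hence mutex relations, are ignored, so this can only underestimate the level computed from a real planning graph). Applied to the feeding example, it reports level 1 even though no real plan can feed three people with two breads, which is exactly why this is only a heuristic and not a sufficient condition:

    def first_level_with_goals(init, goal, actions, max_levels=100):
        # Grow the set of reachable literals level by level, ignoring delete lists.
        literals = set(init)
        for level in range(max_levels):
            if goal <= literals:
                return level
            new_literals = set(literals)
            for pre, add, _delete in actions:
                if pre <= literals:
                    new_literals |= add
            if new_literals == literals:
                return None        # the goals never all appear: no plan can exist
            literals = new_literals
        return None

    feeding_actions = [
        (frozenset({f"Edible(Bread{j})"}),      # preconditions
         frozenset({f"Fed(Person{i})"}),        # add list
         frozenset({f"Edible(Bread{j})"}))      # delete list (ignored here)
        for i in (1, 2, 3) for j in (1, 2)
    ]
    init = {"Edible(Bread1)", "Edible(Bread2)"}
    goal = frozenset({"Fed(Person1)", "Fed(Person2)", "Fed(Person3)"})
    print(first_level_with_goals(init, goal, feeding_actions))   # 1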

Example: the Fast-Forward (FF) planner

… with a towers of Hanoi example: https://fai.cs.uni-saarland.de/hoffmann/ff.html
See also the VHPOP planner: http://www.tempastic.org/vhpop/

… in the course directory:
./ff -o hanoi-domain.pddl -f hanoi-3.pddl

By the way, why is towers of Hanoi solvable with any number of disks?