An Introduction to Artificial Intelligence CE 40417


An Introduction to Artificial Intelligence CE 40417 Chapter 11 – Planning Ramin Halavati (halavati@ce.sharif.edu) In which we see how an agent can take advantage of the structure of a problem to construct complex plans of action.

What is planning? We have some operators, a current state, and a goal state. We want to know how to arrange the operators to reach the goal state from the current state.

Air Cargo Transfer Example What's in the domain: a set of airports (SFO, JFK, …), a set of cargoes (C1, C2, …), and some planes (P1, P2, …). State: planes and cargoes are at specific airports, and we want to change their positions. Actions: Load(Cargo, Plane, Airport), Fly(Plane, Airport1, Airport2), Unload(Cargo, Plane, Airport).

Blocks World Example Domain objects: a set of blocks and a table. States: blocks are stacked on each other and on the table, and we want to change their positions. Actions: PickUp(Block), PutDown(Block), Unstack(Block, Block), Stack(Block, Block).

Domain Definition Example 1 AIR CARGO TRANSPORT DOMAIN: Objects: SFO, JFK, C1, C2, P1, P2. Predicates: At(C1, SFO), In(C1, P2), Plane(P1), Cargo(C1). Actions: …

Domain Definition Ex.1 (cont.) Actions have a Name, Parameters, Preconditions, and Effects. LOAD(c, p, a) Prec.: At(c,a), At(p,a), Cargo(c), Plane(p), Airport(a). Effects: ~At(c,a), In(c,p).

Domain Definition Ex.1 (cont.) Actions: UNLOAD(c, p, a) Prec.: In(c,p), At(p,a), Cargo(c), Plane(p), Airport(a). Effects: At(c,a), ~In(c,p). FLY(p, a1, a2) Prec.: At(p,a1), Plane(p), Airport(a1), Airport(a2). Effects: ~At(p,a1), At(p,a2).
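The action schemas above can be sketched directly in code. This is a minimal illustrative Python sketch (not part of the lecture), assuming predicates are stored as tuples and a state is a set of such tuples, with the LOAD action grounded by hand:

```python
from dataclasses import dataclass

# Predicates are tuples like ("At", "C1", "SFO"); a state is a frozenset of them.

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset  # predicates that must hold before the action
    add: frozenset      # positive effects
    delete: frozenset   # negative (~) effects

def applicable(action, state):
    """An action is applicable iff all its preconditions hold in the state."""
    return action.precond <= state

def apply_action(action, state):
    """Successor state: (state - delete effects) + add effects."""
    return (state - action.delete) | action.add

# Grounded LOAD(C1, P1, SFO) from the schema above:
load = Action(
    name="Load(C1,P1,SFO)",
    precond=frozenset({("At", "C1", "SFO"), ("At", "P1", "SFO"),
                       ("Cargo", "C1"), ("Plane", "P1"), ("Airport", "SFO")}),
    add=frozenset({("In", "C1", "P1")}),
    delete=frozenset({("At", "C1", "SFO")}),
)

state = frozenset({("At", "C1", "SFO"), ("At", "P1", "SFO"),
                   ("Cargo", "C1"), ("Plane", "P1"), ("Airport", "SFO")})
new_state = apply_action(load, state)
# after loading, the cargo is in the plane and no longer at the airport
```

The same two functions work for UNLOAD and FLY once they are grounded the same way.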

Domain Definition Example 2 BLOCKS WORLD DOMAIN Objects: A, B, C, … (the blocks) & the ROBOT. Predicates: On( x , y ). OnTable( x ). Holding( x ). HandEmpty. Clear( x ). Block( x ).

Domain Definition Ex.2 (cont.) Actions: UnStack(x, y): Prec: On(x,y), HandEmpty, Clear(x). Effects: ~On(x,y), Holding(x), ~Clear(x), Clear(y). Stack(x, y): Prec: Holding(x), Clear(y). Effects: On(x,y), HandEmpty, ~Holding(x), Clear(x), ~Clear(y). NOTE: nothing here says what is a block and what is not.

Domain Definition Ex.2 (cont.) Actions: PickUp(x): Prec: HandEmpty, Clear(x), OnTable(x). Effects: Holding(x), ~Clear(x), ~OnTable(x). PutDown(x): Prec: Holding(x). Effects: OnTable(x), HandEmpty, Clear(x).

Problem Definition Ex.2 PROBLEM DEFINITION: Initial State: On(C,A), Clear(C), ~Clear(A), OnTable(A), Clear(B), OnTable(B), HandEmpty. Goal State: HandEmpty, Clear(A), On(A,B), On(B,C), OnTable(C).

Simplest Approach It's all about SEARCH. States: as described before. Next-state generator: find which actions are applicable and apply every one of them. Path cost: one for each action. Goal test: has the goal state been reached?

(Figure: the initial state and the goal configuration of blocks A, B, and C.)

Simplest Approach Progression (Forward Search): start from the initial state and move forward until you reach the goal state. NOTE: Backtracking is mandatory.
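Progression can be sketched as a plain breadth-first search over states. This is an illustrative sketch (not the lecture's code), assuming actions are tuples (name, preconditions, adds, deletes) over sets of ground literals written as strings, demonstrated on a two-block fragment of the blocks world:

```python
from collections import deque

def progression_search(init, goal, actions):
    """Breadth-first forward search from the initial state to a goal state."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    visited = {start}  # repeated-state check: this is what keeps the
    while frontier:    # reversible search space from looping forever
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, pre, add, dele in actions:
            if pre <= state:
                nxt = (state - dele) | add
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# Two grounded blocks-world actions (goal: A on B):
actions = [
    ("PickUp(A)",
     {"Clear(A)", "HandEmpty", "OnTable(A)"},
     {"Holding(A)"},
     {"Clear(A)", "HandEmpty", "OnTable(A)"}),
    ("Stack(A,B)",
     {"Holding(A)", "Clear(B)"},
     {"On(A,B)", "HandEmpty", "Clear(A)"},
     {"Holding(A)", "Clear(B)"}),
]
init = {"Clear(A)", "Clear(B)", "HandEmpty", "OnTable(A)", "OnTable(B)"}
plan = progression_search(init, {"On(A,B)"}, actions)
# plan == ["PickUp(A)", "Stack(A,B)"]
```

Breadth-first search never needs explicit backtracking, but the `visited` set plays the corresponding role of pruning repeated states.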

Simplest Approach Regression (Backward Search): put the goal state's predicates in an agenda. Recursively: fetch an item from the agenda, find an action that satisfies it, remove all of the action's effects from the agenda, and add all of its preconditions to the agenda.

Regression Example On(A,B), On(B,C), OnTable(C) 1. Pick goal: On(A,B). 2. Choose action: Stack(A,B). 3. Add the action's preconditions to the agenda and remove its effects from it. On(B,C), OnTable(C), Holding(A), Clear(B). 1. Pick goal: Holding(A). 2. Choose action: PickUp(A). 3. ... On(B,C), OnTable(C), Clear(B), HandEmpty, OnTable(A) ...
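One regression step can be sketched as a pure function on goal sets. This illustrative sketch (assumed representation: the same (name, preconditions, adds, deletes) tuples as before) reproduces the first step of the example above:

```python
def regress(goals, action):
    """Return the regressed goal set, or None if the action is unusable.

    Regressing through an action removes the goals it achieves and adds
    its preconditions. An action that deletes a goal, or achieves none
    of them, cannot be the last step toward this goal set.
    """
    name, pre, add, dele = action
    if goals & dele:
        return None          # the action would undo one of the goals
    if not (goals & add):
        return None          # the action must achieve at least one goal
    return (goals - add) | pre

stack_ab = ("Stack(A,B)",
            frozenset({"Holding(A)", "Clear(B)"}),            # preconditions
            frozenset({"On(A,B)", "HandEmpty", "Clear(A)"}),  # add effects
            frozenset({"Holding(A)", "Clear(B)"}))            # delete effects

goals = frozenset({"On(A,B)", "On(B,C)", "OnTable(C)"})
new_goals = regress(goals, stack_ab)
# matches the second agenda above: On(B,C), OnTable(C), Holding(A), Clear(B)
```

A full regression planner would loop this step, searching over which achieving action to choose at each point.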

Pure Search Approaches Heuristics: using a relaxed domain definition: assume actions have no preconditions; assume actions have no negative effects; … (these are all admissible). Sub-goal independence assumption: assume each goal can be achieved with a sub-plan, regardless of other necessities (not necessarily admissible; it depends on the domain).

Simplest Approach What's wrong with search? The branching factor may be too big. The search space is reversible, resulting in infinite loops and repeated states. Simple search is the least that we can do.

Partial Order Planning

Partial Order Planning We do not need to start from the beginning of the plan and march to the end. Some steps, facts, etc. are more important, and we can decide on them ahead of time. We can impose the fewest possible commitments during the task.

(Figure: a partial plan for the blocks-world problem.) START effects: On(C,A), OnTable(A), OnTable(B), Clear(C), Clear(B). END preconditions: On(A,B), On(B,C), OnTable(C). STACK(A,B): preconditions Holding(A), Clear(B); effects On(A,B), H.E., Clear(A). STACK(B,C): preconditions Holding(B), Clear(C); effects On(B,C), H.E., Clear(B). PickUp(B): preconditions OnTable(B), H.E.; effects Holding(B), ~Clear(B), ~OnTable(B). Note: not all results of each action are mentioned.

Ordering

Partial Order Planning Assume an action called START: no preconditions, and all 'Initial State' literals as its effects. Assume an action called END: all 'Goal State' literals as its preconditions, and no effects.

Partial Order Planning A partial plan is a tuple (A, O, L, Agenda) where: A: the set of actions in the plan, initially {Start, End}. O: temporal orderings between actions, initially {Start < End}. Agenda: open preconditions that need to be satisfied, each paired with the action that needs it; initially all preconditions of End, such as {(BeHome, End), (HaveMoney, End)}.

Partial Order Planning L: the set of causal links, initially empty. Causal link: action A2 has a precondition Q that is established in the plan by action A1, written A1 →Q A2. Example: Unstack(C,B) →Clear(B) Putdown(A,B).

Partial Order Planning Example: A = {Start, Stack(B,C), End}. O = {Start < End, Stack(B,C) < End}. L = {(Stack(B,C), On(B,C), End)}. Agenda = {(On(A,B), End), (OnTable(C), End), (Holding(B), Stack(B,C)), (Clear(C), Stack(B,C))}. (Figure: START effects On(C,A), OnTable(A), OnTable(B), Clear(C), Clear(B); END preconditions On(A,B), On(B,C), OnTable(C); STACK(B,C) with preconditions Holding(B), Clear(C) and effects On(B,C), H.E., Clear(B).)

Partial Order Planning A causal link (A1, Q, A2) represents the assertion that the role of A1 is to establish proposition Q for A2. This tells future search steps to "protect" Q in the interval between A1 and A2. Action B threatens causal link (A1, Q, A2) if: 1. B has Q as a delete effect, and 2. B could come between A1 and A2, i.e. O ∪ {A1 < B < A2} is consistent. For example, PutDown(C,B) is a threat to the link Unstack(C,B) →Clear(B) PutDown(A,B).

Finally, POP's Code.
POP(<A,O,L>, agenda)
1. Termination: if agenda is empty, return <A,O,L>.
2. Goal selection: let <Q, Aneed> be a pair on the agenda.
3. Action selection: let Aadd = choose an action that adds Q; if no such action exists, return failure. Let L' = L ∪ {Aadd →Q Aneed}, and let O' = O ∪ {Aadd < Aneed}. If Aadd is newly instantiated, then A' = A ∪ {Aadd} and O' = O' ∪ {Start < Aadd < End} (otherwise, let A' = A).
4. Updating of goal set: let agenda' = agenda − {<Q, Aneed>}. If Aadd is newly instantiated, then for each conjunct Qi of its precondition, add <Qi, Aadd> to agenda'.
5. Causal link protection: for every action At that might threaten a causal link Ap →p Ac, add a consistent ordering constraint, either Demotion: add At < Ap to O'; Promotion: add Ac < At to O'; or inequality constraints. If no constraint is consistent, then return failure.
6. Recursive invocation: POP(<A',O',L'>, agenda').
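Step 5, causal-link protection, hinges on detecting threats. The check can be sketched as follows (an illustrative simplification: orderings are direct (before, after) pairs and consistency is not checked over the transitive closure, as a full planner would; action names are from the earlier blocks-world example):

```python
def threatens(b, link, orderings, deletes):
    """True if action b could undo the link's condition Q between its ends."""
    a1, q, a2 = link
    if q not in deletes.get(b, set()):
        return False                   # b does not delete q: no threat
    # b is a threat unless it is already ordered before a1 (demotion)
    # or after a2 (promotion)
    return (b, a1) not in orderings and (a2, b) not in orderings

# Stack(C,B) deletes Clear(B), so it threatens a link protecting Clear(B):
link = ("Unstack(C,B)", "Clear(B)", "PutDown(A,B)")
deletes = {"Stack(C,B)": {"Clear(B)"}}

unresolved = threatens("Stack(C,B)", link, set(), deletes)
# promotion: ordering Stack(C,B) after PutDown(A,B) resolves the threat
resolved = threatens("Stack(C,B)", link, {("PutDown(A,B)", "Stack(C,B)")}, deletes)
```

The two ordering constraints in step 5 correspond exactly to the two tuples the final check looks for.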

Last POP Notes. Using variables: you need not add UnStack(A,B) when you need Clear(B); just add UnStack(x,B) and add the binding as a next step. Heuristics: what to do in ChooseGoal and ChooseAction?

Planning Graph Main Idea: To construct a graph of possible outcomes. …

Dinner Date Domain
Initial Conditions: (and (garbage) (cleanHands) (quiet))
Goal: (and (dinner) (present) (not (garbage)))
Actions:
Cook :precondition (cleanHands) :effect (dinner)
Wrap :precondition (quiet) :effect (present)
Carry :precondition () :effect (and (not (garbage)) (not (cleanHands)))
Dolly :precondition () :effect (and (not (garbage)) (not (quiet)))
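The dinner-date domain above is small enough to encode directly. An illustrative Python encoding (not PDDL), mapping each action to its precondition, add, and delete sets:

```python
# Literals are plain strings; negative effects go in the "del" set.
dinner_actions = {
    "Cook":  {"pre": {"cleanHands"}, "add": {"dinner"},  "del": set()},
    "Wrap":  {"pre": {"quiet"},      "add": {"present"}, "del": set()},
    "Carry": {"pre": set(), "add": set(), "del": {"garbage", "cleanHands"}},
    "Dolly": {"pre": set(), "add": set(), "del": {"garbage", "quiet"}},
}
init = {"garbage", "cleanHands", "quiet"}
goal = {"dinner", "present"}   # plus the negative goal: (not (garbage))

# In the initial state every action is applicable:
applicable = [name for name, a in dinner_actions.items() if a["pre"] <= init]
```

This is the classic GraphPlan demonstration domain: Carry and Dolly both remove the garbage, but each clobbers a precondition of Cook or Wrap, which is exactly what the mutex machinery below has to track.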

Dinner Date Graph (Figure: the planning graph built from the domain above.)

Mutual Exclusion Classes: Interference (Precondition-Effect), Inconsistent Effects, Inconsistent Support, Competing Needs.

Observation 1: propositions monotonically increase (they are always carried forward by no-ops).

Observation 2: actions monotonically increase.

Observation 3: proposition mutex relationships monotonically decrease.

Observation 4: action mutex relationships monotonically decrease.

Observation 5 (Sum Up): the planning graph "levels off": after some level k, all levels are identical, because the space is finite, the set of literals never decreases, and mutexes never reappear.

Graph Plan Algorithm: 1. Grow the planning graph (PG) until all goals are reachable and not mutex (if the PG levels off first, fail). 2. Search the PG for a valid plan. 3. If none is found, add a level to the PG and try again.
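Step 1 of the algorithm can be sketched in miniature. This illustrative sketch tracks only positive-literal reachability with no mutex bookkeeping (a real GraphPlan also records action levels and mutex pairs), run on the dinner-date domain's positive effects:

```python
def expand_levels(init, actions):
    """Grow proposition levels until they stop changing (the graph levels off)."""
    levels = [frozenset(init)]
    while True:
        cur = levels[-1]
        nxt = set(cur)                   # no-ops carry every proposition forward
        for a in actions.values():
            if a["pre"] <= cur:          # action applicable at this level
                nxt |= a["add"]          # its effects appear at the next level
        if frozenset(nxt) == cur:        # levelled off: Observation 5
            return levels
        levels.append(frozenset(nxt))

# Positive effects of the dinner-date actions (Carry/Dolly add nothing):
actions = {
    "Cook": {"pre": {"cleanHands"}, "add": {"dinner"}},
    "Wrap": {"pre": {"quiet"},      "add": {"present"}},
}
levels = expand_levels({"garbage", "cleanHands", "quiet"}, actions)
# one expansion suffices: dinner and present become reachable, then it levels off
```

Because propositions only ever increase (Observation 1), this loop is guaranteed to terminate.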

Search for a solution plan Backward-chain on the planning graph, achieving goals level by level. At level k, pick a subset of non-mutex actions to achieve the current goals; their preconditions become the goals at level k−1. Build the goal subset by picking each goal and choosing an action to add, reusing an already-selected action if possible. Do forward checking on the remaining goals (backtrack if you can't pick a non-mutex action).

Just Another Planning Approach Planning By Logic (SAT-Plan): Convert the planning problem into a logic problem. Solve the logic problem.

SAT Plan Example INITIAL STATE: At(P1,SFO)0 ∧ At(C1,JFK)0 ∧ Plane(P1) ∧ Cargo(C1) ∧ Airport(SFO) ∧ Airport(JFK). RULES: At(x,y)t ∧ Fly(x,y,z)t ∧ Plane(x) ∧ Airport(y) ∧ Airport(z) ⇒ At(x,z)t+1 ∧ ¬At(x,y)t+1 … GOAL STATE: At(C1,SFO)x
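The idea can be shown in a deliberately tiny sketch. Real SATPlan systems hand a CNF over fact and action variables at each time step to a SAT solver; this illustrative toy (variable names are invented for the example) brute-forces a two-variable instance instead:

```python
from itertools import product

# Propositional variables of a one-step problem:
#   fly0    = we execute Fly(P1, SFO, JFK) at time 0
#   at_jfk1 = At(P1, JFK) holds at time 1
def consistent(fly0, at_jfk1):
    # successor-state axiom: the plane is at JFK at t=1 iff it flew there at t=0
    return at_jfk1 == fly0

# Enumerate all truth assignments, keep those satisfying the axioms
# AND the goal At(P1, JFK) at time 1:
solutions = [(fly0, at_jfk1)
             for fly0, at_jfk1 in product([False, True], repeat=2)
             if consistent(fly0, at_jfk1) and at_jfk1]
# the only model sets fly0 = True, i.e. the plan is: fly at time 0
```

Reading the plan off the satisfying assignment (the true action variables, ordered by time step) is exactly how SATPlan extracts a solution.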

Sum Up POP: the most human-like. Graph Plan: winner of planning contests. SAT Plan: widely used in real problems via hard-coded logic solvers and via mathematics and optimization. Note: combinations are also used.

EXERCISES & Projects Implement either POP or Graph-Plan. As an exercise: on a hard-coded domain without variable instantiation. – Send to: sharifian@ce.sharif.edu – Subject: AIEX-C11 As a project: read the domain as PDDL, with full variable instantiation.