1
State-Space Planning
Sources: Ch. 3, Appendix A
Slides adapted from Dana Nau's lecture; presented by Dr. Héctor Muñoz-Avila
2
Reminder: Some Graph Search Algorithms (I)
[Figure: a weighted graph G with vertices A–G, Z and labeled edge costs]
G = (V,E) — Vertices: the set V; Edges: set of pairs of vertices (v,v')
Breadth-first search: use a queue (FIFO policy) to control the order of visits
Depth-first search: use a stack (LIFO policy) to control the order of visits
Best-first search: minimize an objective function f(), e.g., distance (Dijkstra)
Greedy search: select the best local f() and "never look back"
Hill-climbing search: like greedy search, but every node is a solution
Iterative deepening: exhaustively explore up to depth i
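As a hedged illustration of the queue-vs-stack distinction above (not from the original slides), the sketch below runs the same graph search with either a FIFO or a LIFO frontier; the small unweighted graph is a made-up stand-in for the one in the figure.

```python
# A minimal sketch: the only difference between breadth-first and depth-first
# search is whether the frontier is popped FIFO (queue) or LIFO (stack).
from collections import deque

def reachable(graph, start, goal, policy="bfs"):
    """graph: dict mapping a vertex to a list of neighboring vertices."""
    frontier = deque([start])
    visited = {start}
    while frontier:
        v = frontier.popleft() if policy == "bfs" else frontier.pop()
        if v == goal:
            return True
        for w in graph.get(v, []):
            if w not in visited:
                visited.add(w)
                frontier.append(w)
    return False

# Illustrative graph (vertex names only; edge weights ignored here)
G = {"A": ["B", "E"], "B": ["C", "F"], "C": ["D"], "E": ["D"], "F": ["G"]}
print(reachable(G, "A", "G", policy="bfs"))   # True
print(reachable(G, "A", "Z", policy="dfs"))   # False (Z is unreachable)
```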
3
State-Space Planning
[Figure: a portion of the blocks-world state space for blocks A, B, C]
Search is performed in the state space
State space: V = set of states; E = set of transitions (s, γ(s,a))
But: the graph actually computed is only a subset of the state space
Remember the number of states in the DWR domain?
Search modes: forward, backward
[Figure: blocks-world states for blocks A, B, C and the transitions between them]
4
Nondeterministic Search
The oracle definition: given a choice between possible transitions, there is a device, called the oracle, that always chooses the transition that leads to a state satisfying the goals
[Figure: from the current state, the oracle chooses the transition that leads to a favorable state]
Alternative definition (book): parallel computers, exploring all choices at once
The two definitions are equivalent: one can exhaustively try all possible derivations to find the one selected by the oracle. Efficiency issues (i.e., that the oracle would find a favorable state faster) are not considered (topic of CSE 340)
5
Forward Search
A state s satisfies the goal g when goals⁺(g) ⊆ s and goals⁻(g) ∩ s = ∅
[Figure: forward search in the DWR domain, applying actions such as take c3 …, move r1 …]
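A minimal deterministic (depth-first, loop-checking) sketch of forward search under the set-theoretic state model used in these slides. States are frozensets of ground atoms; `Action`, `applicable`, and `gamma` are illustrative names introduced here, not from the original slides, and the later sketches in this deck reuse them.

```python
from typing import FrozenSet, List, NamedTuple, Optional

class Action(NamedTuple):
    name: str
    precond: FrozenSet[str]   # atoms that must hold before the action
    add: FrozenSet[str]       # positive effects
    delete: FrozenSet[str]    # negative effects

def applicable(s: FrozenSet[str], a: Action) -> bool:
    return a.precond <= s

def gamma(s: FrozenSet[str], a: Action) -> FrozenSet[str]:
    """The state-transition function gamma(s, a)."""
    return (s - a.delete) | a.add

def forward_search(s, g, actions, visited=None) -> Optional[List[str]]:
    """Depth-first forward search with loop checking; returns a plan or None."""
    visited = set() if visited is None else visited
    if g <= s:
        return []
    visited.add(s)
    for a in actions:
        if applicable(s, a):
            s2 = gamma(s, a)
            if s2 in visited:
                continue                 # loop checking
            rest = forward_search(s2, g, actions, visited)
            if rest is not None:
                return [a.name] + rest
    return None
```

Because classical planning has only finitely many states, the loop check is what makes this depth-first variant complete (see the later slide on deterministic implementations).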
6
Soundness and Completeness
A planning algorithm A can be sound, complete, and admissible
Soundness:
 Deterministic: given a problem, if A returns P ≠ failure then P is a solution for the problem
 Nondeterministic: every plan returned by A is a solution
Completeness:
 Deterministic: given a solvable problem, A returns P ≠ failure
 Nondeterministic: given a solvable problem, A returns a solution
Admissible: if there is a measure of optimality and a solvable problem is given, then A returns an optimal solution
Result: Forward-search is sound and complete
Proof: …
7
Reminder: Plans and Solutions
Plan: any sequence of actions π = ⟨a1, a2, …, an⟩ such that each ai is a ground instance of an operator in O
The plan is a solution for P = (O, s0, g) if it is executable and achieves g, i.e., if there are states s0, s1, …, sn such that γ(s0,a1) = s1, γ(s1,a2) = s2, …, γ(sn–1,an) = sn, and sn satisfies g
Example of a deterministic variant of Forward-search that is not sound?
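The executability condition above can be checked directly by simulating the plan; this hedged sketch reuses `applicable` and `gamma` from the forward-search sketch earlier.

```python
def is_solution(s0, plan, g) -> bool:
    """A plan is a solution iff each action is applicable in turn
    (gamma(s0,a1)=s1, gamma(s1,a2)=s2, ...) and the final state satisfies g."""
    s = s0
    for a in plan:
        if not applicable(s, a):
            return False      # the plan is not executable
        s = gamma(s, a)
    return g <= s             # does sn satisfy g?
```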
8
Deterministic Implementations
Some deterministic implementations of forward search: breadth-first search, depth-first search, best-first search (e.g., A*), greedy search
Breadth-first and best-first search are sound and complete
 But they usually aren't practical, because they require too much memory: the memory requirement is exponential in the length of the solution
In practice, it is more likely to use depth-first search or greedy search
 Worst-case memory requirement is linear in the length of the solution
 In general sound but not complete
 But classical planning has only finitely many states, so depth-first search can be made complete by doing loop checking
[Figure: a search tree from s0 through states s1 … s5 toward sg]
So why are breadth-first and best-first search typically not used? How can we make depth-first search complete?
9
Another domain: The Blocks World
Infinitely wide table, finite number of children's blocks
Ignore where a block is located on the table
A block can sit on the table or on another block
Want to move blocks from one configuration to another, e.g., from an initial state to a goal state
Can be expressed as a special case of DWR, but the usual formulation is simpler
[Figure: an example initial configuration and goal configuration for blocks a–e]
10
Classical Representation: Symbols
Constant symbols — the blocks: a, b, c, d, e
Predicates:
 ontable(x) - block x is on the table
 on(x,y) - block x is on block y
 clear(x) - block x has nothing on it
 holding(x) - the robot hand is holding block x
 handempty - the robot hand isn't holding anything
[Figure: the five blocks a–e]
11
Classical Operators
unstack(x,y)
 Precond: on(x,y), clear(x), handempty
 Effects: ~on(x,y), ~clear(x), ~handempty, holding(x), clear(y)
stack(x,y)
 Precond: holding(x), clear(y)
 Effects: ~holding(x), ~clear(y), on(x,y), clear(x), handempty
pickup(x)
 Precond: ontable(x), clear(x), handempty
 Effects: ~ontable(x), ~clear(x), ~handempty, holding(x)
putdown(x)
 Precond: holding(x)
 Effects: ~holding(x), ontable(x), clear(x), handempty
[Figure: the effect of each operator on a three-block configuration]
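For concreteness, here is a hedged encoding of the four operators above as ground-action builders, using the `Action` representation from the forward-search sketch; the `~` effects become the `delete` sets. The later block-stacking sketches reuse these builders.

```python
def unstack(x, y):
    return Action(f"unstack({x},{y})",
                  precond=frozenset({f"on({x},{y})", f"clear({x})", "handempty"}),
                  add=frozenset({f"holding({x})", f"clear({y})"}),
                  delete=frozenset({f"on({x},{y})", f"clear({x})", "handempty"}))

def stack(x, y):
    return Action(f"stack({x},{y})",
                  precond=frozenset({f"holding({x})", f"clear({y})"}),
                  add=frozenset({f"on({x},{y})", f"clear({x})", "handempty"}),
                  delete=frozenset({f"holding({x})", f"clear({y})"}))

def pickup(x):
    return Action(f"pickup({x})",
                  precond=frozenset({f"ontable({x})", f"clear({x})", "handempty"}),
                  add=frozenset({f"holding({x})"}),
                  delete=frozenset({f"ontable({x})", f"clear({x})", "handempty"}))

def putdown(x):
    return Action(f"putdown({x})",
                  precond=frozenset({f"holding({x})"}),
                  add=frozenset({f"ontable({x})", f"clear({x})", "handempty"}),
                  delete=frozenset({f"holding({x})"}))
```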
12
Branching Factor of Forward Search
[Figure: many actions (up to a50) are applicable in the initial state; only a3 lies on a path to the goal]
Forward search can have a very large branching factor
 E.g., many applicable actions that don't progress toward the goal
Why this is bad: deterministic implementations can waste time trying lots of irrelevant actions
Need a good heuristic function and/or pruning procedure
Chad and Hai are going to explain how to do this
13
Backward Search
For forward search, we started at the initial state and computed state transitions: new state = γ(s,a)
For backward search, we start at the goal and compute inverse state transitions: new set of subgoals = γ⁻¹(g,a)
To define γ⁻¹(g,a), we must first define relevance:
An action a is relevant for a goal g if
 a makes at least one of g's literals true: g ∩ effects(a) ≠ ∅
 a does not make any of g's literals false: g⁺ ∩ effects⁻(a) = ∅ and g⁻ ∩ effects⁺(a) = ∅
How do we write this formally? (a code sketch follows below)
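The relevance test written out for a goal split into its positive and negative literals; a hedged sketch using the `Action` representation from the earlier sketches.

```python
def relevant(a, g_plus, g_minus) -> bool:
    """a contributes to g (g intersects effects(a)) and contradicts no literal
    of g (g+ is disjoint from effects-(a), g- is disjoint from effects+(a))."""
    contributes = (a.add & g_plus) or (a.delete & g_minus)
    contradicts = (a.delete & g_plus) or (a.add & g_minus)
    return bool(contributes) and not contradicts
```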
14
Inverse State Transitions
If a is relevant for g, then γ⁻¹(g,a) = (g − effects⁺(a)) ∪ precond(a)
Otherwise γ⁻¹(g,a) is undefined
Example: suppose that
 g = {on(b1,b2), on(b2,b3)}
 a = stack(b1,b2)
What is γ⁻¹(g,a)?
What is Γ⁻¹(g,a) = {s : γ(s,a) satisfies g}?
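A hedged sketch of γ⁻¹ for the positive-goal case, reusing `relevant` and the `stack` builder from the earlier sketches; the comment works through the example on this slide.

```python
def regress(g_plus, a):
    """gamma^{-1}(g, a) for a positive goal: drop what a achieves, add what a needs."""
    assert relevant(a, g_plus, frozenset())
    return (g_plus - a.add) | a.precond

g = frozenset({"on(b1,b2)", "on(b2,b3)"})
print(regress(g, stack("b1", "b2")))
# {'on(b2,b3)', 'holding(b1)', 'clear(b2)'}   i.e. (g - effects+(a)) ∪ precond(a)
```

Computing Γ⁻¹(g,a) = {s : γ(s,a) satisfies g}, by contrast, would mean enumerating whole states rather than a single subgoal set, which is why backward search regresses goals instead.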
15
How about using Γ⁻¹(g,a) instead of γ⁻¹(g,a)?
16
Efficiency of Backward Search
[Figure: many actions (up to a50) are relevant for the goal; only a3 lies on a path back toward the initial state]
Compared with forward search?
Backward search can also have a very large branching factor
 E.g., many relevant actions that don't regress toward the initial state
As before, deterministic implementations can waste lots of time trying all of them
17
Lifting
Can reduce the branching factor of backward search if we partially instantiate the operators; this is called lifting
[Figure: operator foo(x,y) with precond p(x,y) and effects q(x). Regressing the goal q(a) through every ground instance foo(a,a), foo(a,b), foo(a,c), … produces the subgoals p(a,a), p(a,b), p(a,c), …; regressing through the partially instantiated foo(a,y) produces the single subgoal p(a,y)]
18
Lifted Backward Search
More complicated than Backward-search: have to keep track of what substitutions were performed
But it has a much smaller branching factor
19
The Search Space is Still Too Large
Lifted-backward-search generates a smaller search space than Backward-search, but it can still be quite large
Suppose actions a, b, and c are independent, d must precede all of them, and d cannot be executed
We'll try all possible orderings of a, b, and c before realizing there is no solution
More about this in Chapter 5 (Plan-Space Planning)
[Figure: the backward search tree from the goal enumerates every ordering of a, b, c, each eventually blocked by d]
20
STRIPS
π ← the empty plan; do a modified backward search from g:
 instead of γ⁻¹(s,a), each new set of subgoals is just precond(a)
 whenever you find an action that's executable in the current state, go forward on the current search path as far as possible, executing actions and appending them to π
 repeat until all goals are satisfied
[Figure: the current search path regresses g through actions a4 and a6 to subgoals g1 … g6; g6 is satisfied in s0, so π = ⟨a6, a4⟩ and s = γ(γ(s0,a6),a4)]
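A hedged, ground-only sketch of the control strategy described above (pick an open goal, pick an action that adds it, recursively achieve that action's preconditions, then execute and go forward). It reuses `gamma` and the `Action` type from the earlier sketches, uses a depth bound in place of real loop checking, and does not protect previously achieved goals, which is exactly why this strategy yields redundant plans on problems like the Sussman anomaly.

```python
def strips(s, goals, actions, depth=30):
    """Return a plan achieving `goals` from state `s`, or None."""
    if depth == 0:
        return None
    plan = []
    while not goals <= s:
        pending = goals - s
        for a in actions:
            if not (a.add & pending):
                continue                          # a achieves no open goal
            subplan = strips(s, a.precond, actions, depth - 1)
            if subplan is None:
                continue                          # cannot make a executable
            for b in subplan:                     # go forward: execute the subplan,
                s = gamma(s, b)
            s = gamma(s, a)                       # then execute a itself
            plan.extend(subplan + [a])
            break
        else:
            return None                           # no relevant action worked
    return plan
```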
21
The Sussman Anomaly
[Figure: initial state — c on a, with a and b on the table; goal — a on b on c]
On this problem, STRIPS can't produce an irredundant solution
22
The Register Assignment Problem
State-variable formulation:
 Initial state: {value(r1)=3, value(r2)=5, value(r3)=0}
 Goal: {value(r1)=5, value(r2)=3}
 Operator: assign(r,v,r',v')
  precond: value(r)=v, value(r')=v'
  effects: value(r)=v'
STRIPS cannot solve this problem at all
23
How to Fix?
Several ways:
 Do something other than state-space search, e.g., Chapters 5–8
 Use forward or backward state-space search, with domain-specific knowledge to prune the search space
  Can solve both problems quite easily this way
  Example: block stacking using forward search
24
Domain-Specific Knowledge
A blocks-world planning problem P = (O, s0, g) is solvable if s0 and g satisfy some simple consistency conditions:
 g should not mention any blocks not mentioned in s0
 a block cannot be on two other blocks at once
 etc.
Can check these in time O(n log n)
If P is solvable, can easily construct a solution of length O(2m), where m is the number of blocks:
 move all blocks to the table, then build up the stacks from the bottom
 can do this in time O(n)
With additional domain-specific knowledge we can do even better …
25
Additional Domain-Specific Knowledge
A block x needs to be moved if any of the following is true (see the code sketch below):
 s contains ontable(x) and g contains on(x,y)
 s contains on(x,y) and g contains ontable(x)
 s contains on(x,y) and g contains on(x,z) for some y ≠ z
 s contains on(x,y) and y needs to be moved
[Figure: an example initial state and goal for blocks a–e]
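The four conditions above translate almost literally into code; a hedged sketch with states and goals as sets of ground atoms such as on(a,b) and ontable(a) (the `below` helper and the explicit `blocks` list are assumptions introduced here).

```python
def below(x, state, blocks):
    """Return the block that x sits on in `state`, or None if none is recorded."""
    for y in blocks:
        if f"on({x},{y})" in state:
            return y
    return None

def needs_to_move(x, s, g, blocks) -> bool:
    sy, gy = below(x, s, blocks), below(x, g, blocks)
    if f"ontable({x})" in s and gy is not None:
        return True                              # on the table, should be on a block
    if sy is not None and f"ontable({x})" in g:
        return True                              # on a block, should be on the table
    if sy is not None and gy is not None and sy != gy:
        return True                              # on the wrong block
    if sy is not None and needs_to_move(sy, s, g, blocks):
        return True                              # sits on a block that must move
    return False
```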
26
Domain-Specific Algorithm
loop:
 if there is a clear block x such that x needs to be moved and x can be moved to a place where it won't need to be moved
  then move x to that place
 else if there is a clear block x such that x needs to be moved
  then move x to the table
 else if the goal is satisfied
  then return the plan
 else return failure
repeat
(a code sketch follows below)
[Figure: an example initial state and goal for blocks a–e]
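A hedged, simplified sketch of the loop above, reusing `gamma`, the operator builders, and `needs_to_move` from the earlier sketches. It assumes the goal describes a consistent configuration and glosses over the case analysis needed for a full completeness argument, so treat it as an illustration rather than the algorithm itself.

```python
def final_place(x, g, blocks):
    """x's destination in the goal: the block under x in g, or the table."""
    return below(x, g, blocks) or "table"

def ready(x, s, g, blocks):
    """True if x's final place is available now and will not itself move."""
    dest = final_place(x, g, blocks)
    return dest == "table" or (
        f"clear({dest})" in s and not needs_to_move(dest, s, g, blocks))

def move(s, x, dest, blocks):
    """Move clear block x onto dest ('table' or a clear block) via two actions."""
    src = below(x, s, blocks)
    a1 = pickup(x) if src is None else unstack(x, src)
    a2 = putdown(x) if dest == "table" else stack(x, dest)
    return [a1, a2], gamma(gamma(s, a1), a2)

def block_stacking(s, g, blocks):
    plan = []
    while True:
        clear = [x for x in blocks if f"clear({x})" in s]
        to_move = [x for x in clear if needs_to_move(x, s, g, blocks)]
        x = next((x for x in to_move if ready(x, s, g, blocks)), None)
        if x is not None:                                    # final place is ready
            acts, s = move(s, x, final_place(x, g, blocks), blocks)
        elif any(f"ontable({y})" not in s for y in to_move):
            y = next(y for y in to_move if f"ontable({y})" not in s)
            acts, s = move(s, y, "table", blocks)            # get y out of the way
        elif g <= s:
            return plan
        else:
            return None                                      # failure
        plan.extend(acts)
```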
27
Easily Solves the Sussman Anomaly
loop:
 if there is a clear block x such that x needs to be moved and x can be moved to a place where it won't need to be moved
  then move x to that place
 else if there is a clear block x such that x needs to be moved
  then move x to the table
 else if the goal is satisfied
  then return the plan
 else return failure
repeat
[Figure: the Sussman anomaly — initial state: c on a, with b on the table; goal: a on b on c]
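Running the sketch from the previous slide on the Sussman anomaly (assuming the atom encoding used throughout these sketches) produces a plan directly, with no redundant moves:

```python
s0 = frozenset({"on(c,a)", "ontable(a)", "ontable(b)",
                "clear(c)", "clear(b)", "handempty"})
goal = frozenset({"on(a,b)", "on(b,c)"})
plan = block_stacking(s0, goal, ["a", "b", "c"])
print([a.name for a in plan])
# ['unstack(c,a)', 'putdown(c)', 'pickup(b)', 'stack(b,c)', 'pickup(a)', 'stack(a,b)']
```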
28
Properties
The block-stacking algorithm is:
 Sound, complete, guaranteed to terminate
 Runs in time O(n³); can be modified to run in time O(n)
 Often finds optimal (shortest) solutions
 But sometimes only near-optimal (Exercise 4.22 in the book): PLAN LENGTH for the blocks world is NP-complete
29
An Actual Application: F.E.A.R.
F.E.A.R.: First Encounter Assault Recon (here is a video clip)
Behavior of opposing NPCs is modeled through STRIPS operators: Attack, Take cover, …
Why? Otherwise you would need to build an FSM for all possible behaviours in advance
30
FSM vs AI Planning
Planning operators:
 Patrol — Preconditions: No Monster; Effects: patrolled
 Fight — Preconditions: Monster in sight; Effects: …
A resulting plan: Patrol, then Fight
[Figure: the equivalent FSM with two states, Patrol and Fight, switching on "Monster In Sight" and "No Monster"]
Neither is more powerful than the other one
31
But Planning Gives More Flexibility
“Separates implementation from data” --- Orkin (the planner supplies the reasoning; the operators supply the knowledge)
Planning operators:
 Patrol — Preconditions: No Monster; Effects: patrolled
 Fight — Preconditions: Monster in sight; Effects: …
Many potential plans: Patrol, Fight, …
If conditions in the state change, making the current plan unfeasible: replan!