1 Chapter 7 Propositional Satisfiability Techniques

2 Motivation
• Propositional satisfiability: given a Boolean formula
  » e.g., (P ∨ Q) ∧ (¬Q ∨ R ∨ S) ∧ (¬R ∨ ¬P),
  does there exist a model, i.e., an assignment of truth values to the propositions that makes the formula true?
• This was the very first problem shown to be NP-complete
• Lots of research on algorithms for solving it
  » Algorithms are known for solving all but a small subset of instances in average-case polynomial time
• Therefore: try translating classical planning problems into satisfiability problems, and solving them that way

6 Architecture of a SAT-based planner
  Problem description (init state, goal state, actions)
    → Compiler (encoding) → CNF
    → Simplifier (polynomial inference) → CNF
    → Solver (SAT engine/s) → satisfying model
    → Decoder → plan (mapping)
  If unsatisfiable: increment the plan length and re-encode

7 Architecture of a SAT-based planning system (cont.)
• Compiler: takes a planning problem as input, guesses a plan length, and generates a propositional logic formula which, if satisfiable, implies the existence of a solution plan
• Simplifier: uses fast techniques such as unit-clause propagation and pure-literal elimination to shrink the CNF formula
• Solver: uses systematic or stochastic methods to find a satisfying assignment; if the formula is unsatisfiable, the compiler generates a new encoding for a longer plan length
• Decoder: translates the solver's result into a solution plan

8 Overall Approach
• A bounded planning problem is a pair (P,n):
  » P is a planning problem; n is a positive integer
  » Any solution for P of length n is a solution for (P,n)
• Planning algorithm: do iterative deepening like we did with Graphplan:
  » for n = 0, 1, 2, …,
    » encode (P,n) as a satisfiability problem Φ
    » if Φ is satisfiable, then from the set of truth values that satisfies Φ a solution plan can be constructed, so return it and exit
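
A minimal sketch of this iterative-deepening loop, assuming hypothetical helpers encode, solve, and extract_plan (they are placeholders for the compiler, SAT engine, and decoder described above, not a real library API):

```python
# Sketch of the SAT-based planning loop; encode, solve, and extract_plan
# are hypothetical placeholders, not an actual planner API.

def sat_plan(problem, max_length):
    for n in range(max_length + 1):
        phi = encode(problem, n)           # CNF encoding of the bounded problem (P, n)
        model = solve(phi)                 # any SAT solver; returns None if unsatisfiable
        if model is not None:
            return extract_plan(model, n)  # read the action fluents a_0 ... a_{n-1} off the model
    return None                            # no plan found up to max_length
```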

9 Notation
• For satisfiability problems we need to use propositional logic
• Need to encode ground atoms into propositions
  » For set-theoretic planning we encoded, for example, at(r1,loc1) as at-r1-loc1
• We do the same thing here, but we don't bother to rewrite: just write the proposition as at(r1,loc1)

10 Fluents
• If π = ⟨a0, a1, …, an−1⟩ is a solution for (P,n), it generates these states:
  s0, s1 = γ(s0,a0), s2 = γ(s1,a1), …, sn = γ(sn−1,an−1)
• Fluent: a proposition saying that a particular atom is true in a particular state
  » at(r1,loc1,i) is a fluent that's true iff at(r1,loc1) is in si
  » We'll use li to denote the fluent for literal l in state si
    e.g., if l = at(r1,loc1) then li = at(r1,loc1,i)
  » ai is a fluent saying that a is the i'th step of π
    e.g., if a = move(r1,loc2,loc1) then ai = move(r1,loc2,loc1,i)
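
A small illustrative sketch of how time-stamped fluents and action propositions can be represented as SAT variables; the tuple format and the names fluent, action_var, and var are assumptions, not part of the slides:

```python
# Illustrative representation of time-stamped propositions: a ground atom
# plus a step index becomes one propositional variable.

def fluent(atom, i):
    # e.g., fluent(("at", "r1", "loc1"), 2) -> ("at", "r1", "loc1", 2)
    return atom + (i,)

def action_var(action, i):
    # e.g., action_var(("move", "r1", "loc2", "loc1"), 0)
    return action + (i,)

# Most SAT solvers expect integer variable indices; a dictionary can assign
# a fresh index to each proposition on first use.
var_index = {}
def var(prop):
    return var_index.setdefault(prop, len(var_index) + 1)
```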

11 Encoding Planning Problems
• Encode (P,n) as a formula Φ such that
  π = ⟨a0, a1, …, an−1⟩ is a solution for (P,n) if and only if Φ can be satisfied in a way that makes the fluents a0, …, an−1 true
• Let
  » A = {all actions in the planning domain}
  » S = {all states in the planning domain}
  » L = {all literals in the language}
• Φ is the conjunct of many other formulas …

12 Formulas in Φ
• Formula describing the initial state:
  ⋀{l0 | l ∈ s0} ∧ ⋀{¬l0 | l ∈ L − s0}
• Formula describing the goal:
  ⋀{ln | l ∈ g+} ∧ ⋀{¬ln | l ∈ g−}
• For every action a in A, formulas describing what changes a would make if it were the i'th step of the plan:
  ai ⇒ ⋀{pi | p ∈ Precond(a)} ∧ ⋀{ei+1 | e ∈ Effects(a)}
• Complete exclusion axiom:
  » For all actions a and b, formulas saying they can't occur at the same time:
    ¬ai ∨ ¬bi
  » This guarantees there can be only one action at a time
• Is this enough?
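
A sketch of how these formulas can be generated in clause form. The data model (problem.atoms, problem.init, problem.goal_pos/goal_neg, and actions with name, precond, add, delete fields) is hypothetical, chosen only to make the construction concrete:

```python
# Sketch of clause generation for the formulas above. Clauses are lists of
# (proposition, polarity) pairs, where polarity True means a positive literal.
from itertools import combinations

def encode(problem, n):
    clauses = []
    # Initial state: every atom is either asserted or denied at step 0
    for atom in problem.atoms:
        clauses.append([((atom, 0), atom in problem.init)])
    # Goal: positive goals hold at step n, negative goals do not
    for atom in problem.goal_pos:
        clauses.append([((atom, n), True)])
    for atom in problem.goal_neg:
        clauses.append([((atom, n), False)])
    for i in range(n):
        acts = [(a, (a.name, i)) for a in problem.actions]
        for a, ai in acts:
            # ai implies its preconditions at i and its effects at i+1,
            # written as binary clauses (¬ai ∨ literal)
            for p in a.precond:
                clauses.append([(ai, False), ((p, i), True)])
            for e in a.add:
                clauses.append([(ai, False), ((e, i + 1), True)])
            for e in a.delete:
                clauses.append([(ai, False), ((e, i + 1), False)])
        # Complete exclusion: at most one action per step (¬ai ∨ ¬bi)
        for (_, ai), (_, bi) in combinations(acts, 2):
            clauses.append([(ai, False), (bi, False)])
    return clauses
```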

13 Frame Axioms
• Classical frame axioms:
  » Formulas describing what doesn't change between steps i and i+1
• Explanatory frame axioms:
  » One axiom for every literal l
  » Says that if l changes between si and si+1, then the action at step i must be responsible:
    (¬li ∧ li+1 ⇒ ⋁a∈A {ai | l ∈ effects+(a)}) ∧ (li ∧ ¬li+1 ⇒ ⋁a∈A {ai | l ∈ effects−(a)})
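
A companion sketch for the explanatory frame axioms, continuing the hypothetical data model used in the previous sketch; each axiom turns into one clause per atom and step:

```python
# Sketch of explanatory frame axioms in clause form: if an atom changes value
# between steps i and i+1, some action that causes the change occurs at step i.

def frame_axioms(problem, n):
    clauses = []
    for i in range(n):
        for atom in problem.atoms:
            adders = [(a.name, i) for a in problem.actions if atom in a.add]
            deleters = [(a.name, i) for a in problem.actions if atom in a.delete]
            # (¬atom_i ∧ atom_{i+1}) ⇒ (some adder at i), as a clause:
            clauses.append([((atom, i), True), ((atom, i + 1), False)]
                           + [(ai, True) for ai in adders])
            # (atom_i ∧ ¬atom_{i+1}) ⇒ (some deleter at i), as a clause:
            clauses.append([((atom, i), False), ((atom, i + 1), True)]
                           + [(ai, True) for ai in deleters])
    return clauses
```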

14 Encodings of Planning to SAT: Frame Axioms
• Classical (McCarthy & Hayes 1969)
  » State what fluents are left unchanged by an action
    clear(d,i) ∧ move(a,b,c,i) ⇒ clear(d,i+1)
  » Problem: if no action occurs at step i, nothing can be inferred about propositions at level i+1
  » Solution: at-least-one axiom: at least one action occurs at each step
• Explanatory (Haas 1987)
  » State the causes for a fluent change
    clear(d,i) ∧ ¬clear(d,i+1) ⇒ (move(a,b,d,i) ∨ move(a,c,d,i) ∨ … ∨ move(c,Table,d,i))

15 Example
• Planning domain:
  » one robot r1
  » two adjacent locations l1, l2
  » one operator (move the robot)
• Encode (P,n) where n = 1
  » Initial state: {at(r1,l1)}
    Encoding: at(r1,l1,0) ∧ ¬at(r1,l2,0)
  » Goal: {at(r1,l2)}
    Encoding: at(r1,l2,1) ∧ ¬at(r1,l1,1)
  » Operator: see next slide

16 Example (continued)
• Operator: move(r,l,l′)
    precond: at(r,l)
    effects: at(r,l′), ¬at(r,l)
• Encoding:
    move(r1,l1,l2,0) ⇒ at(r1,l1,0) ∧ at(r1,l2,1) ∧ ¬at(r1,l1,1)
    move(r1,l2,l1,0) ⇒ at(r1,l2,0) ∧ at(r1,l1,1) ∧ ¬at(r1,l2,1)
    move(r1,l1,l1,0) ⇒ at(r1,l1,0) ∧ at(r1,l1,1) ∧ ¬at(r1,l1,1)   (contradiction, easy to detect)
    move(r1,l2,l2,0) ⇒ at(r1,l2,0) ∧ at(r1,l2,1) ∧ ¬at(r1,l2,1)   (contradiction, easy to detect)
    move(l1,r1,l2,0) ⇒ …   (nonsensical)
    move(l2,l1,r1,0) ⇒ …   (nonsensical)
    move(l1,l2,r1,0) ⇒ …   (nonsensical)
    move(l2,l1,r1,0) ⇒ …   (nonsensical)
• How to avoid generating the last four actions?
  » Assign data types to the constant symbols, as we did for the state-variable representation

17 Example (continued)
• Locations: l1, l2; Robots: r1
• Operator: move(r : robot, l : location, l′ : location)
    precond: at(r,l)
    effects: at(r,l′), ¬at(r,l)
• Encoding:
    move(r1,l1,l2,0) ⇒ at(r1,l1,0) ∧ at(r1,l2,1) ∧ ¬at(r1,l1,1)
    move(r1,l2,l1,0) ⇒ at(r1,l2,0) ∧ at(r1,l1,1) ∧ ¬at(r1,l2,1)

18 Example (continued)
• Complete-exclusion axiom:
    ¬move(r1,l1,l2,0) ∨ ¬move(r1,l2,l1,0)
• Explanatory frame axioms:
    ¬at(r1,l1,0) ∧ at(r1,l1,1) ⇒ move(r1,l2,l1,0)
    ¬at(r1,l2,0) ∧ at(r1,l2,1) ⇒ move(r1,l1,l2,0)
    at(r1,l1,0) ∧ ¬at(r1,l1,1) ⇒ move(r1,l1,l2,0)
    at(r1,l2,0) ∧ ¬at(r1,l2,1) ⇒ move(r1,l2,l1,0)

19 Extracting a Plan
• Suppose we find an assignment of truth values that satisfies Φ
  » This means P has a solution of length n
• For each step i = 0, …, n−1, there will be exactly one action a such that ai = true
  » This is the i'th action of the plan
• Example (from the previous slides):
  » Φ can be satisfied with move(r1,l1,l2,0) = true
  » Thus ⟨move(r1,l1,l2,0)⟩ is a solution for (P,1)
  » It's the only solution: there is no other way to satisfy Φ
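
A small sketch of the decoding step, assuming the model is a dictionary mapping propositions such as ("move", "r1", "l1", "l2", 0) to booleans (the same illustrative tuple format as the earlier fluent sketch; action_names is a hypothetical set of operator names):

```python
# Sketch of plan extraction from a satisfying assignment.

def extract_plan(model, n, action_names):
    plan = []
    for i in range(n):
        # exactly one action proposition should be true at each step i
        step_actions = [p for p, val in model.items()
                        if val and p[0] in action_names and p[-1] == i]
        assert len(step_actions) == 1
        plan.append(step_actions[0][:-1])  # drop the time index
    return plan
```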

20 Birthday Dinner Example
• Goal: ¬garb ∧ dinner ∧ present
• Init: garb ∧ clean ∧ quiet
• Actions:
  » Cook    Pre: clean    Effect: dinner
  » Wrap    Pre: quiet    Effect: present
  » Carry   Pre: (none)   Effect: ¬garb ∧ ¬clean
  » Dolly   Pre: (none)   Effect: ¬garb ∧ ¬quiet

21 Constructing the SATPLAN sentence
• Initial sentence (clauses): garb0, clean0, quiet0, ¬present0, ¬dinner0
• Goal (at depth 2): ¬garb2, present2, dinner2
• Action at step t implies its preconditions at t and its effects at t+1 [in clause form]
  » Cook0 ⇒ (clean0 ∧ dinner1)
• Explanatory frame axioms: for every state change, say what could have caused it
  » garb0 ∧ ¬garb1 ⇒ (dolly0 ∨ carry0) [in clause form]
• Complete exclusion axiom: for all actions a and b, add ¬at ∨ ¬bt
  » This guarantees there can be only one action at a time

22 Planning
• How to find an assignment of truth values that satisfies Φ?
  » Use a satisfiability algorithm
• Example: the Davis-Putnam algorithm
  » First need to put Φ into conjunctive normal form
    e.g., Φ = D ∧ (¬D ∨ A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ B) ∧ A
  » Write Φ as a set of clauses (disjunctions of literals)
    Φ = {D, (¬D ∨ A ∨ ¬B), (¬D ∨ ¬A ∨ ¬B), (¬D ∨ ¬A ∨ B), A}
  » Two special cases:
    » Φ = {} is a formula that's always true
    » Φ = {…, (), …} is a formula that's always false

24 SAT solvers
• Systematic SAT solvers
  » DPLL algorithm
  » Perform a backtracking depth-first search through the space of partial truth assignments, using unit-clause and pure-literal heuristics
• Stochastic SAT solvers
  » Search locally, using random moves to escape from local minima; incomplete
  » GSAT: performs a greedy search; after hill-climbing for a fixed number of flips, it starts anew with a freshly generated random assignment
  » WALKSAT: improves on GSAT by adding additional randomness, akin to simulated annealing

25 The Davis-Putnam Procedure
• Depth-first backtracking through combinations of truth values
• Select a variable P in Φ
• Recursive call on Φ ∧ P
  » Simplify: delete every clause containing P, and remove all occurrences of ¬P
  » If Φ = {…, (), …} then backtrack
  » Else if Φ = {} then we have a solution
• Recursive call on Φ ∧ ¬P
  » Simplify: delete every clause containing ¬P, and remove all occurrences of P
  » If Φ = {…, (), …} then backtrack
  » Else if Φ = {} then we have a solution
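
A minimal runnable sketch of this backtracking procedure over clause sets; the clause representation (sets of (variable, sign) pairs) is an illustrative choice, and the unit-clause and pure-literal heuristics mentioned on slide 24 are omitted for brevity:

```python
# Sketch of a Davis-Putnam (DPLL-style) backtracking search.
# A clause is a set of (variable, sign) literals; the heuristics are omitted.

def simplify(clauses, lit):
    var, val = lit
    new_clauses = []
    for c in clauses:
        if lit in c:                                      # clause satisfied: drop it
            continue
        reduced = {l for l in c if l != (var, not val)}   # remove falsified literal
        if not reduced:                                   # empty clause: contradiction
            return None
        new_clauses.append(reduced)
    return new_clauses

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    if clauses is None or any(len(c) == 0 for c in clauses):
        return None                                       # contradiction on this branch
    if not clauses:
        return assignment                                 # all clauses satisfied
    var = next(iter(next(iter(clauses))))[0]              # pick a variable from some clause
    for val in (True, False):
        result = dpll(simplify(clauses, (var, val)), {**assignment, var: val})
        if result is not None:
            return result
    return None                                           # both truth values fail: backtrack
```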

28 Local Search
• Let u be an assignment of truth values to all of the variables
  » cost(u,Φ) = number of clauses in Φ that aren't satisfied by u
  » flip(P,u) = u with the truth value of P reversed
• Local search:
  » Select a random assignment u
  » while cost(u,Φ) ≠ 0
    » if there is a P such that cost(flip(P,u),Φ) < cost(u,Φ), then randomly choose any such P and set u ← flip(P,u)
    » else return failure
• Local search is sound; if it finds a solution, it finds it very quickly
• Local search is not complete: it can get trapped in local minima
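
A minimal sketch of these primitives and the greedy loop, assuming clauses are lists of (variable, sign) pairs and u maps variables to booleans (an illustrative representation, not from the slides):

```python
# Sketch of cost, flip, and the greedy local-search loop described above.
import random

def cost(u, clauses):
    # number of clauses not satisfied by assignment u
    return sum(1 for c in clauses if not any(u[var] == sign for var, sign in c))

def flip(p, u):
    return {**u, p: not u[p]}

def local_search(clauses, variables):
    u = {v: random.choice([True, False]) for v in variables}
    while cost(u, clauses) != 0:
        improving = [p for p in variables
                     if cost(flip(p, u), clauses) < cost(u, clauses)]
        if not improving:
            return None                  # stuck in a local minimum
        u = flip(random.choice(improving), u)
    return u
```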

29 GSAT
• Basic-GSAT:
  » Select a random assignment u
  » while cost(u,Φ) ≠ 0
    » choose the P that minimizes cost(flip(P,u),Φ), and flip it
  » Not guaranteed to terminate
• GSAT:
  » restart after a max number of flips
  » return failure after a max number of restarts
• Walksat
  » works better than both local search and GSAT

30 Walksat
  for i = 1 to max-tries
    A := random truth assignment
    for j = 1 to max-flips
      if solution?(A) then return A
      else
        C := random unsatisfied clause
        with probability p, flip a random variable in C
        with probability (1 − p), flip the variable in C that minimizes the number of unsatisfied clauses
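
A runnable sketch following this pseudocode, reusing the cost() and flip() helpers from the local-search sketch above; the parameter defaults are arbitrary illustrative choices:

```python
# Sketch of Walksat matching the pseudocode above.
import random

def walksat(clauses, variables, p=0.5, max_tries=10, max_flips=1000):
    for _ in range(max_tries):
        a = {v: random.choice([True, False]) for v in variables}
        for _ in range(max_flips):
            unsat = [c for c in clauses
                     if not any(a[var] == sign for var, sign in c)]
            if not unsat:
                return a                      # A satisfies every clause
            c = random.choice(unsat)          # random unsatisfied clause
            if random.random() < p:
                var = random.choice(c)[0]     # random-walk move
            else:                             # greedy move within the clause
                var = min((v for v, _ in c),
                          key=lambda v: cost(flip(v, a), clauses))
            a = flip(var, a)
    return None
```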

31 Example
• Consider the following formula (solution = {D, ¬A, ¬B}):
  Φ = D ∧ (¬D ∨ A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ B) ∧ (D ∨ A)
• Local search selects a random total assignment, e.g., u = {D, A, B}, under which only (¬D ∨ ¬A ∨ ¬B) is unsatisfied, so cost(u,Φ) = 1
• There is no u′ such that |u − u′| = 1 and cost(u′,Φ) < 1
• Therefore the search stops with failure
• If the initial guess had been {¬D, ¬A, ¬B}, the search could have found the solution in one step

32 Example (cont.)
• Consider the same formula (solution = {D, ¬A, ¬B}):
  Φ = D ∧ (¬D ∨ A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ B) ∧ (D ∨ A)
• Basic GSAT selects a random total assignment, say u = {D, A, B}
• It has two alternatives: flipping D or B (because the corresponding cost is 1; flipping A has a cost of 2)
• If B is flipped, A will be chosen next, and the solution is found

33 Example (cont.)
• Consider the same formula (solution = {D, ¬A, ¬B}):
  Φ = D ∧ (¬D ∨ A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ ¬B) ∧ (¬D ∨ ¬A ∨ B) ∧ (D ∨ A)
• Iterative repair (a simplified version of Walksat), with the same initial guess u = {D, A, B}, has to repair the clause (¬D ∨ ¬A ∨ ¬B)
• Different assignments can repair this clause; one of them is the solution
• Suppose u = {¬D, ¬A, ¬B} is selected: it then must repair two clauses, D and (D ∨ A)
  » If D is selected, iterative repair finds the solution
  » If (D ∨ A) is selected, there are two choices, one of which leads to the solution

34 Discussion
• Recall the overall approach:
  » for n = 0, 1, 2, …,
    » encode (P,n) as a satisfiability problem Φ
    » if Φ is satisfiable, then from the set of truth values that satisfies Φ, extract a solution plan and return it
• How well does this work?

35 Discussion (continued)
• How well does this work?
  » By itself, not very practical (takes too much memory and time)
  » But it can be combined with other techniques, e.g., planning graphs

36 Parameters of a SAT-based planner
• Encoding of the planning problem into SAT
  » Frame axioms
  » Action encoding
  » The choice of encoding is important to solver performance, since solver running time can be exponential in the size of the formula
• Simplification
• SAT solver(s)

37 Action Encoding
  Representation         One propositional variable per           Example
  Regular                fully-instantiated action                Move(r1,l1,l2)
  Simple Splitting       fully-instantiated action's argument     Move(Arg1,r1) ∧ Move(Arg2,l1) ∧ Move(Arg3,l2)
  Overloaded Splitting   fully-instantiated argument              Act(Move) ∧ Arg1(r1) ∧ Arg2(l1) ∧ Arg3(l2)
  Bitwise                binary encodings of actions              Bit1 ∧ ¬Bit2 ∧ Bit3  (Move(r1,l1,l2) = 5)
  (Going down the table: fewer variables, more clauses)

38 Linear Encoding
• Initial and goal states
• Action implies both its preconditions and its effects
• Only one action at a time
• Some action occurs at each time step (allowing for do-nothing actions)
• Classical frame axioms
• Operator splitting

39 Graphplan-based Encoding
• Goal holds at the last layer (time step)
• Initial state holds at layer 0
• A fact at level i implies the disjunction of all operators at level i−1 that have it as an add-effect
• Operators imply their preconditions
• Conflicting actions are excluded (only action mutexes are explicit; fact mutexes are implicit)

40 Graphplan Encoding
• Fact ⇒ Act1 ∨ Act2
• Act1 ⇒ Pre1 ∧ Pre2
• ¬Act1 ∨ ¬Act2
(Diagram: fact node Fact supported by action nodes Act1 and Act2; Act1 has precondition nodes Pre1 and Pre2)

41 Comparing Graphplan with SAT
• Both approaches convert parameterized action schemata into a finite propositional structure representing the space of possible plans up to a given length
  » the planning graph
  » a CNF formula
• Both approaches use local consistency methods before resorting to exhaustive search
  » mutex propagation
  » propositional simplification
• Both approaches iteratively expand their propositional structure until they find a solution
  » the planning graph is extended when no solution is found
  » the propositional formula is recreated for a longer plan length
• The planning graph can be automatically converted into CNF for solution with SAT solvers

42 Comparison with Plan-Space Planning
• Plan-space planning
  » < 5 primitive actions in solutions
  » Works best if there are few interactions between goals
• Constraint-based planning (Graphplan, SATPLAN, and descendants)
  » 100+ primitive actions in solutions
  » Moderate time horizon: < 30 time steps
  » Handles interacting goals well

43 BlackBox
• The BlackBox procedure combines planning-graph expansion and satisfiability checking
• It is roughly as follows:
  for n = 0, 1, 2, …
  » Graph expansion: create a "planning graph" that contains n "levels"
  » Check whether the planning graph satisfies a necessary (but insufficient) condition for plan existence
  » If it does, then
    » encode (P,n) as a satisfiability problem Φ, but include only the actions in the planning graph
    » if Φ is satisfiable, then return the solution
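
A sketch of this combined loop under the assumption of hypothetical helpers (expand_graph, goals_reachable, encode_from_graph, solve, extract_plan); these are placeholders for illustration, not the actual BlackBox API:

```python
# Sketch of a BlackBox-style loop: expand the planning graph, test the cheap
# necessary condition, and only then hand a (smaller) encoding to a SAT solver.

def blackbox_plan(problem, max_levels):
    graph = None
    for n in range(max_levels + 1):
        graph = expand_graph(problem, graph)        # planning graph with n levels
        # necessary (but insufficient) condition: goals present and non-mutex
        if not goals_reachable(graph, problem.goal):
            continue
        phi = encode_from_graph(graph, n)           # only actions in the graph are encoded
        model = solve(phi)
        if model is not None:
            return extract_plan(model, n)
    return None
```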

44 More about BlackBox
• The memory requirement is still combinatorially large, but smaller than with satisfiability alone
• It was one of the two fastest planners in the 1998 planning competition

45 Graph Search vs. SAT
(Chart: solution time vs. problem size/complexity, comparing SATPLAN, Graphplan, and Blackbox with a solver schedule)