Logical Foundations of AI: Planning as Satisfiability, Clause Learning, Backdoors to Hardness. Henry Kautz.


Logical Foundations of AI: Planning as Satisfiability, Clause Learning, Backdoors to Hardness. Henry Kautz

SATPLAN architecture: the STRIPS problem description and a plan length bound go to the encoder, which produces a CNF formula and a mapping; the SAT engine finds a satisfying model; the interpreter uses the mapping to turn the model into a plan.
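A minimal sketch (my pseudocode, not the actual tool) of the loop this architecture implies: try increasing plan lengths until the SAT engine returns a model. The functions encode, sat_solve, and decode are hypothetical stand-ins for the encoder, SAT engine, and interpreter boxes above.

    def satplan(problem, max_length, encode, sat_solve, decode):
        """Iterative-lengthening SATPLAN loop (schematic)."""
        for k in range(1, max_length + 1):
            cnf, mapping = encode(problem, k)   # encoder: STRIPS description + length -> CNF + mapping
            model = sat_solve(cnf)              # SAT engine: satisfying model, or None if unsatisfiable
            if model is not None:
                return decode(model, mapping)   # interpreter: model + mapping -> plan
        return None                             # no plan within the length bound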

Translating STRIPS. Ground action = a STRIPS operator with constants assigned to all of its parameters. Ground fluent = a precondition or effect of a ground action. Operator: Fly(a,b); precondition: At(a), Fueled; effect: At(b), ~At(a), ~Fueled; constants: NY, Boston, Seattle. Ground actions: Fly(NY,Boston), Fly(NY,Seattle), Fly(Boston,NY), Fly(Boston,Seattle), Fly(Seattle,NY), Fly(Seattle,Boston). Ground fluents: Fueled, At(NY), At(Boston), At(Seattle)
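A minimal sketch in Python (my own illustration, using a made-up structured representation of the operator) of how this grounding could be computed: instantiate the operator's parameters with every ordered pair of distinct constants and collect the fluents mentioned by the resulting preconditions and effects.

    from itertools import permutations

    # Hypothetical structured encoding of the Fly(a,b) operator from the slide:
    # each literal is (sign, predicate, args), where args may mention parameters.
    operator = {
        "name": "Fly",
        "params": ("a", "b"),
        "precondition": [("+", "At", ("a",)), ("+", "Fueled", ())],
        "effect": [("+", "At", ("b",)), ("-", "At", ("a",)), ("-", "Fueled", ())],
    }
    constants = ("NY", "Boston", "Seattle")

    def fmt(pred, args):
        return f"{pred}({','.join(args)})" if args else pred

    def ground(op, constants):
        """Instantiate the parameters with all ordered pairs of distinct constants."""
        actions, fluents = [], set()
        for binding in permutations(constants, len(op["params"])):
            subst = dict(zip(op["params"], binding))
            actions.append(fmt(op["name"], binding))
            for _, pred, args in op["precondition"] + op["effect"]:
                fluents.add(fmt(pred, tuple(subst.get(a, a) for a in args)))
        return actions, sorted(fluents)

    actions, fluents = ground(operator, constants)
    print(actions)  # ['Fly(NY,Boston)', 'Fly(NY,Seattle)', ..., 6 ground actions in all]
    print(fluents)  # ['At(Boston)', 'At(NY)', 'At(Seattle)', 'Fueled']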

Clause Schemas

Existential Quantification

Named Sets

SAT Encoding. Time is sequential and discrete: –Represented by integers –Actions occur instantaneously at a time point –Each fluent is true or false at each time point. If an action occurs at time i, then its preconditions must hold at time i. If an action occurs at time i, then its effects must hold at time i+1. If a fluent changes its truth value from time i to time i+1, one of the actions with the new value as an effect must have occurred at time i. Two conflicting actions cannot occur at the same time. The initial state holds at time 0, and the goals hold at a given final time K
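A minimal sketch (my own naming scheme, not from the slides) of the bookkeeping behind such an encoding: every ground fluent or ground action paired with a time step becomes one propositional variable, and all of the constraints above are emitted as clauses over these variables.

    # Map (symbol, time) pairs to integer SAT variables, DIMACS-style (1, 2, 3, ...).
    class VarMap:
        def __init__(self):
            self.index = {}
        def var(self, symbol, t):
            """Return the variable number for `symbol` holding/occurring at time t."""
            key = (symbol, t)
            if key not in self.index:
                self.index[key] = len(self.index) + 1
            return self.index[key]

    vm = VarMap()
    K = 3  # plan length bound
    # e.g. the fluent At(NY) at time 0 and the action Fly(NY,Boston) at time 0:
    print(vm.var("At(NY)", 0), vm.var("Fly(NY,Boston)", 0))   # 1 2
    # Clauses are lists of signed variable numbers, e.g. the goal At(Seattle) at time K:
    goal_clause = [vm.var("At(Seattle)", K)]
    print(goal_clause)                                        # [3]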

SAT Encoding. If an action occurs at time i, then its preconditions must hold at time i. Operator: Fly(a,b); precondition: At(a), Fueled; effect: At(b), ~At(a), ~Fueled; cities: NY, Boston, Seattle
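For example (my notation, not text from the original slide): for the ground action Fly(NY,Boston) at time i, this schema yields the clauses (~Fly(NY,Boston,i) V At(NY,i)) and (~Fly(NY,Boston,i) V Fueled(i)), i.e., if the action occurs at time i, each of its preconditions holds at time i.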

SAT Encoding. If an action occurs at time i, then its effects must hold at time i+1. Operator: Fly(a,b); precondition: At(a), Fueled; effect: At(b), ~At(a), ~Fueled; constants: NY, Boston, Seattle
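For example (my notation, not text from the original slide): for Fly(NY,Boston) at time i, this schema yields (~Fly(NY,Boston,i) V At(Boston,i+1)), (~Fly(NY,Boston,i) V ~At(NY,i+1)), and (~Fly(NY,Boston,i) V ~Fueled(i+1)).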

SAT Encoding. If a fluent changes its truth value from time i to time i+1, one of the actions with the new value as an effect must have occurred at time i. Operator: Fly(a,b); precondition: At(a), Fueled; effect: At(b), ~At(a), ~Fueled; cities: NY, Boston, Seattle
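For example (my notation, not text from the original slide): if At(Boston) is false at time i and true at time i+1, some action that adds At(Boston) must have occurred at time i, giving the clause (At(Boston,i) V ~At(Boston,i+1) V Fly(NY,Boston,i) V Fly(Seattle,Boston,i)).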

SAT Encoding. If a fluent changes its truth value from time i to time i+1, one of the actions with the new value as an effect must have occurred at time i. Change from true to false. Operator: Fly(a,b); precondition: At(a), Fueled; effect: At(b), ~At(a), ~Fueled; cities: NY, Boston, Seattle
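For example (my notation, not text from the original slide): if At(NY) is true at time i and false at time i+1, some action that deletes At(NY) must have occurred at time i, giving the clause (~At(NY,i) V At(NY,i+1) V Fly(NY,Boston,i) V Fly(NY,Seattle,i)).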

Action Mutual Exclusion. Two conflicting actions cannot occur at the same time. –Actions conflict if one modifies a precondition or effect of another. Operator: Fly(a,b); precondition: At(a), Fueled; effect: At(b), ~At(a), ~Fueled; cities: NY, Boston, Seattle
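For example (my notation, not text from the original slide): Fly(NY,Boston) deletes At(NY), a precondition of Fly(NY,Seattle), so the two actions conflict and the encoding adds the clause (~Fly(NY,Boston,i) V ~Fly(NY,Seattle,i)) for every time i.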

Satplan Demo (blackbox)

The IPC-4 Domains Airport: control the ground traffic [Hoffmann & Trüg] Pipesworld: control oil product flow in a pipeline network [Liporace & Hoffmann] Promela: find deadlocks in communication protocols [Edelkamp] PSR: resupply lines in a faulty electricity network [Thiebaux & Hoffmann] Satellite & Settlers [Fox & Long], additional Satellite versions with time windows for sending data [Hoffmann] UMTS: set up applications for mobile terminals [Edelkamp & Englert]

The Competitors: Optimal planners

PSR

Dining Philosophers

Airport

IPC-4, hosted at the International Conference on Automated Planning and Scheduling, Whistler, June 6, 2004. Stefan Edelkamp and Jörg Hoffmann, IPC-4 Co-Chairs. Classical Part Performance Award, 1st Prize, Optimal Track: Henry Kautz, David Roznyai, Farhad Teydaye-Saheli, Shane Neth and Michael Lindmark, “SATPLAN04”

Power of Modern DPLL Solvers: Branching Heuristics. Choose the variable to assign that would cause the greatest amount of simplification. Simple frequency heuristic: choose variables by the frequency with which they occur in the original formula. Current frequency heuristic: branch on a variable that occurs most frequently in the current set of clauses. "MOMS" heuristic: branch on a variable that has the maximum number of occurrences in clauses of the shortest length in the current set of clauses
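A minimal sketch (my code, assuming the current clause set is a list of clauses, each a list of nonzero integer literals) of the MOMS idea:

    from collections import Counter

    def moms_branch_variable(clauses):
        """Pick the variable with the Maximum number of Occurrences in clauses
        of Minimum Size (i.e., the current shortest clauses)."""
        min_len = min(len(c) for c in clauses)
        counts = Counter(abs(lit) for c in clauses if len(c) == min_len for lit in c)
        return counts.most_common(1)[0][0]

    # On the clause set [[1, -2], [2, 3], [-2, -3], [1, 2, 3]] the shortest clauses
    # have length 2 and variable 2 occurs in all three of them:
    print(moms_branch_variable([[1, -2], [2, 3], [-2, -3], [1, 2, 3]]))  # 2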

Power of Modern DPLL Solvers: Clause Learning. DPLL only finds tree-shaped proofs; there might be much smaller DAG proofs. Idea: when DPLL backtracks, determine the particular choices that resulted in the empty clause. Record these choices as a derived clause and add it back to the set of clauses. Result: the proof corresponding to the DPLL search tree becomes DAG-ish

How: Clause Learning. Idea: backtrack search can repeatedly reach an empty clause (backtrack point) for the same reason. [Search tree figure: branching on A, B, C.] Example: Propagation from B=1 and C=1 leads to an empty clause

How: Clause Learning. If the reason was remembered, then the solver could avoid having to rediscover it. [Search tree figure: branching on A, B, C, D.] Example: Propagation from B=1 and C=1 leads to an empty clause. I had better set C=0 immediately!

How: Clause Learning. The reason can be remembered by adding a new learned clause to the formula. [Search tree figure; learn (~B V ~C).] Example: Propagation from B=1 and C=1 leads to an empty clause. Set C=0 by unit propagation

How: Clause Learning. Savings can be huge. [Search tree figure: branching on A, B, C, D, E with repeated empty clauses.] Example: Setting B=1 is the root cause of reaching an empty clause

How: Clause Learning. Savings can be huge. [Search tree figure: branching on A, B, C, D, E.] Example: Setting B=1 is the root cause of reaching an empty clause. Have learned the unit clause (~B)

Determining Reasons. The reason for a backtrack can be found by (incrementally) constructing a graph that records the results of unit propagations. Cuts in the graph yield learned clauses
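A minimal sketch (my code, not the slides' implementation) of the simplest such scheme: record, for each implied literal, the clause that forced it; when a clause is falsified, walk those reasons back to the decision literals and learn the clause that rules out that combination of decisions.

    def analyze_conflict(conflict_clause, reason, decisions):
        """Walk implication reasons back from a falsified clause to the decisions
        responsible for it; return the learned clause (the negated decisions).
        reason[lit] is the clause that forced literal lit; decisions is the set
        of decision literals currently on the trail. Literals are signed ints."""
        seen, stack, guilty = set(), list(conflict_clause), set()
        while stack:
            lit = stack.pop()
            if lit in seen:
                continue
            seen.add(lit)
            forced = -lit                      # the trail literal that falsified lit
            if forced in decisions:
                guilty.add(forced)
            elif forced in reason:             # implied literal: chase its antecedents
                stack.extend(l for l in reason[forced] if l != forced)
        return sorted((-d for d in guilty), key=abs)

    # Toy run in the spirit of the slides, with B, C, D encoded as variables 2, 3, 4:
    # decisions B=1 and C=1, clause (~B V ~C V D) forces D, clause (~C V ~D) is falsified.
    reason = {4: [-2, -3, 4]}
    decisions = {2, 3}
    print(analyze_conflict([-3, -4], reason, decisions))   # [-2, -3], i.e. learn (~B V ~C)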

Scaling Up Clause Learning. Clause learning greatly enhances the power of unit propagation. Tradeoff: memory needed for the learned clauses, and time needed to check whether they cause propagations. Clever data structures enable modern SAT solvers to manage millions of learned clauses efficiently
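One concrete example of such a data structure is the widely used two-watched-literals scheme; the sketch below is my own illustration, not the slides' code. Each clause watches two of its literals, and the solver touches the clause only when one of its watched literals becomes false.

    def is_false(lit, assignment):
        """assignment maps variable -> True/False; unassigned variables are absent."""
        val = assignment.get(abs(lit))
        return val is not None and val != (lit > 0)

    def on_watch_falsified(clause, watched, assignment):
        """One of the two watched literals of `clause` has just become false.
        Returns ('watch', lit) to move the watch, ('satisfied', None),
        ('unit', lit) if the other watch is forced, or ('conflict', None)."""
        other = watched[1] if is_false(watched[0], assignment) else watched[0]
        for lit in clause:                                   # look for a replacement watch
            if lit not in watched and not is_false(lit, assignment):
                return ('watch', lit)
        if abs(other) not in assignment:
            return ('unit', other)                           # clause has become unit
        if not is_false(other, assignment):
            return ('satisfied', None)                       # other watch is already true
        return ('conflict', None)                            # every literal is false

    # Clause (~B V ~C V D) as [-2, -3, 4] with B, C, D = 2, 3, 4, watching -2 and -3:
    print(on_watch_falsified([-2, -3, 4], (-2, -3), {3: True}))          # ('watch', 4)
    print(on_watch_falsified([-2, -3, 4], (-2, 4), {2: True, 3: True}))  # ('unit', 4)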

Why Do These Techniques Work So Well? "Backdoors" to Complexity. SAT encodings of problems in planning, verification, and other real-world domains often contain hundreds of thousands of variables. However, it is often the case that setting just a few variables dramatically simplifies the formula. –Branching heuristics: choosing the right variables. –Clause learning: increases the amount of simplification performed by unit propagation
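A minimal sketch (my code, using unit propagation as the polynomial-time sub-solver from the formal backdoor definition) of what "setting just a few variables dramatically simplifies the formula" means: a set of variables is a strong backdoor if every assignment to it lets the sub-solver decide the rest of the formula on its own.

    from itertools import product

    def unit_propagate(clauses, assignment):
        """Simplify under `assignment`, repeatedly assigning forced literals.
        Returns 'SAT', 'UNSAT', or 'UNDECIDED' (clauses are lists of signed ints)."""
        assignment = dict(assignment)
        changed = True
        while changed:
            changed = False
            remaining = []
            for clause in clauses:
                satisfied, lits = False, []
                for lit in clause:
                    val = assignment.get(abs(lit))
                    if val is None:
                        lits.append(lit)             # still undetermined
                    elif val == (lit > 0):
                        satisfied = True             # clause already true
                        break
                if satisfied:
                    continue
                if not lits:
                    return 'UNSAT'                   # clause falsified (empty clause)
                if len(lits) == 1:
                    assignment[abs(lits[0])] = lits[0] > 0   # forced (unit) literal
                    changed = True
                else:
                    remaining.append(lits)
            clauses = remaining
        return 'SAT' if not clauses else 'UNDECIDED'

    def is_strong_backdoor(clauses, variables):
        """True if every assignment to `variables` is decided by unit propagation alone."""
        for values in product([False, True], repeat=len(variables)):
            if unit_propagate(clauses, dict(zip(variables, values))) == 'UNDECIDED':
                return False
        return True

    # Toy formula (~x1 V x2) & (~x1 V x3) & (x1 V ~x2): {x1} is a strong backdoor here.
    print(is_strong_backdoor([[-1, 2], [-1, 3], [1, -2]], [1]))   # True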

Backdoors can be surprisingly small. Most recent results: other combinatorial domains, e.g. graphplan planning, show near-constant-size backdoors (2 or 3 variables), and log(n) size in certain domains (Hoffmann, Gomes, Selman ’04). Backdoors capture critical problem resources (bottlenecks).

Backdoors --- “seeing is believing”. Logistics_b.cnf planning formula: 843 vars, 7,301 clauses, approx. min backdoor 16 (backdoor set = reasoning shortcut). Constraint graph of the reasoning problem: one node per variable; edge between two variables if they share a constraint.

Logistics.b.cnf after setting 5 backdoor vars.

After setting just 12 (out of 800+) backdoor vars – problem almost solved.

Another example: MAP-6-7.cnf, an infeasible planning instance. 392 vars, 2,578 clauses; small strong backdoor.

After setting 2 (out of 392) backdoor vars. --- reducing problem complexity in just a few steps!

Last example: inductive inference problem ii16a1.cnf. 1,600+ vars, 19,368 clauses. Backdoor size 40.

After setting 6 backdoor vars.

After setting 38 (out of 1600+) backdoor vars, and some other intermediate stages. So: real-world structure hidden in the network can be exploited by automated reasoning engines.