Satisfiability and State-Transition Systems: An AI Perspective
Henry Kautz, University of Washington


Introduction. Both the AI and CADE/CAV communities have long been concerned with reasoning about state-transition systems: AI – planning; CADE/CAV – hardware and software verification. Recently, propositional satisfiability testing has turned out to be a surprisingly powerful tool: Planning – SATPLAN (Kautz & Selman); Verification – bounded model checking (Clarke), debugging relational specifications (Jackson).

Shift in KR&R. Traditional approach: specialized languages / specialized reasoning algorithms. New direction: compile combinatorial reasoning problems into a common propositional form (SAT), then apply new, highly efficient general search engines. Pipeline: Combinatorial Task → SAT Encoding → SAT Solver → Decoder.

Advantages. Rapid evolution of fast solvers: 1990 – hard SAT problems with 100 variables; 2000 – 100,000 variables. Sharing of algorithms and implementations across different fields of computer science: AI, theory, CAD, OR, CADE, CAV, … Competitions – Germany '91 / China '96 / DIMACS '93/'97/'98. JAR special issues – SAT 2000. RISC vs. CISC. Control knowledge can be compiled into the encodings.

OUTLINE: 1. Planning vs. Model Checking 2. Planning as Satisfiability 3. SAT + Petri Nets + Randomization = Blackbox 4. State of the Art 5. Using Domain-Specific Control Knowledge 6. Learning Domain-Specific Control Knowledge. GOAL: an overview of recent advances in planning that may (or may not!) be relevant to the CADE community!

1. Planning vs. Model Checking

The AI Planning Problem. Given a world description, a set of primitive actions, and a goal description (utility function), synthesize a control program to achieve those goals (maximize utility). The most general case covers a huge area of computer science, OR, and economics: program synthesis, control theory, decision theory, optimization, …

STRIPS-Style Planning. "Classic" work in AI has concentrated on STRIPS-style ("state space") planning: open loop – no sensing; deterministic actions; sequential (straight-line) plans. SHAKEY THE ROBOT (Fikes & Nilsson 1971). Terminology: Fluent – a time-varying proposition, e.g. "on(A,B)". State – a complete truth assignment to a set of fluents. Goal – a partial truth assignment (a set of states). Action – a partial function State → State, specified by operator schemas.

Operator Schemas. Each schema yields a set of primitive actions when instantiated over a given finite set of objects (constants). Pickup(x, y) – precondition: on(x,y), clear(x), handempty; delete: on(x,y), clear(x), handempty; add: holding(x), clear(y). Plan: a (shortest) sequence of actions that transforms the initial state into a goal state, e.g. Pickup(A,B); Putdown(A,C).
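The schema above can be made concrete with a small sketch. The Python below is not from the talk; the fluent strings and the blocks-world instantiation are illustrative. It treats a ground STRIPS action as a partial function on states represented as sets of fluents.

```python
# Minimal sketch of ground STRIPS actions as set-based state transformers.
# Fluent names and the Pickup instantiation are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset       # fluents that must hold before the action applies
    add: frozenset       # fluents made true by the action
    delete: frozenset    # fluents made false by the action

def apply_action(action, state):
    """Partial function State -> State: defined only when the preconditions hold."""
    if not action.pre <= state:
        raise ValueError(f"{action.name}: preconditions not satisfied")
    return (state - action.delete) | action.add

# Pickup(A, B), instantiated by hand from the schema above
pickup_A_B = Action(
    name="Pickup(A,B)",
    pre=frozenset({"on(A,B)", "clear(A)", "handempty"}),
    add=frozenset({"holding(A)", "clear(B)"}),
    delete=frozenset({"on(A,B)", "clear(A)", "handempty"}),
)

state0 = frozenset({"on(A,B)", "on(B,Table)", "clear(A)", "handempty"})
print(sorted(apply_action(pickup_A_B, state0)))  # ['clear(B)', 'holding(A)', 'on(B,Table)']
```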

Parallelism. A useful extension: parallel composition of primitive actions, allowed only when all orderings are well defined and equivalent – no shared preconditions / effects: (act1 || act2)(s) = act2(act1(s)) = act1(act2(s)). This can dramatically reduce the size of the search space and is easy to serialize. Distinguish the number of actions in a plan – "sequential length" – from the number of sequential composition operators in a plan – "parallel length" or "horizon". Example: (a1 || a2); (a3 || a4 || a5); a6 has sequential length 6 and parallel length 3.
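A sketch of the "no shared preconditions / effects" test, assuming the Action representation from the previous sketch; this is a simplification of the interference tests real planners (e.g. Graphplan's mutex computation) use.

```python
def can_compose_in_parallel(a1, a2):
    """(a1 || a2) is only defined when both orderings are defined and equivalent:
    neither action touches the other's preconditions, and their effects do not overlap."""
    effects1, effects2 = a1.add | a1.delete, a2.add | a2.delete
    return (not (effects1 & a2.pre)
            and not (effects2 & a1.pre)
            and not (effects1 & effects2))
```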

Some Applications of STRIPS-Style Planning. Autonomous systems – Deep Space One Remote Agent (Williams & Nayak 1997). Natural language understanding – TRAINS (Allen 1998). Internet agents – Rodney (Etzioni 1994). Manufacturing – supply chain management (Crawford 1998).

Abundance of Negative Complexity Results. Unbounded STRIPS planning: PSPACE-complete, with exponentially long solutions (Bylander 1991; Bäckström 1993). Bounded STRIPS planning: NP-complete – is there a solution of (sequential/parallel) length N? (Chenoweth 1991; Gupta and Nau 1992). Domain-specific planning: may depend on whether solutions must be the shortest such plan. Blocks world: shortest plan – NP-hard; approximately shortest plan – NP-hard (Selman 1994); plan of length 2 × number of blocks – linear time.

Approaches to AI Planning. Three main paradigms. Forward-chaining heuristic search over the state space – the original STRIPS system, with a recent resurgence (TLPlan, FF, …). "Causal link" planning – search in "plan space"; much work in the 1990's (UCPOP, NONLIN, …), little now. Constraint-based planning – view planning as solving a large set of constraints; constraints specify relationships between actions and their preconditions / effects; SATPLAN (Kautz & Selman), Graphplan (Blum & Furst).

Relationship to Model Checking. Model checking – determine whether a formula in temporal logic evaluates to "true" in a Kripke structure described by a finite state machine; the FSM may be represented explicitly or symbolically. STRIPS planning is the special case where the finite state machine (transition relation) is specified by STRIPS operators – very compact, and expressive: many other representations of FSMs can be translated into STRIPS with little or no blowup.

Relationship, continued. The formula to be checked is of the form "exists path. eventually. GOAL" – reachability; the distinctions between linear and branching temporal logics are not important here. Differences: a concentration on finding shortest plans, and an emphasis on efficiently finding a single witness (plan), as opposed to verifying that a property holds in all states – NP vs. co-NP.

Why Not Use OBDDs? The size of the OBDD explodes for typical AI benchmark domains. Overkill – we need not / cannot check all states, even if they are represented symbolically: O(2^(n^2)) states. (But see recent work by M. Veloso on using OBDDs for a non-deterministic variant of STRIPS.)

Verification using SAT. Similar phenomena occur in some verification domains, e.g. hardware multipliers. This has led to interest in using SAT techniques for verification and bug finding. Bounded – fixed horizon; under certain conditions one can prove that considering only a fixed horizon is adequate, and empirically most bugs are found with small bounds. E. Clarke – Bounded Model Checking: LTL specifications, FSM in the SMV language. D. Jackson – Nitpick: debugging relational specifications in Z.

2. Planning as Satisfiability

Planning as Satisfiability. SAT encodings are designed so that plans correspond to satisfying assignments. Use recent efficient satisfiability procedures (systematic and stochastic) to solve them. Evaluate performance on benchmark instances.

SATPLAN architecture: problem description and plan length → instantiate axiom schemas → instantiated propositional clauses → SAT engine(s) → satisfying model → interpret → plan.

SAT Encodings. Target: propositional conjunctive normal form. Sets of clauses are specified by axiom schemas: (1) create the model by hand, or (2) compile STRIPS operators. Discrete time is modeled by integers, with an upper bound on the number of time steps; predicates are indexed by the time at which the fluent holds / the action begins. Each action takes 1 time step; many actions may occur at the same step. Example: fly(Plane, City1, City2, i) → at(Plane, City2, i+1).
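A rough sketch of such a time-indexed encoding (illustrative Python, not the actual SATPLAN axiom-schema set; it only emits "action implies preconditions and effects" clauses and omits frame axioms, exclusion axioms, and initial/goal units).

```python
# Sketch of a time-indexed propositional encoding: every fluent and action gets
# one Boolean variable per time step; an action variable implies its
# preconditions at step i and its effects at step i+1.
from itertools import count

_ids = {}
_next = count(1)

def var(name):
    """Map a time-indexed proposition to a DIMACS-style integer variable."""
    if name not in _ids:
        _ids[name] = next(_next)
    return _ids[name]

def encode_action(action_name, pre, add, delete, i):
    """Clauses: action@i -> p@i for each precondition, action@i -> effect@(i+1)."""
    a = var(f"{action_name}@{i}")
    clauses = []
    for p in pre:
        clauses.append([-a, var(f"{p}@{i}")])
    for e in add:
        clauses.append([-a, var(f"{e}@{i+1}")])
    for d in delete:
        clauses.append([-a, -var(f"{d}@{i+1}")])
    return clauses

# Ground instance of the fly schema above, at time step 3
print(encode_action("fly(P1,SEA,JFK)",
                    pre=["at(P1,SEA)"], add=["at(P1,JFK)"], delete=["at(P1,SEA)"],
                    i=3))
```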

Solution to a Planning Problem. A solution is specified by any model (satisfying truth assignment) of the conjunction of the axioms describing the initial state, goal state, and operators. It is easy to convert such a model back into a STRIPS-style plan.

Complete SAT Algorithms. Davis-Putnam-Logemann-Loveland (DPLL): depth-first backtrack search on partial truth assignments; the basis of nearly all practical complete SAT algorithms (exception: Stålmarck's method). Key to efficiency: good variable choice at branch points. 1961 – unit propagation, pure literal rule; since then, an explosion of improved heuristics and implementations: MOM's heuristic; satz (Chu Min Li) – lookahead to maximize the rate of creation of binary clauses. Dependency-directed backtracking – derive new clauses during search – rel_sat (Bayardo), GRASP (Marques-Silva). See SATLIB 1998 / Hoos & Stützle.
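A bare-bones DPLL sketch over clause lists of integer literals. The naive branching rule and the example formula are illustrative; real solvers of the period add the branching heuristics, lookahead, and clause learning mentioned above.

```python
# Bare-bones DPLL: unit propagation plus chronological backtracking.
def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                                  # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                               # conflict: clause falsified
            if len(unassigned) == 1:
                assignment = assignment | {unassigned[0]} # forced unit literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment                                 # every variable decided: SAT
    v = next(iter(free))                                  # naive branching choice
    for lit in (v, -v):
        model = dpll(clauses, assignment | {lit})
        if model is not None:
            return model
    return None                                           # both branches failed: UNSAT here

print(dpll([[1, 2], [-1, 3], [-2, -3]]))                  # e.g. a model containing 1, 3, -2
```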

Incomplete SAT Algorithms. GSAT and Walksat (Kautz, Selman & Cohen 1993): randomized local search over the space of complete truth assignments. Heuristic function: flip variables to minimize the number of unsatisfied clauses; noisy "random walk" moves to escape local minima. Provably solves 2CNF; empirically successful on a broad class of problems – random CNF, graph coloring, circuit synthesis encodings (DIMACS 1993, 1997).
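A compact sketch of the WalkSAT idea – greedy flips mixed with noisy random-walk moves. The parameter values and the example formula are illustrative, not the original implementation.

```python
# Sketch of WalkSAT-style local search over complete truth assignments.
import random

def walksat(clauses, max_flips=10_000, noise=0.5):
    variables = {abs(l) for c in clauses for l in c}
    assign = {v: random.choice([True, False]) for v in variables}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                         # model found
        clause = random.choice(unsat)             # focus on one unsatisfied clause
        if random.random() < noise:
            v = abs(random.choice(clause))        # noisy random-walk move
        else:
            def cost(v):                          # greedy move: fewest unsatisfied clauses
                assign[v] = not assign[v]
                bad = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return bad
            v = min((abs(l) for l in clause), key=cost)
        assign[v] = not assign[v]
    return None                                   # gave up: the method is incomplete

print(walksat([[1, 2], [-1, 3], [-2, -3]]))
```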

Planning Benchmark Test Set. An extension of the Graphplan benchmark set: logistics – a transportation domain, ranging up to 14 time slots with unlimited parallelism, 2,165 possible actions per time slot, optimal solutions containing 74 primitive actions, and a huge number of legal states (60,000 Boolean variables). Problems of this size had not previously been handled by any domain-independent planning system.

Initial SATPLAN Results

  problem   | horizon / actions | Graphplan | naïve SAT encoding | hand SAT encoding
  rocket-b  | 7 / 30            | 9 min     | 16 min             | 41 sec
  log-a     | 11 / 47           | 13 min    | 58 min             | 1.2 min
  log-b     | 13 / 54           | 32 min    | *                  | 1.3 min
  log-c     | 13 / 63           | *         | *                  | 1.7 min
  log-d     | 14 / 74           | *         | *                  | 3.5 min

SAT solver: Walksat (local search). * indicates no solution found after 24 hours.

How SATPLAN Spent its Time

  problem   | instantiation | walksat  | DPLL    | satz
  rocket-b  | 41 sec        | 0.04 sec | 1.8 sec | 0.3 sec
  log-a     | 1.2 min       | 2.2 sec  | *       | 1.7 min
  log-b     | 1.3 min       | 3.4 sec  | *       | 0.6 sec
  log-c     | 1.7 min       | 2.1 sec  | *       | 4.3 sec
  log-d     | 3.5 min       | 7.2 sec  | *       | 1.8 hours

Hand-created SAT encodings. * indicates no solution found after 24 hours.

3. SAT + Petri Nets + Randomization = Blackbox

Automating Encodings. While SATPLAN proved the feasibility of planning using satisfiability, modeling the transition function was problematic: a direct naïve encoding of STRIPS operators as axiom schemas gave poor performance, while handcrafted encodings gave good performance but were labor-intensive to create. Similar issues arise in verification – the division of labor between the user and the model checker! GOAL: fully automatic generation and solution of planning problems from STRIPS specifications.

Graphplan. Graphplan (Blum & Furst 1995) set a new paradigm for planning. Like SATPLAN, it has two phases: instantiation of a propositional structure, followed by search. Unlike SATPLAN, it uses an efficient instantiation algorithm based on Petri-net-style reachability analysis and employs a specialized search engine. Neither approach is best for all domains – can we combine the advantages of both?

Blackbox pipeline: STRIPS → plan graph (Petri net analysis) → translator → CNF → simplifier → CNF → general SAT engines → solution.

Component 1: Petri-Net Analysis. Graphplan instantiates a "plan graph" in a forward direction, pruning (some) unreachable nodes; the plan graph corresponds to an unfolded Petri net (McMillan 1992). Polynomial-time propagation of mutual-exclusion relationships between nodes. Incomplete – it must be followed by search to determine whether all goals can be simultaneously reached.

Growing the Plan Graph [figure, shown over several slides: the plan graph is grown level by level – fact nodes (P0, P2, Q2, R2) alternate with action nodes (A1, B1, C3), and mutual-exclusion relations are propagated as levels are added]

Component 2: Translation. For the plan graph above (facts P0, P2, Q2, R2; actions A1, B1): each action implies its preconditions – A1 → P0, B1 → P0; mutual exclusion – ¬A1 ∨ ¬B1, ¬P2 ∨ ¬Q2; the initial facts hold at time 0; the goals hold at time n.
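In code form, the translation of this tiny graph might look as follows (hypothetical variable numbering, and the goal fact is assumed to be R2 purely for illustration; the real Blackbox translator also emits frame axioms and handles every level of the graph).

```python
# Sketch of the plan-graph-to-CNF translation for the tiny graph above.
A1, B1, P0, P2, Q2, R2 = 1, 2, 3, 4, 5, 6   # one Boolean variable per graph node

clauses = []
clauses += [[-A1, P0], [-B1, P0]]           # each action implies its preconditions
clauses += [[-A1, -B1], [-P2, -Q2]]         # mutually exclusive nodes cannot co-occur
clauses += [[P0], [R2]]                     # initial fact at time 0, goal at the final level
print(clauses)
```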

Component 3: Simplification. The generated wff can be further simplified by more general consistency-propagation techniques. Unit propagation: is the wff inconsistent by resolution against unit clauses? O(n). Failed literal rule: is wff + {P} inconsistent by unit propagation? O(n^2). Binary failed literal rule: is wff + {P ∨ Q} inconsistent by unit propagation? O(n^3). These general simplification techniques complement the Petri net analysis.
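A sketch of the failed-literal rule, reusing the unit_propagate function from the DPLL sketch above: if asserting a literal yields a contradiction by unit propagation alone, its negation can be added as a learned unit clause before search begins.

```python
# Failed-literal simplification sketch (depends on unit_propagate defined earlier).
def failed_literal_simplify(clauses):
    variables = {abs(l) for c in clauses for l in c}
    learned = []
    for v in variables:
        for lit in (v, -v):
            if unit_propagate(clauses, frozenset({lit})) is None:
                learned.append([-lit])        # lit cannot be true in any model
    return clauses + learned

# Example: from [[1], [-1, 2], [-2, -3]] the rule learns the units [1], [2], [-3].
print(failed_literal_simplify([[1], [-1, 2], [-2, -3]]))
```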

Effect of Simplification

Component 4: Randomized Systematic Solvers

Background. Combinatorial search methods often exhibit remarkable variability in performance. It is common to observe significant differences between different heuristics, between the same heuristic on different instances, and between different runs of the same heuristic with different random seeds.

How SATPLAN Spent its Time

  problem   | instantiation | walksat  | DPLL    | satz
  rocket-b  | 41 sec        | 0.04 sec | 1.8 sec | 0.3 sec
  log-a     | 1.2 min       | 2.2 sec  | *       | 1.7 min
  log-b     | 1.3 min       | 3.4 sec  | *       | 0.6 sec
  log-c     | 1.7 min       | 2.1 sec  | *       | 4.3 sec
  log-d     | 3.5 min       | 7.2 sec  | *       | 1.8 hours

Hand-created SAT encodings. * indicates no solution found after 24 hours.

Preview of Strategy. We'll turn variability / unpredictability to our advantage via randomization and averaging.

Cost Distributions. Consider the distribution of running times of backtrack search on a large set of "equivalent" problem instances (renumber the variables, or change the random seed used to break ties). Observation (Gomes 1996): the distributions often have heavy tails – infinite variance, a mean that increases without limit, and a probability of long runs that decays by a power law (Pareto-Lévy) rather than exponentially (normal).

Heavy Tails. Bad scaling of systematic solvers can be caused by heavy-tailed distributions. Deterministic algorithms get stuck on particular instances, but that same instance might be easy for a different deterministic algorithm! The expected (mean) solution time increases without limit over large distributions. A log-log plot of the distribution of running times is approximately linear.
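The diagnostic the slide describes can be sketched in a few lines: compute the empirical survival function P(T > t) of the run times and inspect it on log-log axes. Synthetic Pareto-distributed run times are used here as a stand-in for real solver data.

```python
# Heavy-tail diagnostic sketch: P(T > t) is roughly a straight line on log-log
# axes when run times follow a Pareto-type power law.
import math, random

random.seed(0)
# Synthetic run times with a Pareto tail, P(T > t) ~ t^(-alpha), alpha < 1
runtimes = sorted(random.paretovariate(0.8) for _ in range(10_000))

n = len(runtimes)
for i in range(0, n, n // 10):
    t = runtimes[i]
    survival = (n - i) / n                    # fraction of runs longer than t
    print(f"log10 t = {math.log10(t):6.2f}   log10 P(T > t) = {math.log10(survival):6.2f}")
```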

Heavy-Tailed Distributions. … infinite variance … infinite mean. Introduced by Pareto in the 1920's as a "probabilistic curiosity". Mandelbrot established the use of heavy-tailed distributions to model real-world fractal phenomena: the stock market, Internet traffic delays, the weather. New discovery: they are a good model for backtrack search algorithms – a formal statement of the "folk wisdom" of the theorem-proving community.

Randomized Restarts. Solution: randomize the systematic solver. Add noise to the heuristic branching (variable choice) function; cut off and restart the search after a fixed number of backtracks. This provably eliminates heavy tails. In practice, rapid restarts with a low cutoff can dramatically improve performance (Gomes, Kautz, and Selman 1997, 1998). Related analysis: Luby & Zuckerman 1993; Alt & Karp 1996.
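A sketch of the cutoff-and-restart wrapper, again reusing unit_propagate from the DPLL sketch above. The cutoff value, branching randomization, and restart limit are illustrative, not the settings used in Blackbox.

```python
# Randomized DPLL with a backtrack cutoff, wrapped in a rapid-restart loop.
import random

class Cutoff(Exception):
    pass

def dpll_with_cutoff(clauses, assignment, budget, rng):
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        if budget[0] <= 0:
            raise Cutoff()
        budget[0] -= 1                         # count one backtrack against the budget
        return None
    free = list({abs(l) for c in clauses for l in c} - {abs(l) for l in assignment})
    if not free:
        return assignment
    v = rng.choice(free)                       # randomized branching / tie-breaking
    for lit in rng.sample([v, -v], 2):
        model = dpll_with_cutoff(clauses, assignment | {lit}, budget, rng)
        if model is not None:
            return model
    return None

def restart_solver(clauses, cutoff=50, max_restarts=100, seed=0):
    rng = random.Random(seed)
    for r in range(max_restarts):
        try:
            model = dpll_with_cutoff(clauses, frozenset(), [cutoff], rng)
        except Cutoff:
            continue                           # hit the cutoff: restart with new randomness
        return model, r                        # a model, or None if proved UNSAT
    return None, max_restarts                  # gave up after max_restarts cutoffs

print(restart_solver([[1, 2], [-1, 3], [-2, -3], [-1, -2]]))
```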

Rapid Restart on LOG.D [plot]. Note the log scale: an exponential speedup!

Overall insight: randomized tie-breaking with rapid restarts can boost systematic search algorithms. The speedup has been demonstrated in many versions of Davis-Putnam: basic DPLL, satz, rel_sat, … Related analysis: Luby & Zuckerman 1993; Alt & Karp 1996.

Blackbox Results

  problem   | naïve SAT encoding | hand SAT encoding | blackbox walksat | blackbox satz-rand
  rocket-b  | 16 min             | 41 sec            | 2.5 sec          | 4.9 sec
  log-a     | 58 min             | 1.2 min           | 7.4 sec          | 5.2 sec
  log-b     | *                  | 1.3 min           | 1.7 min          | 7.1 sec
  log-c     | *                  | 1.7 min           | 15 min           | 9.3 sec
  log-d     | *                  | 3.5 min           | *                | 52 sec

Naïve/hand SAT solver: Walksat (local search). * indicates no solution found after 24 hours.

4. State of the Art

Which Strategies Work Best? Causal-link planning: <5 primitive actions in solutions; works best when there are few interactions between goals. Constraint-based planning (Graphplan, SATPLAN, and descendants): 100+ primitive actions in solutions, moderate time horizon (<30 time steps), and handles interacting goals well. 1995–1999: constraint-based approaches dominated AIPS 1996 and AIPS 1998.

Graph Search vs. SAT [plot: time vs. problem size / complexity for Graphplan, SATPLAN, and Blackbox with a solver schedule]. Caveat: on some domains the SAT approach can exhaust memory even though direct graph search is easy.

Resurgence of A* Search. Through most of the 1980's–1990's, forward-chaining A* search was considered a non-starter for planning. Voices in the wilderness: TLPlan (Bacchus) – a hand-tuned heuristic function could make the approach feasible; LRTA (Geffner) – good heuristic functions can be derived automatically. Surprise – the AIPS-2000 planning competition was dominated by A* planners! What happened?

Solution Length vs. Hardness. Key issue: the relationship between solution length and problem hardness. RECALL: in many domains, finding solutions that minimize the number of time steps is NP-hard, while finding an arbitrary solution is in P (put all the blocks on the table first; deliver the packages one at a time). Long solutions minimize goal interactions, so little or no backtracking is required by forward-chaining search. The AIPS-2000 planning competition did not consider plan-length criteria!

Non-Optimal Planning

Optimal-Length Planning

Which Works Best, Continued. Constraint-based planning: short parallel solutions desired; many interactions between goals; the SAT translation is a win for larger problems where time is dominated by search (as opposed to instantiation and Petri net analysis). Forward-chaining search: long sequential solutions okay; few interactions between goals. Much recent progress in domain-independent planning… but further scaling to large real-world problems requires domain-dependent techniques!

5. Using Domain-Specific Control Knowledge

Kinds of Domain-Specific Knowledge. Invariants, true in every state: a truck is only in one location. Implicit constraints on optimal plans: do not remove a package from its destination location. Simplifying assumptions: do not unload a package from an airplane if the airplane is not at the package's destination city – this eliminates connecting flights.

Expressing Knowledge. Such information is traditionally incorporated into the planning algorithm itself. Instead: use additional declarative axioms (Bacchus 1995; Kautz 1998; Huang, Kautz, & Selman 1999). A problem instance is then: operator axioms + initial and goal axioms + control axioms. Control knowledge acts as constraints on the search and solution spaces, independent of any search engine strategy.

Axiomatic Form. State invariant: at(truck,loc1,i) & loc1 ≠ loc2 → ¬at(truck,loc2,i). Optimality: at(pkg,loc,i) & ¬at(pkg,loc,i+1) & i < j → ¬at(pkg,loc,j). Simplifying assumption: incity(airport,city) & at(pkg,loc,goal) & ¬incity(loc,city) → ¬unload(pkg,plane,airport).

Adding Control Knowledge. Pipeline: problem specification axioms + domain-specific control axioms → instantiated clauses (SAT) → SAT simplifier → SAT "core" → SAT engine. As control knowledge increases, the core shrinks!

Effect of Domain Knowledge

  problem   | walksat  | walksat + Kx | DPLL    | DPLL + Kx
  rocket-b  | 0.04 sec |              | 1.8 sec | 0.13 sec
  log-a     | 2.2 sec  | 0.11 sec     | *       | 1.8 min
  log-b     | 3.4 sec  | 0.08 sec     | *       | 11 sec
  log-c     | 2.1 sec  | 0.12 sec     | *       | 7.8 min
  log-d     | 7.2 sec  | 1.1 sec      | *       | *

Hand-created SAT encodings. * indicates no solution found after 24 hours.

6. Learning Domain-Specific Control Knowledge

Learning Control Rules. Axiomatizing domain-specific control knowledge by hand is a time-consuming art… Certain kinds of knowledge can be efficiently deduced – simple classes of invariants (Fox & Long; Gerevini & Schubert). Can more powerful control knowledge be learned automatically, by watching the planner solve small instances?

Form of Rules. We will learn two kinds of control rules, specified as temporal logic programs (Huang, Selman, & Kautz 2000). Select rule: conditions under which an action must be performed at the current time instance. Reject rule: conditions under which an action must not be performed at the current time instance. Example: incity(airport,city) & GOAL(at(pkg,loc)) & ¬incity(loc,city) → ¬unload(pkg,plane,airport).

Training Examples. Blackbox initially solves a few small problem instances. Each instance yields: POSITIVE training examples – states at which actions occur in the solution; NEGATIVE training examples – states at which an action does NOT occur, even though its preconditions hold in that state. Note that this data is very noisy!

Rule Induction. Rules are induced using a version of Quinlan's FOIL inductive logic programming algorithm, which generates rules one literal at a time. Select rules: maximize coverage of positive examples, but do not cover negative examples. Reject rules: maximize coverage of negative examples, but do not cover positive examples. Prune rules that are inconsistent with any of the problem instances. For details, see "Learning Declarative Control Rules for Constraint-Based Planning", Huang, Selman, & Kautz, ICML 2000.
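A greatly simplified, propositional sketch of this greedy rule growth; the actual FOIL algorithm is first-order and uses an information-gain heuristic, and the feature names and examples below are invented for illustration.

```python
# Greedy rule growth: repeatedly add the condition that best separates
# positive from negative examples, until no negatives remain covered.
def grow_rule(positives, negatives, candidate_literals):
    body = set()
    covered_pos, covered_neg = list(positives), list(negatives)
    while covered_neg and candidate_literals:
        def score(lit):
            pos = sum(lit in ex for ex in covered_pos)
            neg = sum(lit in ex for ex in covered_neg)
            return pos - neg                  # crude gain; FOIL uses information gain
        best = max(candidate_literals, key=score)
        candidate_literals = candidate_literals - {best}
        body.add(best)
        covered_pos = [ex for ex in covered_pos if best in ex]
        covered_neg = [ex for ex in covered_neg if best in ex]
    return body                               # conjunction of rule conditions

positives = [{"at_airport", "goal_in_this_city"},
             {"at_airport", "goal_in_this_city", "fuelled"}]
negatives = [{"at_airport"}, {"fuelled"}]
print(grow_rule(positives, negatives, {"at_airport", "goal_in_this_city", "fuelled"}))
# e.g. {'goal_in_this_city'}: only unload when the goal location is in this city
```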

Logical Status of Induced Rules. Some of the learned rules could in principle be deduced from the domain operators together with a bound on the length of the plan (e.g. reject rules for unnecessary actions). But in general, the rules are not deductive consequences: they could rule out some feasible solutions, and in the worst case could rule out all solutions to some instances. This is not a problem in practice: such rules are usually quickly pruned in the training phase.

Effect of Learning [table: horizon and run time of blackbox vs. blackbox with learned control rules on AIPS-98 competition benchmarks – grid-a, grid-b, gripper-3, gripper-4, log-d, log-e, mystery-10, mystery; the numeric entries are garbled in this transcript]

Summary. There are close connections between much work in AI planning and CADE/CAV work on model checking. General satisfiability-testing programs have had remarkable recent success on hard benchmark problems. The success of Blackbox and Graphplan in combining ideas from planning and verification suggests that many more synergies exist. Techniques for learning and applying domain-specific control knowledge dramatically boost performance for planning – could these ideas also be applied to verification?