Backtracking search: look-back


Backtracking search: look-back ICS 179 Spring 2010

Look-back: Backjumping / Learning
Backjumping: at dead-ends, jump back to the most recent culprit rather than to the immediately preceding variable.
Learning: constraint recording, no-good recording (and good recording).

Backjumping
(x1=r, x2=b, x3=b, x4=b, x5=g, x6=r, x7={r,b})
Leaf dead-end: (r,b,b,b,g,r)
(r,b,b,b,g,r) is a conflict set of x7
(r,-,b,b,g,-) is also a conflict set of x7
(r,-,b,-,-,-) is a minimal conflict set
Every conflict set is a no-good.

Gaschnig jumps only at leaf dead-ends.
Internal dead-ends: dead-ends that are non-leaf.

Backjumping styles:
Jump at leaf dead-ends only (Gaschnig, 1977): context-based.
Graph-based (Dechter, 1990): jumps at leaf and internal dead-ends, uses graph information.
Conflict-directed (Prosser, 1993): context-based, jumps at leaf and internal dead-ends.

Gaschnig's backjumping: culprit variable
If a_i = (a_1, ..., a_i) is a leaf dead-end and x_b is its culprit variable, then a_b is a safe backjump destination, while a_j with j < b is not.
Example: the culprit of the leaf dead-end (r,b,b,b,g,r) at x7 is x3, with culprit prefix (r,b,b).

Gaschnig's backjumping: implementation (1979)
Gaschnig uses a marking technique to compute the culprit: each variable x_j maintains a pointer latest_j to the latest ancestor found incompatible with any of its values.
While generating assignments forward, keep the array latest_j, 1 <= j <= n, of pointers to the deepest level whose assignment conflicted with some value of x_j.
At a leaf dead-end x_{i+1}, the algorithm jumps back to latest_{i+1}, which is its culprit.
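A minimal Python sketch of the marking scheme (illustrative only, not Gaschnig's code; check(a, k, j, v) is an assumed helper that tests value v for x_j against the assignment a[k] of x_k):

    def select_value_gbj(j, domain, a, check):
        # latest = deepest prefix level reached while testing values of
        # x_j; at a dead-end it is Gaschnig's culprit level.
        latest = 0
        for v in domain:
            k = 1
            while k < j and check(a, k, j, v):  # v consistent with a[k]?
                k += 1
            latest = max(latest, k)
            if k == j:                # v consistent with the whole prefix
                return v, latest
        return None, latest           # dead-end: jump back to x_latest

    # Tiny demo (graph coloring: x_j must differ from every ancestor):
    neq = lambda a, k, j, v: a[k] != v
    print(select_value_gbj(3, ['r'], {1: 'b', 2: 'r'}, neq))  # (None, 2)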

Graph-based backjumping scenarios, internal dead-end at x4:
Scenario 1: dead-end at x4.
Scenario 2: dead-end at x5.
Scenario 3: dead-end at x7.
Scenario 4: dead-end at x6.
[The slide's diagram shows the induced ancestor set I for each scenario.]

Graph-based backjumping
Uses only graph information to find the culprit; jumps both at leaf and at internal dead-ends.
Whenever a dead-end occurs at x, it jumps to the most recent variable y connected to x in the constraint graph; if y is an internal dead-end, it jumps back further to the most recent variable connected to x or y.
The analysis of conflicts is approximated by the graph, so graph-based algorithms provide graph-theoretic bounds. A sketch of the jump rule follows.
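An illustrative Python sketch of this jump rule (names are ours, not the book's): neighbors maps each variable to the variables it shares constraints with, and order is the instantiation ordering.

    def graph_based_jump(deadend_vars, order, neighbors):
        # Most recent ancestor connected to any variable in the current
        # dead-end session (x plus any earlier internal dead-ends).
        relevant = set().union(*(neighbors[x] for x in deadend_vars))
        cutoff = min(order.index(x) for x in deadend_vars)
        for y in reversed(order[:cutoff]):  # ancestors, most recent first
            if y in relevant:
                return y                    # backjump destination
        return None                         # no culprit: inconsistent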

Graph-based backjumping on DFS orderings

DFS of a graph and induced graphs
Spanning trees of a graph: DFS spanning trees, BFS spanning trees.

Complexity of backjumping, via pseudo-tree analysis
Simple rule: always jump back to the parent in the pseudo tree.
Complexity for CSPs: exp(tree-depth), which can be bounded by exp(w* log n).

Look-back: no-good learning
Learning means recording conflict sets, which are used as constraints to prune future search. Example: (x1=2, x2=2, x3=1, x4=2) is a dead-end. Conflicts to record:
(x1=2, x2=2, x3=1, x4=2): 4-ary
(x3=1, x4=2): binary
(x4=2): unary
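A minimal sketch of no-good recording in Python (hypothetical helper names): store each recorded conflict set and prune any future partial assignment that contains one.

    nogoods = set()

    def record_nogood(conflict):
        # conflict: tuple of (variable, value) pairs, e.g. (('x3', 1), ('x4', 2))
        nogoods.add(frozenset(conflict))

    def is_pruned(partial):
        # True if the partial assignment contains some recorded no-good.
        items = set(partial.items())
        return any(ng <= items for ng in nogoods)

    record_nogood((('x3', 1), ('x4', 2)))          # the binary conflict above
    print(is_pruned({'x1': 5, 'x3': 1, 'x4': 2}))  # True: prune this branch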

Learning (constraint recording)
Learning means recording conflict sets. An opportunity to learn arises whenever a dead-end is discovered; the goal is to avoid rediscovering the same dead-ends. Try to identify small conflict sets. Learning prunes the search space.

Learning example

Learning issues
Learning styles:
Graph-based or context-based
i-bounded, scope-bounded
Relevance-based
Non-systematic randomized learning
Learning implies time and space overhead. Applicable to SAT.

Complexity of backtrack-learning for CSPs
The complexity of learning along ordering d is time- and space-exponential in the induced width w*(d):
The number of dead-ends is bounded by the number of recordable no-goods, O(n k^{w*(d)}).
Space complexity is O(n k^{w*(d)}).
Learning combined with backjumping runs in time O(n m e k^{w*(d)}), where m is the depth of the tree, e the number of constraints, and k the domain size.

Non-systematic randomized learning
Search in a randomized way, with interrupts, restarts, and unsafe backjumping, but record conflicts; the recorded no-goods guarantee completeness.

Relationships between the various backtracking algorithms

Empirical comparison of algorithms
Benchmark instances: random problems and application-based random problems.
Generate fixed-length random k-SAT (n, m) uniformly at random.
Generate fixed-length random CSPs (N, K, T, C); also arity r.

Average-case complexity of random problems
Average-case analysis needs a distribution P, e.g., the probability that a literal appears in a clause; such models can be solved in polynomial time on average.
Instead, generate random k-CNF: fixed-length formulas with m clauses, all of size 3 (or k), selected uniformly at random.
What is the performance when the number of clauses is small? When it is large? In between? Hardness peaks at m/n = 4.2, and at that point the probability of satisfiability also shifts from 1 to 0.
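A generator for this fixed-clause-length model, as a brief Python sketch (literals are signed integers, a common SAT convention):

    import random

    def random_k_cnf(n, m, k=3, rng=random):
        # m clauses, each over k distinct variables chosen uniformly,
        # each variable negated with probability 1/2.
        return [[v if rng.random() < 0.5 else -v
                 for v in rng.sample(range(1, n + 1), k)]
                for _ in range(m)]

    formula = random_k_cnf(n=100, m=420)   # near the m/n = 4.2 peak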

The Phase transition (m/n)

Some empirical evaluation
Sets 1-3 report averages over 2000 instances of random CSPs at 50% hardness. Set 1: 200 variables; Set 2: 300; Set 3: 350; all with 3 values. Also: DIMACS problems.

State-of-the-art in SAT solvers Vibhav Gogate

SAT formulas
A set of propositional variables and clauses over them, e.g., (x1 + x2' + x3) and (x2 + x1' + x4).
x1, x2, x3, x4 are variables (true or false).
Literals: a variable or its negation, e.g., x1 and x1'.
A clause is satisfied if one of its literals is true: x1=true satisfies clause 1; x1=false satisfies clause 2.
Solution: an assignment that satisfies all clauses.

SAT solvers (given 10 minutes of time)
DPLL (1962): able to solve 10-15 variable problems.
Satz (Chu Min Li, 1995): able to solve some 1000-variable problems.
Chaff (Malik et al., 2001): intelligently hacked DPLL; won the 2004 competition; able to solve some 10000-variable problems.
Current state of the art: MiniSat and SatELiteGTI (Chalmers University, 2004-2006); Jerusat and Haifasat (Intel Haifa, 2002); Ace (UCLA, 2004-2006).

DPLL example
Branch on p (p=T and p=F) and simplify the three clauses under each assignment; unit clauses trigger further simplification, and deriving the empty clause {} signals a conflict. [Branching-tree diagram in the original slides.]
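The same procedure as a small recursive Python sketch (a textbook DPLL, not any particular solver's code; clauses are lists of signed integers):

    def simplify(clauses, assignment):
        # Drop satisfied clauses and falsified literals; None on conflict.
        out = []
        for c in clauses:
            nc, satisfied = [], False
            for lit in c:
                val = assignment.get(abs(lit))
                if val is None:
                    nc.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not nc:
                return None               # empty clause: conflict
            out.append(nc)
        return out

    def dpll(clauses, assignment=None):
        assignment = assignment or {}
        clauses = simplify(clauses, assignment)
        if clauses is None:
            return None                   # conflict on this branch
        if not clauses:
            return assignment             # all clauses satisfied
        for c in clauses:                 # unit propagation
            if len(c) == 1:
                lit = c[0]
                return dpll(clauses, {**assignment, abs(lit): lit > 0})
        var = abs(clauses[0][0])          # branch on an unassigned variable
        for value in (True, False):
            result = dpll(clauses, {**assignment, var: value})
            if result is not None:
                return result
        return None

    print(dpll([[1, -2], [2, 3], [-1, -3]]))  # e.g. {1: True, 3: False, 2: True}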

DPLL algorithm as seen by a SAT solver:

    while (1) {
        if (decide_next_branch()) {           // 1. Branching
            while (deduce() == conflict) {    // 2. Deducing
                blevel = analyze_conflicts(); // 3. Learning
                if (blevel < 0)
                    return UNSAT;
                else
                    backtrack(blevel);        // 4. Backtracking
            }
        } else
            return SAT;   // no branch variable left: formula satisfied
    }

Chaff implementation: the same loop as above, where analyze_conflicts() and backtrack() implement conflict-directed backjumping plus learning.

Learning
Adding information about the instance into the solution process without changing the satisfiability of the problem; in the CNF representation this is accomplished by adding clauses to the clause database. Knowledge of failure in one part of the space may help search in other parts. Learning is very effective at pruning the search space for structured problems, but of limited use for random instances. Why? Still an open question.

Chaff implementation: in the same loop, deduce() performs Boolean constraint propagation, the main cost factor.

Naive implementation of deduce(), i.e., unit propagation
Check every clause after an assignment is made and reduce it if possible; repeat whenever a unit clause is generated (an implication). After backtracking, revert all clauses to their original form. Very slow: a solver would spend 85-90% of its time doing unit propagation. Why not speed it up?
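The naive scheme in Python, to make the cost concrete (a sketch matching the description above, not Chaff's code):

    def unit_propagate(clauses, assignment):
        # Rescan every clause to fixpoint; return 'conflict' or the
        # extended assignment. Watched-literal schemes exist precisely
        # to avoid these full rescans.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    val = assignment.get(abs(lit))
                    if val is None:
                        unassigned.append(lit)
                    elif (lit > 0) == val:
                        satisfied = True
                        break
                if satisfied:
                    continue
                if not unassigned:
                    return 'conflict'         # clause fully falsified
                if len(unassigned) == 1:      # unit clause: implication
                    lit = unassigned[0]
                    assignment[abs(lit)] = lit > 0
                    changed = True
        return assignment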

Chaff implementation: in the same loop, decide_next_branch() applies the variable-ordering heuristics.

Other issues: clause deletion
Learned clauses slow down BCP and eat up memory, so delete clauses periodically; various heuristics exist for this purpose. (Chaff won the 2004 SAT competition.)

But Chaff is no longer state of the art
More engineering: MiniSat (winner of SAT-Race 2006) is based on Chaff but is a better, faster implementation; it adds some new ideas such as conflict-clause minimization, but is basically the same as Chaff.

Benchmarks: random, crafted, industrial.

Stochastic greedy local search Chapter 7

Example: the 8-queens problem

Main elements
Choose a full assignment and iteratively improve it towards a solution. Requires a cost function: the number of unsatisfied constraints or clauses (neural networks use energy minimization). Drawback: local minima. Remedy: introduce a random element. Cannot decide inconsistency.

Algorithm Stochastic Local search (SLS)
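A compact Python sketch of the SLS loop just named (cost(a) is an assumed callback returning the number of violated constraints; recomputing it per candidate move is wasteful but keeps the sketch short):

    import random

    def sls(variables, domains, cost, max_tries=10, max_flips=1000, rng=random):
        for _ in range(max_tries):            # random restarts
            a = {x: rng.choice(domains[x]) for x in variables}
            for _ in range(max_flips):
                if cost(a) == 0:
                    return a                  # solution found
                # greedy step: best single variable/value change
                x, v = min(((x, v) for x in variables for v in domains[x]),
                           key=lambda xv: cost({**a, xv[0]: xv[1]}))
                if cost({**a, x: v}) >= cost(a):
                    break                     # local minimum: restart
                a[x] = v
        return None   # gives up; SLS cannot conclude inconsistency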

Examples: a CNF formula; z divides x, y, t, with domains z = {2,3,5}, x, y = {2,3,4}, t = {2,5,6}.

Heuristics for improving local search
Plateau search: at local minima, continue searching sideways.
Constraint weighting: use a weighted cost function in which C_i is 1 if constraint i is violated and 0 otherwise; at local minima, increase the weights of the violated constraints.
Tabu search: prevent backward moves by keeping a list of recently assigned variable-value pairs; tie-breaking may be conditioned on history, e.g., select the value that was flipped least recently.
Automating max-flips: based on experimenting with a class of problems; given progress in the cost function, allow as many further flips as were used to reach the current progress.

Random walk strategies
Combine random walk with greediness. At each step, choose an unsatisfied clause at random; with probability p flip a random variable in the clause, and with probability (1-p) take a greedy step, minimizing the break value: the number of clauses that become newly unsatisfied.

Figure 7.2: Algorithm WalkSAT
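Since the figure itself is not reproduced here, a WalkSAT-style sketch in Python (details such as tie-breaking vary across presentations; this follows the random-walk/greedy mix described above):

    import random

    def walksat(clauses, n_vars, p=0.5, max_flips=100000, rng=random):
        a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        sat = lambda c: any((lit > 0) == a[abs(lit)] for lit in c)

        def break_value(v):
            # Clauses satisfied now that a flip of v would falsify.
            before = [sat(c) for c in clauses]
            a[v] = not a[v]
            after = [sat(c) for c in clauses]
            a[v] = not a[v]
            return sum(b and not x for b, x in zip(before, after))

        for _ in range(max_flips):
            unsat = [c for c in clauses if not sat(c)]
            if not unsat:
                return a                      # satisfying assignment
            clause = rng.choice(unsat)
            if rng.random() < p:              # random-walk step
                v = abs(rng.choice(clause))
            else:                             # greedy step
                v = min({abs(lit) for lit in clause}, key=break_value)
            a[v] = not a[v]
        return None                           # budget exhausted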

Example of WalkSAT: start with the assignment of true to all variables.

Properties of local search
Guaranteed to terminate, though possibly at a local minimum. Random walk on a satisfiable 2-SAT instance converges with probability 1 within O(N^2) expected steps, where N is the number of variables. Proof sketch: a random assignment is on average N/2 flips away from some satisfying assignment; flipping a variable of an unsatisfied 2-clause reduces the distance to that assignment by 1 with probability at least 1/2, and such a random walk covers the distance in O(N^2) steps on average. The analysis breaks down for 3-SAT. Empirical evaluation shows good performance compared with complete algorithms (see the chapter and numerous papers).
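The 2-SAT claim is easy to watch directly with a small simulation of the pure random walk it analyzes (a sketch in the style of Papadimitriou's algorithm, assuming a satisfiable instance):

    import random

    def random_walk_2sat(clauses, n_vars, rng=random):
        # Flip a random variable of a random unsatisfied clause;
        # expected O(n_vars**2) flips on satisfiable 2-SAT.
        max_flips = 100 * n_vars * n_vars
        a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        sat = lambda c: any((lit > 0) == a[abs(lit)] for lit in c)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not sat(c)]
            if not unsat:
                return a
            v = abs(rng.choice(rng.choice(unsat)))
            a[v] = not a[v]
        return None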