Dynamic Backtracking for SAT Presented by: Phil Oertel April 6, 2004

Introduction / Problem Definition
Boolean formula: f(v1,v2,v3, …)
–Find an assignment to the variables such that f(t1,t2,t3, …) = T
–Or prove that none exists: f(v1,v2,v3,…) = F always
–Formula given in CNF. Running example:
–f(v1,v2,v3) = (v1 ∨ v2 ∨ ¬v3) ∧ (v1 ∨ ¬v2) ∧ ¬v1
–This syntax is nicer. A ‘Database’ of ‘clauses’: (1 2 -3) (1 -2) (-1)
–This example is satisfiable with 1=F, 2=F, 3=F
–Add the clause (1 2 3) → it becomes unsatisfiable
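To make the clause-database syntax concrete, here is a minimal sketch (the helper names are illustrative, not from the talk) that encodes each clause as a list of signed integers and checks satisfiability by brute force:

    from itertools import product

    def satisfiable(clauses, num_vars):
        """Brute-force check: try all 2^n assignments (fine for tiny examples)."""
        for bits in product([False, True], repeat=num_vars):
            assign = {i + 1: b for i, b in enumerate(bits)}
            if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
                return assign
        return None

    db = [[1, 2, -3], [1, -2], [-1]]
    print(satisfiable(db, 3))                 # {1: False, 2: False, 3: False}
    print(satisfiable(db + [[1, 2, 3]], 3))   # None -- now unsatisfiable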

GSAT and Dynamic Backtracking (Ginsberg and McAllester, KR-94)
Combines local, nonsystematic techniques (GSAT) with systematic DPLL-based techniques
–Nonsystematic techniques are empirically effective but can become stuck in local minima and are not guaranteed to search the entire space.
–Systematic methods are complete but are constrained to making sub-optimal decisions.

GSAT: A local search technique
Begin with a random assignment of values to variables
while (!sat)
–flip the variable that results in the greatest decrease in the number of unsatisfied clauses
To overcome local minima, the procedure periodically restarts with a new random assignment
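A compact sketch of the GSAT loop just described, using the clause encoding from the previous snippet; max_restarts and max_flips are illustrative tuning parameters, not values from the paper:

    import random

    def num_unsat(clauses, assign):
        return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

    def gsat(clauses, num_vars, max_restarts=10, max_flips=100):
        for _ in range(max_restarts):
            # Restart: fresh random assignment.
            assign = {v: random.choice([False, True]) for v in range(1, num_vars + 1)}
            for _ in range(max_flips):
                if num_unsat(clauses, assign) == 0:
                    return assign              # satisfying assignment found
                def score(v):                  # unsat count if we flipped v
                    assign[v] = not assign[v]
                    s = num_unsat(clauses, assign)
                    assign[v] = not assign[v]
                    return s
                best = min(range(1, num_vars + 1), key=score)
                assign[best] = not assign[best]   # greedy flip (may go sideways or uphill)
        return None  # gave up -- says nothing about unsatisfiability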

Problems with GSAT
Can get “stuck” in local minima. Example: (1 -2) (1 -3) (-1)
–Assume our initial assignment is {1=T, 2=T, 3=T}. The correct assignment is {1=F, 2=F, 3=F}, but the correct flip (1 to F) decreases the number of satisfied clauses, so GSAT will avoid it.
Because GSAT is not complete, we cannot say for certain whether an instance is unsatisfiable
Does not perform forward propagation
–Note that propagation in the form of unit resolution would have solved this example easily

Dependency-Directed Backtracking
To make GSAT complete, we have to record which portions of the state space have been searched and not revisit them.
Recording this information is known as dependency-directed backtracking; it allows the arbitrary movement GSAT requires.
Problem: the recorded information grows linearly with the size of the state space, which is itself exponential in the number of variables
–Result: exponential memory usage! This approach is therefore impractical

A Compromise: Dynamic Backtracking
Simple DPLL-based searches use a fixed search order to record what has been searched
–Since all of this information is encoded in the search process, no database is required
With an arbitrary search order, no information is recorded in the search process
By recording some history in a database and some in the search process, can we have freedom of movement with polynomial memory usage?

Constraints and Nogoods
A CSP is usually represented in CNF: a conjunction of constraints on the allowable combinations of values
We will use a slightly different form: a conjunction of combinations that are not allowed
In the case of SAT, this is a simple application of De Morgan’s laws

Constraints and Nogoods
Once our constraints are expressed as negated conjunctions, we can make the following transformation:
¬(x1 = v1 ∧ x2 = v2 ∧ … ∧ xk = vk)
– OR –
(x1 = v1 ∧ … ∧ xk-1 = vk-1) → xk ≠ vk
We will refer to a sentence of the second form as a nogood.

Constraints and Nogoods
When a set of assignments is found to be unsatisfiable, we can express that knowledge as a constraint.
The next step in a local search algorithm will be to try to satisfy the new constraint by “flipping” one variable. We can express that by putting the flipped variable in the conclusion of the nogood:
Example: {x = a, y = b, z = c} is found to be unsatisfiable. Say we decide to flip z for our next assignment. The following nogood encodes this decision:
(x = a ∧ y = b) → z ≠ c

Resolving Nogoods
New nogoods are produced by checking assignments (as in the previous example) and by resolution. Say we have derived the following:
x = a → u ≠ v1
y = b → u ≠ v2
z = c → u ≠ v3
Assuming v1, v2, and v3 are the only values in u’s domain, then there is no solution with x = a ∧ y = b ∧ z = c.
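A sketch of this domain-exhaustion resolution step. Representing a nogood as (antecedent dict, (variable, forbidden value)) is an assumption made for illustration, not the paper’s data structure:

    def resolve_on(var, domain, nogoods):
        """If the nogoods forbid every value in var's domain, return the union
        of their antecedents: no solution extends that combined assignment."""
        forbidden = {val: ante for ante, (v, val) in nogoods if v == var}
        if set(domain) <= set(forbidden):
            combined = {}
            for ante in forbidden.values():
                combined.update(ante)   # union of the antecedents
            return combined
        return None

    nogoods = [({'x': 'a'}, ('u', 'v1')),
               ({'y': 'b'}, ('u', 'v2')),
               ({'z': 'c'}, ('u', 'v3'))]
    print(resolve_on('u', ['v1', 'v2', 'v3'], nogoods))
    # {'x': 'a', 'y': 'b', 'z': 'c'} -- no solution extends this assignment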

Resolving Nogoods
From this new constraint, we choose arbitrarily to flip the value of z, resulting in the following nogood:
(x = a ∧ y = b) → z ≠ c
Once z is flipped, the old nogood z = c → u ≠ v3 can be dropped!

Removing Nogoods
Why were we able to drop that nogood? Because we only keep nogoods whose antecedents are consistent with the current state
Once we flipped the value of z, z = c became inconsistent with our state.
By resolving and removing, we can keep space requirements down to O(n²v)
–n is the number of variables
–v is the size of the largest domain of any one variable

Variable Ordering
The original implementation used a fixed variable ordering to ensure completeness
Partial-order dynamic backtracking allows dynamic reordering, as long as an acyclic partial ordering is respected
Yet another extension allows cycles in the partial ordering while maintaining polynomial space usage
Due to time limitations, variable orderings are outside the scope of this presentation

Chaff: An Efficient SAT Solver
Matthew Moskewicz, Conor Madigan, Ying Zhao, Lintao Zhang, Sharad Malik
Published in DAC 2001

Boolean Constraint Propagation
A clause containing a single literal is implied
–Like (-1) in our example
–We must make the literal true, so assign 1=F
–If any sat assignment exists, it must contain 1=F
–Now with 1=F, we can simplify the problem:
Clauses where ‘-1’ appears are sat, so remove them:
–Since 1+x+y+… = 1 and f = 1&c&d&… = c&d&…
In clauses where ‘1’ appears, remove the ‘1’:
–Since 0+x = x
Repeat until conflict or no more unit clauses
–Conflict means the problem is unsatisfiable
–An empty database means the problem is satisfied by our assignments
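A sketch of this simplification loop in Python (plain unit propagation by clause rewriting, not Chaff’s watched-literal scheme, which comes later):

    def bcp(clauses):
        clauses = [list(c) for c in clauses]
        assign = {}
        while True:
            units = [c[0] for c in clauses if len(c) == 1]
            if not units:
                return clauses, assign          # no conflict; clauses may remain
            lit = units[0]
            assign[abs(lit)] = lit > 0          # make the unit literal true
            new = []
            for c in clauses:
                if lit in c:
                    continue                    # clause satisfied: remove it
                reduced = [l for l in c if l != -lit]   # drop the false literal
                if not reduced:
                    return None, assign         # empty clause: conflict
                new.append(reduced)
            clauses = new

    print(bcp([[1, 2, -3], [1, -2], [-1]]))
    # ([], {1: False, 2: False, 3: False}) -- empty database: satisfied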

Decision
What if BCP terminates with no conflict and a non-empty database?
–Problem status is still unknown
Solution: ‘split’ formula P on some variable vi:
–Use some heuristic to pick the variable
–P => P[vi = 0] + P[vi = 1]
If both sub-problems are unsat, P is unsat
If either sub-problem is sat, P is sat
Of course, we may need to split the sub-problems again.
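A sketch of the resulting recursion, reusing bcp() from the previous snippet; picking the variable of the first remaining literal stands in for “some heuristic”:

    def dpll(clauses):
        simplified, assign = bcp(clauses)
        if simplified is None:
            return None                    # conflict during BCP: this branch is unsat
        if not simplified:
            return assign                  # empty database: satisfied
        v = abs(simplified[0][0])          # pick a variable to split on
        for lit in (-v, v):                # try vi = 0, then vi = 1
            sub = dpll(simplified + [[lit]])   # force the branch with a unit clause
            if sub is not None:
                return {**assign, **sub}
        return None                        # both sub-problems unsat, so P is unsat

    print(dpll([[1, 2], [-1, 2], [1, -2]]))   # {1: True, 2: True}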

Chaff Philosophy
Chaff’s implementation was guided by profiling:
–For a given problem, what dominates run time?
–Make that part faster, then repeat.
Guiding implementation principle: laziness.
–Delay computation as long as possible, but don’t violate the *semantics* of the algorithm
–Do the common case fast, even if this violates the semantics; ‘patch up’ the semantics for uncommon cases

Chaff Contributions
BCP algorithm – “2-literal watching”
–Very important, not too complicated
Decision heuristic – VSIDS
–Simple algorithm, mostly heuristic
BCP/conflict analysis semantics
–Only touched on here

BCP Algorithm (1/7)
What ‘causes’ an implication? When can it occur?
–All but one of the literals in a clause are assigned to 0
–For a clause of N literals, this can only occur after N-1 assignments.
–So, (theoretically) we could completely ignore the first N-2 assignments to this clause.
–In reality, pick 2 literals in each clause to ‘watch’
–This allows us to guarantee we catch the (N-1)th assignment without watching every one.
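A minimal sketch of two-literal watching along these lines. Clauses are assumed to have at least two literals (unit clauses are handled as a special case, as on the next slide), and the class and method names are illustrative, not Chaff’s:

    class WatchedCNF:
        def __init__(self, clauses):
            self.clauses = [list(c) for c in clauses]
            self.watches = {}                  # literal -> indices of clauses watching it
            self.assign = {}                   # variable -> bool
            for i, c in enumerate(self.clauses):
                for lit in c[:2]:              # watch the first two literals
                    self.watches.setdefault(lit, []).append(i)

        def value(self, lit):
            v = self.assign.get(abs(lit))
            return None if v is None else (v == (lit > 0))

        def propagate(self, lit):
            """Make lit true; return the implied literals, or None on a conflict."""
            self.assign[abs(lit)] = lit > 0
            implied = []
            for i in list(self.watches.get(-lit, [])):  # only clauses watching -lit
                c = self.clauses[i]
                if c[0] == -lit:               # keep the falsified watch in slot 1
                    c[0], c[1] = c[1], c[0]
                for j in range(2, len(c)):     # simple case: move the watch
                    if self.value(c[j]) is not False:
                        c[1], c[j] = c[j], c[1]
                        self.watches[-lit].remove(i)
                        self.watches.setdefault(c[1], []).append(i)
                        break
                else:                          # no replacement watch found
                    if self.value(c[0]) is False:
                        return None            # both watches false: conflict
                    if self.value(c[0]) is None:
                        implied.append(c[0])   # clause is now unit: imply c[0]
            return implied

    # Using the example from the walkthrough below (the unit clause (-1) is
    # handled separately, so V[1] = 0 is fed in directly as propagate(-1)):
    w = WatchedCNF([[1, 2, -3], [1, -2], [1, 2, 3, 4, 5]])
    print(w.propagate(-1))   # [-2] -- clause (1 -2) became unit, implying V[2] = 0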

BCP Algorithm (2/7)
So, for our example, here’s the initial state:
(1 2 -3)
(1 -2)
(-1) – a unit clause has only one literal; handle it as a special case
(1 2 3 4 5) – this clause is added to make the example more interesting
Conceptual convention:
–The first two literals shown in each clause are the watched ones
–Thus, changing which literals are watched is shown by reordering the literals in a clause
No assignments yet, but V[1] = 0 is pending.

BCP Algorithm (3/7)
So, let’s process the assignment V[1] = 0
–We only process watched literals set to 0
(-3 2 1) and (3 2 1 4 5) – these two clauses are the ‘simple’ case: we just replace the newly assigned literal with some other unassigned literal in the clause
(1 -2) – this clause is implied by this assignment. We don’t need to rearrange the watched literals, but we do create a new pending assignment.
A = {V[1] = 0 @ DL0}
P = {V[2] = 0 @ DL0}
DL0 means “decision level 0”, the bottom of the decision stack.

BCP Algorithm (4/7)
Now we do V[2] = 0
(-3 2 1) – implied, no reordering
(1 -2) – this clause is sat by this assignment. We *do not even look at it*, since neither watched literal is assigned to zero!
(3 4 1 2 5) – still the simple case
A = {V[1] = 0, V[2] = 0}
P = {V[3] = 0}

BCP Algorithm (5/7)
V[3] = 0
(-3 2 1) and (1 -2) – no reason to even think of looking at these
(5 4 1 2 3) – still the simple case
A = {V[1] = 0, V[2] = 0, V[3] = 0}
P = {}

What now?
BCP has terminated (no assignments pending), so make a decision to split the problem:
(-3 2 1) (1 -2) (5 4 1 2 3)
A = {V[1] = 0, V[2] = 0, V[3] = 0}
P = {V[4] = 0 @ DL1}
If there were no unsatisfied clauses, the problem would be sat. 4 and 5 are unassigned, so decide on one of them. Let’s choose V[4] = 0 … it looks like an implication, but it’s a decision, on a higher decision level: DL1.

BCP Algorithm (6/7)
V[4] = 0:
(5 4 1 2 3) – implied
(-3 2 1) and (1 -2) – no reason to even think of looking at these
A = {V[1] = 0, V[2] = 0, V[3] = 0 @ DL0; V[4] = 0 @ DL1}
P = {V[5] = 1 @ DL1}
New implications get the decision level of the assignment that triggered them, in this case DL1.
Note the following property: both watched literals of an implied clause are on the *highest* DL present in the clause.

BCP Algorithm (7/7)
V[5] = 1:
(-3 2 1) (1 -2) (5 4 1 2 3) – no reason to even think of looking at these
A = {V[1] = 0, V[2] = 0, V[3] = 0, V[4] = 0, V[5] = 1}
P = {}
BCP terminates without conflict again, so we try to make a decision. But there are no unassigned variables left! So the problem is sat by the current (complete) set of assignments.

Conflict Resolution
Different example – a 2-variable unsat problem:
(1 -2) (-1 2) (1 2) (-1 -2)
A = {}
P = {}
We must make a decision to start things off. Pick V[1] = 0, so now: P = {V[1] = 0 @ DL1}.

Conflict Resolution – Detection
V[1] = 0
(1 -2) – implied: V[2] = 0
(1 2) – implied: V[2] = 1
(-1 2) (-1 -2) – not looked at
A = {V[1] = 0 @ DL1}
P = {V[2] = 1 @ DL1, V[2] = 0 @ DL1}
At this point, clearly something is ‘up.’ Before processing V[2] = 1, the BCP engine will notice that it is pending in both polarities, and exit with a conflict on V[2].

Conflict Resolution – Unwind
Since the conflicting implications are on DL1, we must unwind (undo) all assignments on DL1 and higher, or else the conflicting assignments will still be pending. In this case, since there are no assignments at lower levels (DL0 would be the only possibility), we unwind all assignments. We will always unwind at least one level!
(1 -2) (-1 2) (1 2) (-1 -2)
A = {}
P = {}

Conflict Resolution – Clause Addition
The conflict analysis adds a clause that prevents duplication of the conflict. In this case, it happens to be a unit clause, but it is always implied immediately, thus forcing the search away from the conflict.
(1 -2) (-1 2) (1 2) (-1 -2) (1)
A = {}
P = {V[1] = 1 @ DL0}

Conflict Resolution – BCP Resumes
V[1] = 1
(-1 2) – implied: V[2] = 1
(-1 -2) – implied: V[2] = 0
A = {V[1] = 1 @ DL0}
P = {V[2] = 1 @ DL0, V[2] = 0 @ DL0}
Again, there is a conflict. But since the conflict is on DL0, which cannot be unwound (assignments on DL0 are not predicated on any decisions), the problem is unsat.

Decision Heuristic
Variable State Independent Decaying Sums (VSIDS)
–Each variable in each polarity has a counter, initialized to 0
–When a clause is added (or re-added) to the database, increment the counter of each literal in that clause
–Select the unassigned variable and polarity with the highest counter value
–Periodically divide all counters by a constant
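A sketch of the VSIDS bookkeeping just described; the decay period and divisor are assumed tuning parameters, not values fixed by the paper:

    class VSIDS:
        def __init__(self, num_vars):
            # One counter per literal (each variable in each polarity).
            self.count = {l: 0 for v in range(1, num_vars + 1) for l in (v, -v)}
            self.adds = 0

        def on_clause_added(self, clause, decay_period=256, divisor=2):
            for lit in clause:
                self.count[lit] += 1           # bump each literal in the (re-)added clause
            self.adds += 1
            if self.adds % decay_period == 0:  # periodically divide all counters
                for lit in self.count:
                    self.count[lit] //= divisor

        def decide(self, assigned):
            """Return the unassigned literal with the highest counter, or None."""
            free = [l for l in self.count if abs(l) not in assigned]
            return max(free, key=lambda l: self.count[l]) if free else None

    h = VSIDS(num_vars=5)
    h.on_clause_added([1, 2, -3])
    h.on_clause_added([1, -2])
    print(h.decide(assigned=set()))   # 1 -- literal 1 has the highest count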

Decision Heuristic
Quasi-static:
–Static because it doesn’t depend on variables’ states
–Not static because it gradually changes
Variables are ranked by appearance in recent conflicts
–Fast data structures find the unassigned variable with the highest ranking in O(1) time
–Why do all this? Even a single linear pass through the variables on each decision would dominate run time
–Also, it seems to work fairly well in terms of the number of decisions

Decision Heuristic
Implementation:
–Keep all unassigned vars sorted by heuristic value
–Can quickly (O(1)) pick a variable at decision time
–Need to insert and remove variables as they get assigned or unassigned
–This is O(log n) per assignment, but only needs to be done for vars that change state between decisions
–Important to do *nothing* during BCP
–Make sure the data structure supports quickly visiting vars that have changed state since the last decision

Interplay of BCP and Decision
This is only an intuitive description…
–Reality depends heavily on the specific instance
Take some variable ranking (from the decision engine)
–Assume several decisions are made: say 2=T, 7=F, 9=T, 1=T (and any implications thereof)
–Then a conflict is encountered that forces 2=F
The next decisions may still be 7=F, 9=T, 1=T!
–But the BCP engine has recently processed these assignments… so these variables are unlikely to still be watched. Thus, the BCP engine *inherently does a differential update.*
–And the decision heuristic makes differential changes more likely to occur in practice.

Clause Deletion
Keeps memory usage under 2GB
Relevance-based deletion is (almost) free:
–Simply ‘schedule’ clauses for deletion when they are added, based on the DLs of the literals in the clause
–Clauses get lazily deleted during unwind: marked as ‘unused’ and ignored in BCP
–Memory is recovered with a monolithic database compaction operation at infrequent intervals
Somewhat of an ugly thing to do, but better than dynamic allocation.
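A rough sketch of this scheduling idea. The threshold-based policy below (delete once too many of a clause’s literals are unassigned) and the bookkeeping are assumptions for illustration, not Chaff’s exact policy:

    def schedule(clause, relevance_bound=20):
        # Record a deletion threshold at the moment the learned clause is added.
        return {"lits": clause, "unused": False, "bound": relevance_bound}

    def on_unwind(learned, assign):
        # Lazily mark clauses unused during unwind; BCP skips them, and a later
        # monolithic compaction pass reclaims the memory.
        for c in learned:
            unassigned = sum(1 for l in c["lits"] if abs(l) not in assign)
            if unassigned > c["bound"]:
                c["unused"] = True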