
Dynamic Backtracking for SAT Presented by: Phil Oertel April 6, 2004.


1 Dynamic Backtracking for SAT Presented by: Phil Oertel April 6, 2004

2 Introduction / Problem Definition Boolean formula: f(v1,v2,v3,…) –Find an assignment to the variables such that f(t1,t2,t3,…) = T –Or prove that none exists: f(v1,v2,v3,…) = F always –Formula given in CNF. Running example: –f(v1,v2,v3) = (v1 ∨ v2 ∨ ¬v3) ∧ (v1 ∨ ¬v2) ∧ ¬v1 –This syntax is nicer: a ‘database’ of ‘clauses’: (1 2 -3) (1 -2) (-1) –This example is satisfiable with 1=F, 2=F, 3=F –Add the clause (1 2 3) and it becomes unsatisfiable
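The clause-database syntax can be evaluated directly. A minimal sketch (function names are illustrative) using DIMACS-style literals, where a positive literal wants its variable True and a negative one wants it False:

```python
def clause_satisfied(clause, assignment):
    """A clause (list of literals) is satisfied if any literal agrees with
    the assignment: lit > 0 wants True, lit < 0 wants False."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def formula_satisfied(clauses, assignment):
    """CNF: every clause must be satisfied."""
    return all(clause_satisfied(c, assignment) for c in clauses)

# The running example: (1 2 -3) (1 -2) (-1)
clauses = [[1, 2, -3], [1, -2], [-1]]
all_false = {1: False, 2: False, 3: False}
print(formula_satisfied(clauses, all_false))                # True
print(formula_satisfied(clauses + [[1, 2, 3]], all_false))  # False: (1 2 3) is violated
```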

3 GSAT and Dynamic Backtracking (Ginsberg and McAllester, KR-94) Combines local, nonsystematic techniques (GSAT) with systematic DPLL-based techniques –Nonsystematic techniques are empirically effective but can become stuck in local minima and are not guaranteed to search the entire space. –Systematic methods are complete but are constrained to making sub-optimal decisions.

4 GSAT: A local search technique Begin with a random assignment of values to variables while (!sat) –flip the variable that results in the greatest decrease in the number of unsatisfied clauses To overcome local minima, the procedure periodically restarts with a new random assignment
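The loop above can be sketched as follows. This is a minimal sketch, not the original GSAT implementation: flip scores are recomputed naively, and `max_flips`/`max_restarts` are illustrative parameters.

```python
import random

def unsat_count(clauses, a):
    """Number of clauses with no true literal under assignment a."""
    return sum(not any(a[abs(l)] == (l > 0) for l in c) for c in clauses)

def gsat(clauses, n_vars, max_flips=100, max_restarts=25, seed=0):
    """GSAT sketch: greedy flips from a random start, with periodic restarts.
    Incomplete: returning None does NOT prove unsatisfiability."""
    rng = random.Random(seed)
    for _ in range(max_restarts):
        a = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if unsat_count(clauses, a) == 0:
                return a
            def score(v):
                # unsat count if v were flipped
                a[v] = not a[v]
                s = unsat_count(clauses, a)
                a[v] = not a[v]
                return s
            best = min(a, key=score)   # greedy: fewest unsatisfied clauses
            a[best] = not a[best]
    return None

model = gsat([[1, 2, -3], [1, -2], [-1]], 3)
print(model)
```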

5 Problems with GSAT Can get “stuck” in local minima: (1 -2) (1 -3) (-1) Assume our initial assignment is {1=T, 2=T, 3=T}; the unique satisfying assignment is {1=F, 2=F, 3=F}. The flip needed to reach it, 1=F, decreases the number of satisfied clauses, so greedy GSAT will not take it. Because GSAT is not complete, we cannot say for certain whether an instance is unsatisfiable Does not perform forward propagation –Note that propagation in the form of unit resolution would have solved this example easily

6 Dependency-Directed Backtracking To make GSAT complete, we must record which portions of the state space have been searched and never revisit them. Recording this information is known as dependency-directed backtracking and permits the arbitrary movement GSAT requires. Problem: the recorded information grows linearly with the size of the searched state space, which is exponential in the number of variables –Result: exponential memory usage! This approach is therefore impractical

7 A Compromise: Dynamic Backtracking Simple DPLL-based searches use a fixed search order, which itself records what has been searched –Since all of this information is encoded in the search process, no database is required With an arbitrary search order, no information is recorded in the search process By recording some history in a database and some in the search process, can we have freedom of movement with polynomial memory usage?

8 Constraints and Nogoods A CSP is usually represented in CNF: a conjunction of allowable combinations We will use a slightly different form: a conjunction of combinations not allowed In the case of SAT, this is a simple application of De Morgan’s laws
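A sketch of the De Morgan step for the SAT case: a clause is violated exactly when every one of its literals is false, so each clause corresponds to one forbidden partial assignment (`clause_to_nogood` is an illustrative name):

```python
def clause_to_nogood(clause):
    """By De Morgan, (l1 ∨ l2 ∨ ...) fails exactly when every literal is
    false, so the clause forbids the combination {var: opposite of wanted}."""
    return {abs(l): not (l > 0) for l in clause}

# (1 2 -3) is violated precisely by 1=F, 2=F, 3=T
print(clause_to_nogood([1, 2, -3]))  # {1: False, 2: False, 3: True}
```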

9 Constraints and Nogoods Once our constraints are expressed as negated conjunctions, we can make the following transformation: ¬(x=a ∧ y=b ∧ z=c) – OR – (x=a ∧ y=b) → z≠c We will refer to a sentence of the second form, an implication whose conclusion forbids a single value, as a nogood.

10 Constraints and Nogoods When a set of assignments is found to be unsatisfiable, we can express that knowledge as a constraint. The next step in a local search algorithm will be to try to satisfy the new constraint by “flipping” one variable. We can express that by putting the flipped variable in the conclusion of the nogood. Example: {x=a, y=b, z=c} is found to be unsatisfiable. Say we decide to flip z for our next assignment. The following nogood encodes this decision: (x=a ∧ y=b) → z≠c

11 Resolving Nogoods New nogoods are produced by checking assignments (as in the previous example) and by resolution. Say we have derived the following: (x=a) → u≠v1 (y=b) → u≠v2 (z=c) → u≠v3 Assuming v1, v2, and v3 are the only values in u’s domain, then there is no solution with x=a ∧ y=b ∧ z=c
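This resolution step can be sketched in code. The nogood representation (an antecedent dict plus a forbidden variable/value conclusion) and the symbol names are illustrative, chosen to match the derivation above:

```python
def resolve_on(nogoods, var, domain):
    """Given nogoods (antecedent, (var, forbidden_value)) whose conclusions
    together rule out every value in var's domain, resolve them: the union
    of their antecedents can appear in no solution, i.e., it is a new nogood."""
    forbidden = {concl[1] for _, concl in nogoods}
    assert forbidden == set(domain), "conclusions must cover the whole domain"
    merged = {}
    for antecedent, _ in nogoods:
        merged.update(antecedent)
    return merged

# (x=a) -> u != v1, (y=b) -> u != v2, (z=c) -> u != v3
ngs = [({'x': 'a'}, ('u', 'v1')),
       ({'y': 'b'}, ('u', 'v2')),
       ({'z': 'c'}, ('u', 'v3'))]
print(resolve_on(ngs, 'u', ['v1', 'v2', 'v3']))  # {'x': 'a', 'y': 'b', 'z': 'c'}
```

Flipping z then turns the merged set into the directed nogood (x=a ∧ y=b) → z≠c.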

12 Resolving Nogoods New nogoods are produced by checking assignments (as in the previous example) and by resolution. Say we have derived that there is no solution with x=a ∧ y=b ∧ z=c. We choose arbitrarily to flip the value of z, resulting in the following nogood: (x=a ∧ y=b) → z≠c

13 Resolving Nogoods We chose arbitrarily to flip the value of z, resulting in the nogood (x=a ∧ y=b) → z≠c. Now the earlier nogood (z=c) → u≠v3 can be dropped!

14 Removing Nogoods Why were we able to drop that nogood? Because we only keep nogoods whose antecedents are consistent with the current state Once we flipped the value of z, z=c became inconsistent with our state. By resolving and removing, we can keep space requirements down to O(n²v) –n is the number of variables –v is the size of the largest domain of any one variable

15 Variable Ordering The original implementation used a fixed variable ordering to ensure completeness Partial-order dynamic backtracking allows dynamic reordering, as long as an acyclic partial ordering is respected Yet another extension allows cycles in the partial ordering while maintaining polynomial space usage Due to time limitations, variable orderings are outside the scope of this presentation

16 Chaff: An Efficient SAT Solver Matthew Moskewicz, Conor Madigan, Ying Zhao, Lintao Zhang, Sharad Malik Published in DAC 2001

17 Boolean Constraint Propagation A clause containing a single literal is implied –Like (-1) in our example –We must make the literal true, so assign 1=F –If any sat assignment exists, it must contain 1=F –Now with 1=F, we can simplify the problem: Clauses where ‘-1’ appears are sat, so remove them: –Since 1+x+y+…=1 and f=1&c&d&…=c&d&… In clauses where ‘1’ appears, remove the ‘1’: –Since 0+x=x –Repeat until conflict or no more unit clauses Conflict means problem is unsatisfiable Empty database means problem is satisfied by our assignments
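The simplification rules above can be sketched as a small unit-propagation loop (illustrative code, not Chaff's implementation):

```python
def unit_propagate(clauses):
    """Apply the two rules until fixpoint: a satisfied clause is removed,
    a falsified literal is deleted from its clause. Returns (assignment,
    clauses); clauses is None on conflict, [] when the database empties (sat)."""
    assignment = {}
    clauses = [list(c) for c in clauses]
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            return assignment, clauses
        assignment[abs(unit)] = unit > 0     # the unit literal must be true
        new = []
        for c in clauses:
            if unit in c:                    # clause satisfied: remove it
                continue
            reduced = [l for l in c if l != -unit]
            if not reduced:                  # all literals false: conflict
                return assignment, None
            new.append(reduced)
        clauses = new

# Running example: (-1) forces 1=F, then (1 -2) forces 2=F, then 3=F.
print(unit_propagate([[1, 2, -3], [1, -2], [-1]]))
# Adding (1 2 3) leads propagation straight to a conflict: unsat.
print(unit_propagate([[1, 2, -3], [1, -2], [-1], [1, 2, 3]]))
```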

18 Decision What if BCP terminates with no conflict and a non-empty database? –The problem’s status is still unknown Solution: ‘split’ formula P on some variable vi: –Use some heuristic to pick the variable P => P[vi = 0] + P[vi = 1] If both sub-problems are unsat, P is unsat If either sub-problem is sat, P is sat Of course, we may need to split the sub-problems again.
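The split rule can be sketched as a tiny recursive solver. This is just the split (no BCP, no heuristic: it branches on the first variable it sees), a sketch rather than Chaff's actual search:

```python
def simplify(clauses, lit):
    """Assign lit True: drop satisfied clauses, shrink the rest.
    Returns None if some clause is emptied (conflict)."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses):
    """P is sat iff P[v=1] is sat or P[v=0] is sat."""
    if clauses is None:
        return False
    if not clauses:
        return True
    v = abs(clauses[0][0])                 # naive pick: first variable seen
    return dpll(simplify(clauses, v)) or dpll(simplify(clauses, -v))

print(dpll([[1, 2, -3], [1, -2], [-1]]))             # True
print(dpll([[1, 2, -3], [1, -2], [-1], [1, 2, 3]]))  # False
```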

19 Chaff Philosophy Chaff’s implementation was guided by profiling: –For a given problem, what dominates run time? –Make that part faster, then repeat. Guiding Implementation Principle: Laziness. –Delay computation as long as possible – but don’t violate the *semantics* of the algorithm –Do the common case fast – even if this violates the semantics. ‘Patch up’ semantics for uncommon cases.

20 Chaff contributions BCP algorithm – “2 literal watching” –Very important, not too complicated Decision heuristic – VSIDS –Simple algorithm, mostly heuristic BCP/Conflict analysis semantics –Only touched on here

21 BCP Algorithm (1/7) What ‘causes’ an implication? When can it occur? –All but one of the literals in a clause are assigned to 0 –For a clause of N literals, this can only occur after N-1 assignments –So, (theoretically) we could completely ignore the first N-2 assignments to this clause –In reality, pick 2 literals in each clause to ‘watch’ –This guarantees we catch the (N-1)th assignment without watching every literal

22 BCP Algorithm (2/7) So, for our example, here’s the initial state: Conceptual convention: –The first two literals shown in each clause are the watched ones –Thus, changing which literals are watched is shown by reordering the literals in a clause (1 2 -3) (1 -2) (-1) – the unit clause has only one literal; it is handled as a special case (1 2 3 4 5) – this clause is added to make the example more interesting No assignments yet, but V[1] = 0 is pending

23 BCP Algorithm (3/7) So, let’s process the assignment V[1]=0 –We only process watched literals set to 0 (-3 2 1) (1 -2) (3 2 1 4 5) The first and last clauses are the ‘simple’ case: we just replace the newly assigned literal with some other unassigned literal in the clause. The clause (1 -2) is implied by this assignment: we don’t need to rearrange the watched literals, but we do create a new pending assignment. A = {V[1] = 0}@DL0 P = {V[2] = 0}@DL0 DL0 means “decision level 0”, the bottom of the decision stack.

24 BCP Algorithm (4/7) Now we do V[2]=0 (-3 2 1) (1 -2) (3 4 1 2 5) The last clause is still the simple case. The clause (1 -2) is sat by this assignment; we *do not even look at it*, since neither watched literal is assigned to zero! The clause (-3 2 1) is implied, with no reordering needed. A = {V[1] = 0, V[2] = 0}@DL0 P = {V[3] = 0}@DL0

25 BCP Algorithm (5/7) V[3] = 0 (-3 2 1) (1 -2) (5 4 1 2 3) The last clause is still the simple case; there is no reason to even think of looking at the first two. A = {V[1] = 0, V[2] = 0, V[3] = 0}@DL0 P = {}

26 What now? BCP has terminated (no assignments pending) So make a decision to split the problem: (-3 2 1) (1 -2) (5 4 1 2 3) A = {V[1] = 0, V[2] = 0, V[3] = 0}@DL0 P = {V[4] = 0}@DL1 If there were no unsatisfied clauses, the problem would be sat. 4 and 5 are unassigned, so decide on one of them. Let’s choose V[4]=0 … it looks like an implication, but it’s on a higher ‘decision level’:

27 BCP Algorithm (6/7) V[4] = 0: (-3 2 1) (1 -2) (5 4 1 2 3) The last clause is implied; there is no reason to even think of looking at the first two. A = {V[1] = 0, V[2] = 0, V[3] = 0}@DL0 {V[4] = 0}@DL1 P = {V[5] = 1}@DL1 New implications get the ‘decision level’ of the assignment that triggered them, in this case V[4]=0@DL1. Note the following property: both watched literals of an implied clause are on the *highest* DL present in the clause.

28 BCP Algorithm (7/7) V[5] = 1: (-3 2 1) (1 -2) (5 4 1 2 3) A = {V[1] = 0, V[2] = 0, V[3] = 0}@DL0 {V[4] = 0, V[5] = 1}@DL1 P = {} There is no reason to even think of looking at any clause. BCP terminates without conflict again, so we try to make a decision. But there are no unassigned variables left! So the problem is sat by the current (complete) set of assignments
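The whole walkthrough can be sketched in code. This is an illustrative two-literal-watching engine, not Chaff's actual implementation; unit clauses are assumed to be special-cased upstream (as on slide 22), so the seed assignment V[1]=0 from the unit clause (-1) is passed in directly:

```python
from collections import deque

class WatchedBCP:
    """Two-literal watching: each clause watches its first two literals,
    and is only visited when one of its watched literals becomes false."""
    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        assert all(len(c) >= 2 for c in self.clauses), "unit clauses handled upstream"
        self.val = {}                       # var -> assigned bool
        self.watch = {}                     # literal -> indices of clauses watching it
        for i, c in enumerate(self.clauses):
            for lit in c[:2]:
                self.watch.setdefault(lit, []).append(i)

    def value(self, lit):
        v = self.val.get(abs(lit))
        return None if v is None else v == (lit > 0)

    def propagate(self, var, value):
        """Assign var=value, then run BCP. Returns the list of assignments
        made (in order), or None on a conflict."""
        queue, made = deque([(var, value)]), []
        while queue:
            var, value = queue.popleft()
            if var in self.val:
                if self.val[var] != value:  # pending in both polarities
                    return None
                continue
            self.val[var] = value
            made.append((var, value))
            false_lit = -var if value else var
            for i in list(self.watch.get(false_lit, [])):
                c = self.clauses[i]
                for j in range(2, len(c)):  # simple case: move the watch
                    if self.value(c[j]) is not False:
                        k = c.index(false_lit)
                        c[k], c[j] = c[j], c[k]   # reorder to show new watches
                        self.watch[false_lit].remove(i)
                        self.watch.setdefault(c[k], []).append(i)
                        break
                else:                        # no replacement: implied or conflict
                    other = c[0] if c[1] == false_lit else c[1]
                    if self.value(other) is False:
                        return None          # both watched literals false
                    if self.value(other) is None:
                        queue.append((abs(other), other > 0))
        return made

bcp = WatchedBCP([[1, 2, -3], [1, -2], [1, 2, 3, 4, 5]])
print(bcp.propagate(1, False))   # [(1, False), (2, False), (3, False)]
print(bcp.clauses)               # [[-3, 2, 1], [1, -2], [5, 4, 1, 2, 3]]
```

The final clause order reproduces the reordering shown on slides 23 through 25, and a problem like slide 30's two-variable example makes `propagate` return None.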

29 Conflict resolution A different example – a 2-variable unsat problem: (1 -2) (-1 2) (1 2) (-1 -2) A = {} P = {} We must make a decision to start things off. Pick V[1] = 0, so now: P = {V[1] = 0}@DL1

30 Conflict resolution -- Detection V[1] = 0 (1 -2) (-1 2) (1 2) (-1 -2) A = {V[1] = 0}@DL1 P = {V[2] = 1, V[2] = 0}@DL1 The clauses (1 2) and (1 -2) are implied. At this point, clearly something is ‘up’: before processing V[2] = 1, the BCP engine will notice that it is pending in both polarities, and exit with a conflict on V[2]

31 Conflict resolution -- Unwind Since the conflicting implications are on DL1, we must unwind (undo) all assignments on DL1 and higher, or else the conflicting assignments will still be pending. In this case, since there are no assignments at lower levels (DL0 would be the only possibility) we unwind all assignments. We will always unwind at least one level! (1 -2) (-1 2) (1 2) A = {} P = {} (-1 -2)

32 Conflict resolution – Clause addition The conflict analysis adds a clause that prevents duplication of the conflict. In this case it happens to be a unit clause, but it is always implied immediately, thus forcing the search away from the conflict. (1 -2) (-1 2) (1 2) (-1 -2) (1) A = {} P = {V[1] = 1}@DL0

33 Conflict resolution – BCP resumes V[1] = 1 (1 -2) (-1 2) (1 2) (-1 -2) A = {V[1] = 1}@DL0 P = {V[2] = 1, V[2] = 0}@DL0 The clauses (-1 2) and (-1 -2) are implied. Again, there is a conflict. But since the conflict is on DL0, which cannot be unwound (assignments on DL0 are not predicated on any decisions), the problem is unsat.

34 Decision Heuristic Variable State Independent Decaying Sums (VSIDS) Each variable in each polarity has a counter, initialized to 0 When a clause is added (or re-added) to the database, increment the counter of each literal in that clause Select the unassigned variable and polarity with highest counter value Periodically divide all counters by a constant
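The four rules above can be sketched directly (class and method names are illustrative; real Chaff also avoids the linear scan in `decide`, as the next slide discusses):

```python
class VSIDS:
    """VSIDS sketch: one counter per literal (variable + polarity), bumped
    when a clause is added, periodically decayed; decisions pick the
    unassigned literal with the highest counter."""
    def __init__(self, n_vars):
        self.count = {l: 0 for v in range(1, n_vars + 1) for l in (v, -v)}

    def on_clause_added(self, clause):
        for lit in clause:
            self.count[lit] += 1

    def decay(self, factor=2):
        # dividing everything favors literals seen in *recent* conflicts
        for lit in self.count:
            self.count[lit] //= factor

    def decide(self, assigned):
        free = [l for l in self.count if abs(l) not in assigned]
        return max(free, key=lambda l: self.count[l]) if free else None

h = VSIDS(3)
for c in [[1, 2, -3], [1, -2], [-1]]:
    h.on_clause_added(c)
print(h.decide(set()))  # literal 1 has the highest count (2), so decide 1=T
```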

35 Decision Heuristic Quasi-static: –Static because it doesn’t depend on variables’ states –Not static because it gradually changes Variables ranked by appearance in recent conflicts –Fast data structures to find unassigned variable with the highest ranking in O(1) time –Why do all this? Even a single linear pass through variables on each decision would dominate run-time Also, seems to work fairly well in terms of # decisions

36 Decision Heuristic Implementation –Keep all unassigned vars sorted by heuristic value Can quickly (O(1)) pick variable at decision time Need to insert and remove variables as they get assigned or unassigned This is O(log(n)) per assignment, but only needs to be done for vars that change state between decisions –Important to do *nothing* during BCP Make sure data structure supports quickly visiting vars that have changed state since last decision
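One common way to get these costs is a heap with lazy invalidation (a sketch with illustrative names, not Chaff's data structure): updates are O(log n) pushes, stale or already-assigned entries are simply skipped at decision time, and BCP itself never touches the structure.

```python
import heapq

class DecisionQueue:
    """Lazy max-heap: scores are pushed negated; entries whose variable is
    assigned, or whose score is stale, are discarded at pop time."""
    def __init__(self):
        self.heap = []
        self.score = {}

    def update(self, var, score):
        self.score[var] = score
        heapq.heappush(self.heap, (-score, var))   # O(log n), no removal needed

    def pick(self, assigned):
        while self.heap:
            neg, var = heapq.heappop(self.heap)
            if var not in assigned and self.score.get(var) == -neg:
                return var                          # first fresh entry wins
        return None

q = DecisionQueue()
for var, s in [(1, 5), (2, 9), (3, 7)]:
    q.update(var, s)
print(q.pick({2}))  # 2 is assigned, so the best unassigned variable: 3
```

Note that `pick` pops destructively; a fuller version would re-insert variables when they become unassigned during unwind.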

37 Interplay of BCP and Decision This is only an intuitive description … –Reality depends heavily on the specific instance Take some variable ranking (from the decision engine) –Assume several decisions are made Say 2=T, 7=F, 9=T, 1=T (and any implications thereof) –Then a conflict is encountered that forces 2=F The next decisions may still be 7=F, 9=T, 1=T ! But the BCP engine has recently processed these assignments … so these variables are unlikely to still be watched. Thus, the BCP engine *inherently does a differential update.* –And the decision heuristic makes differential changes more likely to occur in practice.

38 Clause Deletion Keeps memory usage under 2GB Relevance based is (almost) free: –Simply ‘schedule’ clauses for deletion when they are added, based on the DLs on literals in the clause –Clauses get lazily deleted during unwind Marked as ‘unused’ and ignored in BCP –Memory recovered with a monolithic database compaction operation at infrequent intervals Somewhat of an ugly thing to do, but better than dynamic allocation.

