Predicate Learning and Selective Theory Deduction for Solving Difference Logic
Chao Wang, Aarti Gupta, Malay Ganai. NEC Laboratories America, Princeton, New Jersey, USA. August 21, 2006. Presentation only; for more information, please see [Wang et al. LPAR'05] and [Wang et al. DAC'06].
Difference Logic
A logic for modeling systems at the "word level":
- A subset of quantifier-free first-order logic: Boolean connectives plus predicates of the form (x - y ≤ c)
- Formal verification applications: pipelined processors, timed systems, embedded software (e.g., the back end of the UCLID verifier)
- Existing solvers:
  - Eager approach: [Strichman et al. 02], [Talupur et al. 04], UCLID
  - Lazy approach: TSAT++, MathSAT, DPLL(T), Saten, SLICE, Yices, HTP, ...
  - Hybrid approach: [Seshia et al. 03], UCLID, SD-SAT
Our contribution
Lessons learned from previous works:
- Incremental conflict detection and zero-cost theory backtracking [Wang et al. LPAR'05]
- Exhaustive theory deduction [Nieuwenhuis & Oliveras CAV'05]
- Eager chordal transitivity constraints [Strichman et al. FMCAD'02]
What's new?
- Incremental conflict detection PLUS selective theory deduction, at little additional cost
- Dynamic predicate learning to combat exponential blow-up
Outline
- Preliminaries
- Selective theory (implication) deduction
- Dynamic predicate learning
- Experiments
- Conclusions
Preliminaries
- A difference logic formula is a Boolean skeleton over difference predicates
- Difference predicates for the assignment (A, ¬B, C, D):
  A: (x - y ≤ 2)
  ¬B: (z - x ≤ -7)
  C: (y - z ≤ 3)
  D: (w - y ≤ 10)
- Constraint graph for the assignment (A, ¬B, C, D): nodes x, y, z, w with weighted edges A:2, ¬B:-7, C:3, D:10 [figure]
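A minimal sketch of this construction (the types and names here are ours, not the SLICE implementation): each asserted literal of the form (x - y ≤ c) becomes one weighted edge of the constraint graph. The orientation convention (x -> y versus y -> x) varies between presentations; what matters is that it is applied consistently.

#include <map>
#include <string>
#include <vector>

// Hypothetical helper types for building the constraint graph.
struct Edge { int src, dst, weight; };

struct ConstraintGraph {
    std::map<std::string, int> ids;   // variable name -> node id
    std::vector<Edge> edges;

    int node(const std::string& name) {
        auto it = ids.find(name);
        if (it != ids.end()) return it->second;
        int id = static_cast<int>(ids.size());
        ids[name] = id;
        return id;
    }

    // Assert the literal (x - y <= c) as the edge x -> y with weight c.
    void assert_le(const std::string& x, const std::string& y, int c) {
        edges.push_back({node(x), node(y), c});
    }
};

int main() {
    // The slide's assignment (A, ¬B, C, D):
    ConstraintGraph g;
    g.assert_le("x", "y", 2);    // A:  x - y <= 2
    g.assert_le("z", "x", -7);   // ¬B: z - x <= -7
    g.assert_le("y", "z", 3);    // C:  y - z <= 3
    g.assert_le("w", "y", 10);   // D:  w - y <= 10
    return 0;
}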
Theory conflict: infeasible Boolean assignment
- A negative-weight cycle in the constraint graph is a theory conflict: the cycle formed by A:2, C:3, and ¬B:-7 has weight 2 + 3 - 7 = -2 < 0, so A ∧ ¬B ∧ C is infeasible
- The theory conflict yields a lemma (blocking clause), which becomes a Boolean conflict
- Lemma learned: (¬A + B + ¬C); under the current assignment it evaluates to the conflicting clause (false + false + false)
[Figure: the constraint graph with the negative cycle A:2, C:3, ¬B:-7 highlighted]
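For contrast with the incremental algorithm shown on a later slide, here is a from-scratch negative-cycle check over the whole edge set, written as a standard Bellman-Ford sweep (a sketch only; the solvers discussed here do not recompute from scratch like this):

#include <vector>

struct Edge { int src, dst, weight; };

// Returns true iff the asserted difference constraints are jointly
// infeasible, i.e. the constraint graph contains a negative-weight cycle.
bool has_negative_cycle(int num_nodes, const std::vector<Edge>& edges) {
    std::vector<long long> d(num_nodes, 0);   // virtual source: all zeros
    // After num_nodes - 1 rounds, distances are stable unless a
    // negative-weight cycle exists.
    for (int round = 0; round < num_nodes - 1; ++round)
        for (const Edge& e : edges)
            if (d[e.dst] > d[e.src] + e.weight)
                d[e.dst] = d[e.src] + e.weight;
    // One more pass: any further improvement proves a negative cycle.
    for (const Edge& e : edges)
        if (d[e.dst] > d[e.src] + e.weight)
            return true;
    return false;
}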
Theory implication: implied Boolean assignment
- If asserting an edge would create a negative cycle, the negation of the corresponding predicate is implied
- A theory implication is a variable assignment, which in turn triggers a series of Boolean implications (BCP)
- Example: A ∧ ¬B → ¬C, because adding the edge C:3 to A:2 and ¬B:-7 would close a cycle of weight -2
[Figure: the constraint graph with the candidate edge C:3 shown alongside A:2, ¬B:-7, D:10]
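A sketch of this implication test (assuming, as at this point in the search, that the currently asserted edges contain no negative cycle): an unassigned predicate whose edge is u -> v with weight w would close a negative cycle exactly when some existing path v ->* u has weight W with W + w < 0, so the predicate's negation is implied. Here the path weight is found with a plain shortest-path sweep; the paper's point is that such implications can be deduced far more cheaply.

#include <limits>
#include <vector>

struct Edge { int src, dst, weight; };

// Returns true iff asserting the edge u -> v (weight w) would create a
// negative cycle, i.e. iff the negation of its predicate is implied.
// Precondition: the currently asserted edges have no negative cycle.
bool negation_is_implied(int num_nodes, const std::vector<Edge>& edges,
                         int u, int v, long long w) {
    const long long INF = std::numeric_limits<long long>::max() / 4;
    std::vector<long long> dist(num_nodes, INF);
    dist[v] = 0;                               // shortest paths from v
    for (int round = 0; round < num_nodes - 1; ++round)
        for (const Edge& e : edges)
            if (dist[e.src] < INF && dist[e.dst] > dist[e.src] + e.weight)
                dist[e.dst] = dist[e.src] + e.weight;
    // Is there a cycle v ->* u -> v with total weight < 0?
    return dist[u] < INF && dist[u] + w < 0;
}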
Negative cycle detection
- Called repeatedly to solve many similar subproblems
- For conflict detection: can be made incremental and efficient
- For implication deduction: often expensive
- Incremental detection versus exhaustive deduction:
  - SLICE [LPAR'05]: incremental cycle detection, O(n log n)
  - DPLL(T) - Barcelogic [CAV'05]: exhaustive theory deduction, O(n * m)

                          SLICE [LPAR'05]   DPLL(T) - Barcelogic [CAV'05]
  Conflict detection      Incremental       Not incremental
  Implication deduction   None              Exhaustive
Data from [LPAR'05], comparing the SLICE solver on the SMT benchmarks repository against UCLID, ICS 2.0, MathSAT, DPLL(T) - Barcelogic (and again on a linear scale), and TSAT++.
[Figure: scatter plots, one per competing solver; points above the diagonals are wins for the SLICE solver]
From the previous results
We have learned that:
- Incremental conflict detection is more scalable
- Exhaustive theory deduction is also helpful
Can we combine their relative strengths? Our new solution:
- Incremental conflict detection (from SLICE)
- Zero-cost theory backtracking (from SLICE)
- PLUS selective theory deduction, with O(n) cost
Outline
- Preliminaries
- Selective theory (implication) deduction
- Dynamic predicate learning
- Experiments
- Conclusions
Theory Constraint Propagation
Boolean constraint propagation (BCP) interleaved with theory propagation:

Deduce() {
  while (!implications.empty()) {
    set_var_value(implications.pop());
    if (detect_conflict()) return CONFLICT;        // Boolean CP (BCP)
    add_new_implications();
    if (ready_for_theory_propagation()) {
      if (theory_detect_conflict()) return CONFLICT;
      theory_add_new_implications();
    }
  }
  return NO_CONFLICT;
}
Incremental conflict detection
[Ramalingam 1999], [Bozzano et al. 2005], [Cotton 2005], [Wang et al. LPAR'05]
The basic operation is relaxing an edge (u,v):

  if (d[v] > d[u] + w[u,v]) { d[v] = d[u] + w[u,v]; pi[v] = u; }

Each node carries a cost value d[v]. If the destination's cost is larger than the source's cost plus the edge weight, the destination's cost is lowered, and pi[v] records that the change to d[v] was due to the edge from u. Every time a new edge is added, relaxations propagate outward from it; if the addition creates a negative cycle, the new edge itself eventually becomes relaxable again, and this is how a conflict is detected. The algorithm is incremental and therefore significantly cheaper than recomputation. Removing an edge (due to backtracking) triggers no relax operations at all; that is the zero-cost backtracking in SLICE.
[Figure: relaxation trace on the example graph: (z,y) sets d[y]=-4, pi[y]=z; (y,x) sets d[x]=-2, pi[x]=y; (y,w) sets d[w]=6, pi[w]=y; (x,z) sets d[z]=-9, pi[z]=x; relaxing (z,y) again signals CONFLICT]
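A compact sketch of this incremental scheme (illustrative code in the spirit of [Ramalingam 1999] and SLICE, not the authors' implementation). Potentials d[] and parent pointers pi[] persist across calls; adding an edge relaxes only the part of the graph it improves, and a conflict is reported exactly when the relaxation wave started by the new edge (u,v) comes back and would lower d[v] again:

#include <queue>
#include <vector>

struct Edge { int src, dst; long long weight; };

struct IncrementalGraph {
    std::vector<std::vector<Edge>> out;  // out[u] = asserted edges u -> *
    std::vector<long long> d;            // potentials: d[dst] <= d[src] + w holds
    std::vector<int> pi;                 // pi[v] = node whose edge last set d[v]

    explicit IncrementalGraph(int n) : out(n), d(n, 0), pi(n, -1) {}

    // Assert edge u -> v with weight w. Returns true iff this creates a
    // negative cycle (a theory conflict); the caller then backtracks.
    bool add_edge(int u, int v, long long w) {
        out[u].push_back({u, v, w});
        if (d[v] <= d[u] + w) return false;   // potentials still consistent
        d[v] = d[u] + w;                      // initial relax of the new edge
        pi[v] = u;
        std::queue<int> work;
        work.push(v);
        while (!work.empty()) {
            int x = work.front(); work.pop();
            for (const Edge& e : out[x]) {
                if (d[e.dst] > d[x] + e.weight) {
                    // Any new negative cycle must pass through the new
                    // edge, so the wave returning to v proves a conflict.
                    if (e.dst == v) return true;
                    d[e.dst] = d[x] + e.weight;
                    pi[e.dst] = x;
                    work.push(e.dst);
                }
            }
        }
        return false;                         // relaxation settled: consistent
    }
};

On backtracking the solver simply drops the edge: the lowered potentials remain valid upper bounds for the remaining constraints, so nothing needs to be recomputed (the zero-cost backtracking above).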
Selective theory deduction
How do we add the capability of deducing theory implications? By the time incremental cycle detection finishes, two sets are readily available:
- Post(x) = {x, z, ...}: all nodes affected by the addition of the new edge, collected during the relax operations
- Pre(y) = {y, w, ...}: all nodes responsible for the current value of y, found by following the pi[v] fields (e.g., pi[y] = w)
If there is an edge (y,z) with d[z] - d[y] ≤ w[y,z], then edge (y,z) is an implied assignment. The pair-wise edges from Pre(y) to Post(x) can also be considered, giving two choices:
- FWD: Pre(y) = {y}, Post(x) = {x, z, ...}
- Both: Pre(y) = {y, w, ...}, Post(x) = {x, z, ...}
This is significantly cheaper than exhaustive theory deduction: in Barcelogic, the Bellman-Ford algorithm must be called twice to obtain something similar to Post(x) and Pre(y).
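A sketch of the resulting deduction step (names and data layout are ours; the precise conditions are in [Wang et al. DAC'06]). Given the potentials d[], the set post of nodes just improved by the relaxation wave, and the set pre obtained by walking the pi[] pointers, every unassigned predicate edge (a,b) with a in pre, b in post, and d[b] - d[a] ≤ w is queued as a theory implication; FWD-only mode simply shrinks pre to a single node.

#include <unordered_set>
#include <vector>

// A candidate is an unassigned predicate whose edge is a -> b, weight w.
struct Candidate { int a, b; long long w; int predicate_id; };

// Illustrative selective deduction over the sets that the incremental
// detection already produced; cost is linear in the candidates checked.
std::vector<int> deduce_implications(
        const std::vector<long long>& d,
        const std::unordered_set<int>& pre,    // Pre(.): via pi[] chain
        const std::unordered_set<int>& post,   // Post(.): improved nodes
        const std::vector<Candidate>& unassigned) {
    std::vector<int> implied;
    for (const Candidate& c : unassigned)
        if (pre.count(c.a) && post.count(c.b) && d[c.b] - d[c.a] <= c.w)
            implied.push_back(c.predicate_id); // predicate entailed: imply it
    return implied;
}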
Outline
- Preliminaries
- Selective theory (implication) deduction
- Dynamic predicate learning
- Experiments
- Conclusions
Diamonds: O(2^n) negative cycles
Observations:
- With the existing predicates (e1, e2, ...), an exponential number of lemmas is needed
- After adding new predicates (E1, E2, E3) and dummy clauses (E1 + !E1) & (E2 + !E2) & ..., an almost linear number of lemmas suffices
- Eager chordal transitivity constraints were used previously by [Strichman et al. FMCAD'02]
The idea comes from analyzing the diamonds example, which has an exponential number of negative cycles. It was introduced to show that a lazy solver based on adding negative-cycle lemmas gets into trouble; as far as we know, all existing lazy solvers show exponential run time on it. However, if we are allowed to add a few new predicates, essentially shortcuts in the graph, then even a lazy solver can bring the number of negative cycles down to linear.
[Figure: the diamonds example, a chain of diamond-shaped subgraphs closed by an edge of weight -1]
Add new predicates to reduce lemmas
The shortcuts have to be carefully selected: blindly adding all possible shortcuts still gives an exponential worst case. We propose a heuristic algorithm to dynamically and selectively add new predicates. For example, if variables x and y appear very frequently in the learned negative cycles, and they are themselves re-convergence points of the graph, we may decide to add a shortcut between them. A very limited number of new predicates then helps avoid the exponential blow-up.
Heuristics to choose GOOD predicates (shortcuts):
- Nodes that show up frequently in negative cycles
- Nodes that are re-convergence points of the graph
A new predicate E3: x - y ≤ (d[x] - d[y]) is introduced by (conceptually) adding the dummy constraint (E3 + !E3).
Example: with predicates E1: x - y < 5 and E2: y - x < 5, the lemma (!E1 + !E2) is learned.
[Figure: constraint graph over x, z, y, w with the shortcut edge for E3]
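One plausible shape for this heuristic (entirely illustrative: the thresholds, bookkeeping, and names are ours; only the selection criteria come from the slide) is to count how often each node participates in learned negative cycles and to propose a shortcut predicate between two sufficiently frequent nodes:

#include <map>
#include <utility>
#include <vector>

struct PredicateLearner {
    std::map<int, int> cycle_count;   // node -> #negative cycles it appeared in
    int threshold = 8;                // illustrative cutoff, would need tuning

    // Called whenever a negative cycle (theory conflict) is learned.
    void record_negative_cycle(const std::vector<int>& cycle_nodes) {
        for (int n : cycle_nodes) ++cycle_count[n];
    }

    // Propose a pair (x, y) for a new shortcut predicate
    // E: x - y <= d[x] - d[y], added with the tautology (E v !E) so the
    // SAT solver can assume either polarity. A real implementation
    // would also require x and y to be re-convergence points.
    bool choose_shortcut(std::pair<int, int>* out) const {
        int first = -1, second = -1;
        for (const auto& [node, count] : cycle_count) {
            if (count < threshold) continue;
            if (first < 0) { first = node; }
            else { second = node; break; }
        }
        if (second < 0) return false;   // no sufficiently frequent pair yet
        *out = {first, second};
        return true;
    }
};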
Experiments with SLICE+
Implemented on top of SLICE [Wang et al. LPAR'05]. Controlled experiments:
- Theory propagation invocation: per predicate assignment, per BCP, or per full assignment
- Selective theory deduction: no deduction, Fwd-only, or Both-directions
- Dynamic predicate learning: with or without
When to call the theory solver?
On the DTP benchmark suite: per BCP versus per predicate assignment, and per BCP versus per full assignment.
[Figure: two scatter plots; points above the diagonals are wins for per-BCP invocation]
Comparing theory deduction schemes
On the DTP benchmark suite: Fwd-only deduction vs. no deduction (total 660 seconds), and Both-directions vs. no deduction (total 1138 seconds).
[Figure: two scatter plots; points above the diagonals are wins for no deduction]
Comparing dynamic predicate learning
On the diamonds benchmark suite
Comparing dynamic predicate learning
On the DTP benchmark suite: dynamic predicate learning vs. no predicate learning.
[Figure: scatter plot; points above the diagonals are wins for no predicate learning]
Lessons learned
- Timing of theory solver invocation: "after every BCP finishes" gives the best performance
- Selective implication deduction: little added cost, but improves performance significantly
- Dynamic predicate learning: reduces the exponential blow-up in certain examples; in the spirit of "predicate abstraction"
Questions?