Slide 1: On Solving Presburger and Linear Arithmetic with SAT
Ofer Strichman, Carnegie Mellon University
Slide 2: The decision problem
A Boolean combination of predicates of the form Σi ai·xi ≤ b.
Disjunctive linear arithmetic: the xi are real-valued variables and ai, b are constants.
Quantifier-free Presburger formulas: the xi are integer variables and ai, b are rational constants.
Slide 3: Some Known Techniques
Linear arithmetic (conjunctions only):
- Interior point method (Khachian 1979, Karmarkar 1984) (P)
- Simplex (Dantzig, 1949) (EXP)
- Fourier-Motzkin elimination (2EXP)
- Loop residue (Shostak 1984) (2EXP)
- …
Almost all theorem provers use Fourier-Motzkin elimination (PVS, ICS, SVC, IMPS, …).
Slide 4: Fourier-Motzkin elimination – example
Elimination order: x1, x2, x3
(1) x1 – x2 ≤ 0
(2) x1 – x3 ≤ 0
(3) -x1 + 2x3 + x2 ≤ 0
(4) -x3 ≤ -1
Eliminate x1:
(5) 2x3 ≤ 0 (from 1 and 3)
(6) x2 + x3 ≤ 0 (from 2 and 3)
Eliminate x2, then x3:
(7) 0 ≤ -1 (from 4 and 5)
Contradiction – the system is unsatisfiable!
Slide 5: Fourier-Motzkin elimination (1/2)
A system of conjoined linear inequalities A·x ≤ b, where A is an m×n matrix: m constraints over n variables.
Slide 6: Fourier-Motzkin elimination (2/2)
Eliminating xn: sort the constraints by the sign of xn's coefficient:
- the m1 constraints with ai,n > 0
- the m2 constraints with ai,n < 0
- the constraints with ai,n = 0
Generate a new constraint from each pair taken from the first two sets. The net growth per elimination is m1·m2 – m1 – m2 constraints (m1·m2 new constraints replace the m1 + m2 constraints that mention xn). A sketch of one such step appears below.
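The following is a minimal sketch of one elimination step; the representation of constraints as (coefficients, bound) pairs is my own choice, not from the slides. Constraints with a positive coefficient of the eliminated variable are paired with those with a negative coefficient, and each pair is scaled and summed so that the variable cancels. Running it on the example of slide 4 reproduces the contradiction.

```python
def eliminate(constraints, var):
    """One Fourier-Motzkin step.  constraints: (coeffs, bound) pairs meaning
    sum(coeffs[i] * x[i]) <= bound.  Returns a system without `var`."""
    pos = [c for c in constraints if c[0][var] > 0]
    neg = [c for c in constraints if c[0][var] < 0]
    out = [c for c in constraints if c[0][var] == 0]
    for cp, bp in pos:
        for cn, bn in neg:
            sp, sn = -cn[var], cp[var]          # positive scaling factors
            coeffs = [sp * a + sn * d for a, d in zip(cp, cn)]
            out.append((coeffs, sp * bp + sn * bn))
    return out

# The example of slide 4: x1-x2<=0, x1-x3<=0, -x1+2x3+x2<=0, -x3<=-1
system = [([1, -1, 0], 0), ([1, 0, -1], 0), ([-1, 1, 2], 0), ([0, 0, -1], -1)]
for v in (0, 1, 2):                             # eliminate x1, then x2, then x3
    system = eliminate(system, v)
print(system)  # contains ([0, 0, 0], -2), i.e. 0 <= -2: unsatisfiable,
               # matching the slide's contradiction 0 <= -1 up to scaling
```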
Slide 7: Complexity of Fourier-Motzkin
Worst-case complexity: doubly exponential in the number of variables (each elimination step can roughly square the number of constraints).
So why is it so popular in verification? Because it is efficient for small problems, and in verification we typically solve a large number of small systems of linear inequalities.
The bottleneck: case splitting over the Boolean structure.
Q: Is there an alternative to case splitting?
Slide 8: Boolean Fourier-Motzkin (BFM) (1/2)
1. Normalize the formula:
- Transform to NNF
- Eliminate negations by reversing inequality signs
Example:
¬(x1 – x2 ≤ 0) ∧ x1 – x3 ≤ 0 ∧ ¬(-x1 + 2x3 + x2 ≤ 0 ∧ -x3 ≤ -1)
is normalized to
(x1 – x2 > 0) ∧ x1 – x3 ≤ 0 ∧ (-x1 + 2x3 + x2 > 0 ∨ 1 > x3)
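A small sketch of this normalization step (the formula representation below is my own, not the authors'): negations are pushed to the atoms with De Morgan's laws, and a negated atom such as ¬(t ≤ c) is replaced by the reversed atom t > c, so the normalized formula contains no negations at all.

```python
FLIP = {'<=': '>', '<': '>=', '>=': '<', '>': '<='}

def normalize(f, negate=False):
    """Push negations inward (NNF) and absorb them into the atoms by
    reversing the inequality sign, so the result contains no negations."""
    kind = f[0]
    if kind == 'atom':                      # ('atom', lhs, op, rhs)
        _, lhs, op, rhs = f
        return ('atom', lhs, FLIP[op] if negate else op, rhs)
    if kind == 'not':                       # ('not', g)
        return normalize(f[1], not negate)
    if negate:                              # De Morgan: swap 'and' and 'or'
        kind = 'or' if kind == 'and' else 'and'
    return (kind, normalize(f[1], negate), normalize(f[2], negate))

# not(x1 - x2 <= 0)  is rewritten as  x1 - x2 > 0
print(normalize(('not', ('atom', 'x1 - x2', '<=', 0))))
```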
Slide 9: Boolean Fourier-Motzkin (BFM) (2/2)
φ: x1 – x2 ≤ 0 ∧ x1 – x3 ≤ 0 ∧ (-x1 + 2x3 + x2 ≤ 0 ∨ -x3 ≤ -1)
2. Encode each predicate with a Boolean variable:
φ': e1 ∧ e2 ∧ (e3 ∨ e4)
3. Perform Fourier-Motzkin on the conjunction of all predicates. For example, x1 – x2 ≤ 0 and -x1 + 2x3 + x2 ≤ 0 combine into 2x3 ≤ 0; add the corresponding constraint e1 ∧ e3 → e5 to φ'.
Slide 10: BFM – example
e1: x1 – x2 ≤ 0
e2: x1 – x3 ≤ 0
e3: -x1 + 2x3 + x2 ≤ 0
e4: -x3 ≤ -1
φ': e1 ∧ e2 ∧ (e3 ∨ e4)
Fourier-Motzkin derives:
e5: 2x3 ≤ 0 (from e1, e3), adding e1 ∧ e3 → e5
e6: x2 + x3 ≤ 0 (from e2, e3), adding e2 ∧ e3 → e6
From e4 and e5: 0 ≤ -1, which is false, so add e4 ∧ e5 → false.
The resulting φ' is satisfiable (e.g. e1, e2, e4 true and e3 false), hence φ is satisfiable.
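A sketch of the whole BFM loop under the same simplified representation (the data structures and the handling of trivially false derived constraints are my own choices; the slides only describe the method): Fourier-Motzkin runs over the conjunction of all predicates, every derived constraint gets a fresh Boolean variable, and an implication clause ei ∧ ej → enew is recorded for each combination. The returned clauses, conjoined with the Boolean skeleton φ' (here e1 ∧ e2 ∧ (e3 ∨ e4)), are what would be handed to a SAT solver such as Chaff.

```python
def bfm(predicates, num_vars):
    """predicates: (coeffs, bound) pairs meaning coeffs·x <= bound.
    Returns (enc, clauses): one Boolean variable per constraint and CNF
    clauses encoding  e_i & e_j -> e_derived  for every FM combination."""
    enc, clauses = {}, []
    def var_of(c):
        return enc.setdefault(c, len(enc) + 1)
    pool = [(tuple(c), b) for c, b in predicates]
    for p in pool:
        var_of(p)
    for v in range(num_vars):                        # eliminate x_v
        pos = [p for p in pool if p[0][v] > 0]
        neg = [p for p in pool if p[0][v] < 0]
        new = []
        for cp, bp in pos:
            for cn, bn in neg:
                sp, sn = -cn[v], cp[v]               # positive scaling factors
                c = tuple(sp * a + sn * d for a, d in zip(cp, cn))
                derived = (c, sp * bp + sn * bn)
                ei, ej = var_of((cp, bp)), var_of((cn, bn))
                if all(x == 0 for x in c) and derived[1] < 0:
                    clauses.append((-ei, -ej))       # e_i & e_j -> false
                else:
                    clauses.append((-ei, -ej, var_of(derived)))
                    new.append(derived)
        pool = [p for p in pool if p[0][v] == 0] + new
    return enc, clauses

# The four predicates of the running example; the returned clauses, conjoined
# with the Boolean skeleton  e1 & e2 & (e3 | e4),  form the SAT instance.
enc, clauses = bfm([([1, -1, 0], 0), ([1, 0, -1], 0),
                    ([-1, 1, 2], 0), ([0, 0, -1], -1)], 3)
print(clauses)   # [(-1, -3, 5), (-2, -3, 6), (-5, -4)]
```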
Slide 11: Problem – redundant constraints
φ: … ∧ (x1 < x2 – 3 ∨ (x2 < x3 – 1 ∧ x3 < x1 + 1))
With case splitting, x1 < x2 – 3 is never conjoined with x2 < x3 – 1 or with x3 < x1 + 1, so no constraints are generated from those pairs.
BFM works on the conjunction x1 < x2 – 3 ∧ x2 < x3 – 1 ∧ x3 < x1 + 1 and therefore generates constraints from these pairs as well – redundant constraints.
Slide 12: Solution – Conjunctions Matrices (1/3)
Let φd be the DNF representation of φ.
We only need to consider pairs of constraints that appear together in one of the clauses of φd.
Deriving φd is exponential. But knowing whether a given set of constraints shares a clause in φd is polynomial, using Conjunctions Matrices.
Slide 13: Conjunctions Matrices (2/3)
Let φ be a formula in NNF, and let li and lj be two literals in φ. The joining operand of li and lj is the lowest joint parent of li and lj in the parse tree of φ.
The conjunctions matrix Mφ has Mφ[li, lj] = 1 iff the joining operand of li and lj is a conjunction (∧).
Example: φ = l0 ∨ (l1 ∧ (l2 ∨ l3)). The joining operand of l1 and l2 (and of l1 and l3) is ∧, so those entries are 1; every pair involving l0, and the pair (l2, l3), has joining operand ∨, so those entries are 0.
Slide 14: Conjunctions Matrices (3/3)
Claim 1: A set of literals L = {l0, l1, …, ln} shares a clause in φd if and only if for all li, lj ∈ L with i ≠ j, Mφ[li, lj] = 1.
We can therefore restrict Fourier-Motzkin to pairs of constraints whose corresponding entry in Mφ equals 1. A sketch of computing Mφ from the parse tree follows.
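Here is a sketch of how the conjunctions matrix might be computed (my own formulation; the slides do not give an algorithm): walking the NNF parse tree bottom-up, a ∧-node is the joining operand of every pair made of one literal from its left subtree and one from its right subtree, so those entries are set to 1; a ∨-node sets nothing. Applied to φ' = e1 ∧ e2 ∧ (e3 ∨ e4) from the BFM example, only the pair (e3, e4) gets a 0.

```python
from itertools import product

def conjunctions_matrix(tree):
    """tree is ('and', l, r) | ('or', l, r) | a literal name (str).
    Returns (literals, M) where M[i][j] = 1 iff the lowest joint parent
    (the 'joining operand') of literals i and j is a conjunction."""
    literals, M = [], {}

    def walk(node):
        if isinstance(node, str):                   # a literal / leaf
            literals.append(node)
            return [node]
        op, left, right = node
        ls, rs = walk(left), walk(right)
        if op == 'and':
            for a, b in product(ls, rs):            # joining operand is 'and'
                M[(a, b)] = M[(b, a)] = 1
        return ls + rs

    walk(tree)
    n = len(literals)
    return literals, [[1 if i == j else M.get((literals[i], literals[j]), 0)
                       for j in range(n)] for i in range(n)]

# phi' = e1 & e2 & (e3 | e4): only the pair (e3, e4) fails to share a DNF clause
lits, M = conjunctions_matrix(('and', 'e1', ('and', 'e2', ('or', 'e3', 'e4'))))
print(lits)   # ['e1', 'e2', 'e3', 'e4']
print(M)      # M[2][3] == 0, all other off-diagonal entries == 1
```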
Slide 15: BFM – example, with conjunctions matrices
e1: x1 – x2 ≤ 0, e2: x1 – x3 ≤ 0, e3: -x1 + 2x3 + x2 ≤ 0, e4: -x3 ≤ -1
φ': e1 ∧ e2 ∧ (e3 ∨ e4)
In the conjunctions matrix over e1..e4, every pair is 1 except M[e3, e4] = 0.
FM, restricted by the matrix, derives e5: 2x3 ≤ 0 (adding e1 ∧ e3 → e5) and e6: x2 + x3 ≤ 0 (adding e2 ∧ e3 → e6); extending the matrix gives M[e4, e5] = M[e4, e6] = 0 and M[e5, e6] = 1, with all other pairs 1.
Since M[e4, e5] = 0, the constraint that would have been generated from e4 and e5 (the contradiction 0 ≤ -1 of slide 10) is saved.
Slide 16: Complexity of the reduction
Let c1 denote the number of constraints generated by BFM combined with conjunctions matrices, and c2 the total number of constraints generated with case splitting.
Claim 2: c1 ≤ c2.
Claim 3: Typically c1 << c2. The reason: in DNF, the same pair of constraints can appear many times; with BFM it is only solved once.
Theoretically, there can still be a doubly exponential number of constraints.
Slide 17: Complexity of solving the SAT instance
Claim 4: The complexity of solving the resulting SAT instance is bounded by roughly 2^m times the size of the formula, where m is the number of predicates in φ.
The reason: all the clauses that we add are Horn clauses. Therefore, for a given assignment to the original encoding variables of φ, all the derived constraints are implied in linear time.
Overall complexity: the Fourier-Motzkin reduction plus SAT solving within the bound above.
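To make the "implied in linear time" step concrete, here is a small sketch of saturating the added clauses by forward propagation (my own representation; the simple loop below is quadratic in the worst case, while the standard counter-based implementation of Horn propagation achieves the linear bound):

```python
def propagate(horn_clauses, assignment):
    """horn_clauses: ((p, q), r) meaning p & q -> r, with r == None for 'false'.
    assignment: the set of original encoding variables assigned True.
    Returns the set of implied variables, or None if 'false' is derived."""
    true = set(assignment)
    changed = True
    while changed:
        changed = False
        for (p, q), r in horn_clauses:
            if p in true and q in true and r not in true:
                if r is None:
                    return None             # the chosen predicates are inconsistent
                true.add(r)
                changed = True
    return true

# The clauses of slide 10: e1&e3 -> e5, e2&e3 -> e6, e4&e5 -> false
clauses = [(('e1', 'e3'), 'e5'), (('e2', 'e3'), 'e6'), (('e4', 'e5'), None)]
print(propagate(clauses, {'e1', 'e2', 'e4'}))   # consistent assignment
print(propagate(clauses, {'e1', 'e3', 'e4'}))   # derives e5, then false -> None
```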
Slide 18: Experimental results (1/2)
Reduction time of ‘2-CNF style’ random instances.
Solving the instances with Chaff takes a few seconds each.
With case splitting, only the 10x10 instance could be solved (~600 sec.).
Slide 19: Experimental results (2/2)
Seven hardware designs with equalities and inequalities:
- All seven were solved with BFM in a few seconds.
- Five were solved with ICS in a few seconds; the other two could not be solved.
- The reason (?): ICS has a more efficient implementation of Fourier-Motzkin than PORTA.
On the other hand, on the standard ICS benchmarks (a conjunction of inequalities), some could not be solved with BFM, while ICS solves all of them in a few seconds.
Slide 20: Some Known Techniques
Quantifier-free Presburger formulas:
- Branch and Bound
- SUP-INF (Bledsoe 1974)
- Omega test (Pugh 1991)
- …
Slide 21: Quantifier-free Presburger formulas
The classical Fourier-Motzkin method finds real solutions.
Geometrically, a system of real inequalities defines a convex polyhedron. Each elimination step projects the data to a lower dimension; geometrically, this means it finds the ‘shadow’ of the polyhedron.
Slide 22: The Omega Test (1/3) – Pugh (1993)
The shadow of constraints over the integers is not convex: satisfiability of the real shadow does not imply satisfiability of the higher-dimensional system.
A partial solution: consider only the areas above which the system is at least one unit ‘thick’ – the dark shadow. If there is an integral point in the dark shadow, there is also an integral point above it.
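As an illustration, here is a sketch of the real-shadow and dark-shadow conditions for a single pair of bounds on the eliminated variable, following Pugh's formulation as I recall it (the slide gives only the geometric picture); a, b, α, β are assumed to be integers with α, β > 0.

```python
def shadow_conditions(alpha, a, beta, b):
    """Upper bound  beta*x <= b  and lower bound  a <= alpha*x  on integer x,
    with alpha, beta > 0.  Returns (real_shadow, dark_shadow):
    real_shadow  - some real-valued x satisfies both bounds;
    dark_shadow  - the gap is wide enough to guarantee an *integer* x."""
    real_shadow = a * beta <= alpha * b
    dark_shadow = alpha * b - a * beta >= (alpha - 1) * (beta - 1)
    return real_shadow, dark_shadow

# 2 <= 3x and 2x <= 3: a real x in [2/3, 3/2] exists, and the dark-shadow
# test also succeeds, so an integer solution (x = 1) is guaranteed.
print(shadow_conditions(alpha=3, a=2, beta=2, b=3))   # (True, True)
```

The dark-shadow inequality α·b − a·β ≥ (α − 1)(β − 1) is exactly the "at least one unit thick" condition: whenever it holds, the interval [a/α, b/β] must contain an integer.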
Slide 23: The Omega test (2/3) – Pugh (1993)
- If there is no solution to the real shadow, φ is unsatisfiable.
- If there is an integral solution to the dark shadow, φ is satisfiable.
- Otherwise (‘the omega nightmare’), check a small set of planes (‘splinters’).
Slide 24: The Omega test (3/3) – Pugh (1993)
Input: ∃xn. C, where xn is an integer variable and C is a conjunction of inequalities.
In each elimination step, xn is eliminated: the output is C' ∨ ∃ integer xn. S, where C' is the dark shadow (a formula without xn) and S contains the splinters.
Slide 25: Boolean Omega Test
1. Normalize (eliminate all negations).
2. Encode each predicate with a Boolean variable.
3. Solve the conjoined list of constraints with the Omega test, adding the new constraints to φ'. For example, if inequality #1 ∧ inequality #2 implies inequality #3 ∨ inequality #4, add the clause e1 ∧ e2 → e3 ∨ e4.
Slide 26: Related work
A reduction to SAT is not the only way …
Slide 27: The CVC approach (Stump, Barrett, Dill, CAV 2002)
- Encode each predicate with a Boolean variable.
- Solve the SAT instance.
- Check whether the assignment to the encoded predicates is consistent (using e.g. Fourier-Motzkin).
- If consistent, return SAT; otherwise, backtrack.
A sketch of this loop follows.
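A sketch of that lazy loop, with hypothetical helpers: sat_solve, theory_consistent, and negate are stand-ins for a real SAT solver, a Fourier-Motzkin-style consistency check, and predicate negation; none of them are named on the slide.

```python
def lazy_solve(skeleton, predicates, sat_solve, theory_consistent, negate):
    """skeleton: CNF clauses over Boolean variables encoding the predicates.
    predicates: dict mapping each Boolean variable to its linear predicate.
    sat_solve(clauses) -> {var: bool} or None when unsatisfiable."""
    clauses = list(skeleton)
    while True:
        model = sat_solve(clauses)
        if model is None:
            return 'UNSAT'
        # the set of linear constraints asserted by this propositional model
        asserted = [predicates[v] if val else negate(predicates[v])
                    for v, val in model.items()]
        if theory_consistent(asserted):      # e.g. run Fourier-Motzkin on them
            return 'SAT', model
        # block exactly this assignment and let the SAT solver backtrack
        clauses.append([(-v if val else v) for v, val in model.items()])
```

A real implementation would typically learn a smaller conflict clause rather than blocking the entire assignment.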
Slide 28: Difference Decision Diagrams (Møller, Lichtenberg, Andersen, Hulgaard, 1999)
- Similar to OBDDs, but the nodes are ‘separation predicates’ (the figure shows a small DDD over predicates such as x1 – x3 < 0 and x2 – x1 < 0, with 0/1 terminals).
- Each path is checked for consistency, using Bellman-Ford (‘path-reduce’); a sketch of such a check follows.
- Worst case: an exponential number of such paths.
- Can be easily adapted to disjunctive linear arithmetic.
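A conjunction of separation predicates of the form x − y < c or x − y ≤ c is consistent iff the corresponding constraint graph has no negative cycle, which Bellman-Ford detects. A minimal sketch (my own representation; strict bounds are handled as ≤ c − 1, which assumes integer-valued variables):

```python
def consistent(constraints):
    """constraints: (x, y, c, strict) meaning x - y < c (strict) or x - y <= c.
    Build edge y -> x with weight c (or c-1 for strict, assuming integer
    variables) and report False iff a negative cycle exists."""
    nodes = {v for x, y, _, _ in constraints for v in (x, y)}
    edges = [(y, x, c - 1 if strict else c) for x, y, c, strict in constraints]
    dist = {v: 0 for v in nodes}              # Bellman-Ford from a virtual source
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more relaxation round: any further improvement means a negative cycle
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

# x1 - x3 < 0,  x3 - x2 <= 0,  x2 - x1 < 0  is inconsistent (cycle of weight -2)
print(consistent([('x1', 'x3', 0, True), ('x3', 'x2', 0, False),
                  ('x2', 'x1', 0, True)]))   # False
```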
Slide 29: Finite domain instantiation
- Disjunctive linear arithmetic and its sub-theories enjoy the ‘small model property’.
- A known sufficient domain for equality logic: 1..n, where n is the number of variables; a brute-force illustration follows.
- For this logic it is possible to compute a significantly smaller domain for each variable (Pnueli et al., 1999). The algorithm is a graph-based analysis of the formula structure.
- Potentially this can be extended to linear arithmetic.
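A brute-force illustration of the small model property for equality logic (this is not the Pnueli et al. range-allocation algorithm, which computes much smaller per-variable ranges; the helper below is only a sketch): with n variables it suffices to search assignments over {1, …, n}.

```python
from itertools import product

def sat_equality_logic(variables, formula):
    """formula: a predicate over a dict assignment {var: value}.
    By the small model property, checking all assignments over the domain
    1..n (n = number of variables) decides satisfiability of equality logic."""
    n = len(variables)
    for values in product(range(1, n + 1), repeat=n):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (x = y) and (y != z): satisfiable, e.g. x = y = 1, z = 2
print(sat_equality_logic(['x', 'y', 'z'],
                         lambda a: a['x'] == a['y'] and a['y'] != a['z']))
```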
Slide 30: Reduction to SAT is not the only way…
Instead of giving all variables the range [1..11], analyze connectivity (the figure shows a constraint graph over x1, x2, y1, y2, g1, g2, z, u1, u2, f1, f2):
- Range of all variables 1..11: state space 11^11.
- After the analysis: x1, y1, x2, y2 ∈ {0-1}; u1, f1, f2, u2 ∈ {0-3}; g1, g2, z ∈ {0-2}; state space ~10^5.
- Further analysis results in a state space of 4.
Q: Can this approach be extended to Linear Arithmetic?