Efficient Craig Interpolation for Linear Diophantine (Dis)Equations & Linear Modular Equations Jain, Clarke & Grumberg CAV08.

Presentation transcript:

Efficient Craig Interpolation for Linear Diophantine (Dis)Equations & Linear Modular Equations Jain, Clarke & Grumberg CAV08

We saw (in Yael's talk): interpolants are used in abstraction refinement for finding a set of predicates that rule out spurious counterexamples.
[Figure: a control-flow graph with locations 1-5 and an error location; along the path x:=ctr; ...; ctr:=ctr+1; y:=ctr the relevant predicates are x=m, x≠m, y=m+1, y≠m+1.]
These predicates are of the form of linear (dis)equations: c1x1+c2x2+… + cnxn = (≠) c0

We first discuss equations of two types:
c1x1+c2x2+… + cnxn = c0, with rational coefficients ci and integral variables xi: a Linear Diophantine Equation (LDE).
c1x1+c2x2+… + cnxn ≡ c0 (mod m), again with rational coefficients and integral variables: a Linear Modular Equation (LME).

A system of LDEs can be written in matrix form as AX = C. A system of LMEs can be written as AX ≡m C.

A system of LDEs can also be split as a conjunction: [A1; A2] X = [C1; C2] is the same as (A1X = C1) ^ (A2X = C2).

A system of LDEs AX = B is unsatisfiable if it has no integral solution for X.
Example:
[1 1 0; 1 -1 0; 0 2 2] [x y z]^T = [1 1 3]^T, i.e. x+y=1, x-y=1, 2y+2z=3.
The first two equations force y=0 and x=1, and then 2*0+2z=3 gives z=1.5, which is not integral.
We say that (A1X = C1) ^ (A2X = C2) == false, where A1 consists of the first two rows and A2 of the last row.

Theorem: AX=B == false iff there exists a rational vector R such that
RA is integral, and
RB is not an integer.
We call R a proof of unsatisfiability for AX=B.
Example: AX=B := [1 -2 0; 1 0 -2] [x y z]^T = [1 0]^T (i.e. x-2y=1 and x-2z=0).
R := [0.5 -0.5]
RA = [0 -1 1] is integral and RB = 0.5 is not an integer, so AX=B == false.
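
A minimal sketch of this check in Python (the helper name is mine; exact rational arithmetic via the standard fractions module):

from fractions import Fraction

def is_proof_of_unsat(R, A, B):
    """Check that R certifies AX=B == false: RA is integral, RB is not an integer."""
    R = [Fraction(r) for r in R]
    RA = [sum(r * a for r, a in zip(R, col)) for col in zip(*A)]
    RB = sum(r * b for r, b in zip(R, B))
    return all(c.denominator == 1 for c in RA) and RB.denominator != 1

# The example above: x - 2y = 1 and x - 2z = 0 have no common integral solution.
A = [[1, -2, 0], [1, 0, -2]]
B = [1, 0]
print(is_proof_of_unsat([Fraction(1, 2), Fraction(-1, 2)], A, B))   # True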

An interpolant for (A1X = C1) ^ (A2X = C2) == false is a system AX=C such that:
1. (A1X = C1) implies (AX=C): every integral solution of A1X=C1 is also an integral solution of AX=C. For instance, A1X=C1 implies UA1X=UC1 for any rational vector U.
2. (AX=C) ^ (A2X = C2) == false: no integral X satisfies both AX=C and A2X=C2.
3. AX=C refers only to the xi common to A1 and A2, i.e. only to xi that have coefficients ≠ 0 both in A1 and in A2.

Example:
[1 1 0; 1 -1 0] [x y z]^T = [1 1]^T  ^  [0 2 2] [x y z]^T = [3]  == false.
Take U = [0.5 -0.5]. Then U [1 1 0; 1 -1 0] = [0 1 0] and U [1 1]^T = 0, so the first system implies [0 1 0] [x y z]^T = 0, i.e. y = 0.
This is an interpolant: it is implied by the first system, it mentions only the common variable y, and (y = 0) ^ (2y+2z = 3) == false, since 2z = 3 has no integral solution.

An unsatisfiable system of LDEs does not always have an LDE as an interpolant.
Example:
(x - 2y = 0) ^ (x - 2z = 1) == false   (x is even ^ x is odd).
The proof uses the following lemma: AX=B implies CX=D iff AX=B is unsatisfiable or there exists a rational vector R such that C = RA and D = RB.

If this system had an LDE interpolant, the lemma says it would have to be of the form r(x-2y) = 0 for some rational r. Since the interpolant may only contain the common variable x, we must have r = 0. But 0=0 is not an interpolant: (x-2z = 1) ^ (0=0) is satisfiable.
However, there is an LME that is an interpolant:
[1 0 0] [x y z]^T ≡2 0, i.e. x ≡2 0.
In fact, there always exists an LME interpolant.

An algorithm for finding interpolants.
Let AX=A' ^ BX=B' == false, and let R = [R1 R2] be a proof of unsatisfiability of the combined system [A; B] X = [A'; B']:
R1AX + R2BX = R1A' + R2B', where R1A + R2B is integral and R1A' + R2B' is not an integer.
The LDE R1AX = R1A' is called a partial interpolant for the system. Write it as
Σ ai·xi (over variables occurring only in AX=A') + Σ bi·xi (over variables occurring both in AX=A' and in BX=B') = R1A'.

Lemma: each ai (the coefficient in R1A of a variable occurring only in AX=A') is an integer.
Proof: such a variable does not appear in R2BX, so its coefficient in R1A equals its coefficient in R1A + R2B, which is integral.

Lemma: the partial interpolant R1AX = R1A' satisfies
1. AX=A' implies R1AX = R1A';
2. (R1AX = R1A') ^ (BX=B') == false.
Proof of 2: the conjunction (R1AX = R1A') ^ (BX=B') is the system [R1A; B] X = [R1A'; B'], and [1 R2] is a proof of unsatisfiability for it:
[1 R2] [R1A; B] = R1A + R2B is integral, while [1 R2] [R1A'; B'] = R1A' + R2B' is not an integer.

If all ai = 0, then the partial interpolant R1AX = R1A' is also an interpolant for AX=A' ^ BX=B': we saw that the first two conditions hold, and when every ai = 0 the equation R1AX = R1A' is only over variables common to AX=A' and BX=B'.

Example:
[1 1 0; 1 -1 0] [x y z]^T = [1 1]^T  ^  [0 2 2] [x y z]^T = [3]  == false.
The combined system is [1 1 0; 1 -1 0; 0 2 2] [x y z]^T = [1 1 3]^T.
A proof of unsatisfiability: R = [0.5 -0.5 0.5], so R1 = [0.5 -0.5] and R2 = [0.5].
The partial interpolant: R1 [1 1 0; 1 -1 0] [x y z]^T = R1 [1 1]^T, i.e. [0 1 0] [x y z]^T = 0, that is y = 0.
It is only over y, which is common to both LDEs, so the partial interpolant is also an interpolant.
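
A minimal sketch of this step in Python (helper name is mine; it computes the partial interpolant R1AX = R1A' and reproduces the example above):

from fractions import Fraction

def partial_interpolant(R1, A, Ap):
    """Return (coefficients, rhs) of the LDE R1*A X = R1*A'."""
    R1 = [Fraction(r) for r in R1]
    coeffs = [sum(r * a for r, a in zip(R1, col)) for col in zip(*A)]
    rhs = sum(r * c for r, c in zip(R1, Ap))
    return coeffs, rhs

# A-part: x+y=1, x-y=1; B-part: 2y+2z=3; proof of unsatisfiability R = [1/2, -1/2, 1/2].
A, Ap = [[1, 1, 0], [1, -1, 0]], [1, 1]
coeffs, rhs = partial_interpolant([Fraction(1, 2), Fraction(-1, 2)], A, Ap)
print(coeffs, rhs)   # coefficients equal [0, 1, 0] and rhs equals 0, i.e. the interpolant y = 0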

This does not always work. Consider again
(x - 2y = 0) ^ (x - 2z = 1) == false   (x is even ^ x is odd).
The combined system is [1 -2 0; 1 0 -2] [x y z]^T = [0 1]^T, and a proof of unsatisfiability is R = [0.5 0.5].
The partial interpolant: 0.5·(x - 2y) = 0.5·0, i.e. [0.5 -1 0] [x y z]^T = 0.
It is over x and y, and y is not common to both LDEs, so the partial interpolant is not an interpolant.
(Flashback: we already saw that this system has no LDE interpolant at all.)

Obtaining an LME interpolant, by removing the variables that are not common to AX=A' and BX=B'.
The partial interpolant is Σ ai·xi + Σ bi·xi = R1A', where the ai (the coefficients of the variables occurring only in AX=A') are integers.
Let α := gcd of the ai; α is an integer. Let β := an integer such that β | α.
Then Σ bi·xi ≡β R1A' is an interpolant (the partial interpolant with the A-local terms dropped, read modulo β; here s ≡β t means that s - t is an integral multiple of β, and the coefficients bi may be rational).

Proof that Σ bi·xi ≡β R1A' is an interpolant:
1. AX=A' implies R1AX = R1A', which implies R1AX ≡β R1A'; since β | α and α | ai, every term ai·xi is ≡β 0, so Σ bi·xi ≡β R1A'.
2. Suppose BX=B' had an integral solution xi = gi that also satisfies Σ bi·xi ≡β R1A'. From BX=B' we get R2BX = R2B', so xi = gi is a solution of R2BX = R2B', i.e. R2BG = R2B'.

Since each ai·gi is a multiple of β, R1AG ≡β Σ bi·gi ≡β R1A', so R1AG = R1A' + β·k for some integer k; adding R2BG = R2B' gives (R1A + R2B)G = R1A' + R2B' + β·k. The left-hand side is an integer (R1A + R2B is integral and G is integral), so R1A' + R2B' would be an integer; but R is a proof of unsatisfiability, so R1A' + R2B' is not an integer. A contradiction, hence (Σ bi·xi ≡β R1A') ^ (BX=B') == false.
3. The expression Σ bi·xi ≡β R1A' is over variables common to AX=A' and BX=B'.
An interpolant!

An algorithm for finding interpolants (summary). Given an unsatisfiable system of LDEs AX=A' ^ BX=B':
1. Compute a proof of unsatisfiability [R1 R2]. (How? Still to come...)
2. Compute the partial interpolant R1AX = R1A'.
3. If R1AX = R1A' is not only over the common variables VAB:
3.1 compute the gcd α of the coefficients of the xi in VA \ VB,
3.2 compute an integer β that divides α,
3.3 return Σ bi·xi ≡β R1A' (the partial interpolant restricted to VAB, modulo β);
else return R1AX = R1A'.
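
A sketch of the whole procedure in Python, assuming a proof of unsatisfiability [R1 R2] has already been computed (the helper names and the choice β = α are mine; it uses the convention from the previous slides that s ≡β t means s - t is an integer multiple of β):

from fractions import Fraction
from functools import reduce
from math import gcd

def lde_interpolant(R1, A, Ap, common):
    """Interpolant for (A X = Ap) ^ (B X = Bp), given the A-part of a proof [R1 R2].
    common[j] says whether variable j also occurs in the B-part.
    Returns ('LDE', coeffs, rhs) or ('LME', coeffs, rhs, beta)."""
    R1 = [Fraction(r) for r in R1]
    coeffs = [sum(r * a for r, a in zip(R1, col)) for col in zip(*A)]
    rhs = sum(r * c for r, c in zip(R1, Ap))
    local = [c for c, is_c in zip(coeffs, common) if not is_c and c != 0]
    if not local:                                      # already over the common variables VAB
        return ('LDE', coeffs, rhs)
    assert all(c.denominator == 1 for c in local)      # lemma: A-local coefficients are integers
    beta = reduce(gcd, (abs(int(c)) for c in local))   # alpha; here we simply take beta = alpha
    kept = [c if is_c else Fraction(0) for c, is_c in zip(coeffs, common)]
    return ('LME', kept, rhs, beta)                    # meaning: sum of kept[j]*x_j ≡beta rhs

# The "x even / x odd" example: A-part x - 2y = 0, B-part x - 2z = 1, R = [1/2, 1/2].
print(lde_interpolant([Fraction(1, 2)], [[1, -2, 0]], [0], [True, False, False]))
# -> an LME with coefficients [1/2, 0, 0], rhs 0 and beta 1: 0.5x ≡1 0, i.e. x is even (x ≡2 0)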

Interpolants for LMEs: c1x1+c2x2+… + cnxn ≡ c0 (mod m); a system of LMEs is written AX ≡m C.
Theorem: AX ≡m B == false iff there exists a rational vector R such that
RA is integral,
mR is integral, and
RB is not an integer.
We call R a proof of unsatisfiability for AX ≡m B.
Example: AX ≡m B := [2 2; 2 1; 4 0] [x y]^T ≡8 [4 4 4]^T.
R := [1/4 -1/2 -1/8]
RA = [-1 0] is integral, mR = [2 -4 -1] is integral and RB = -3/2 is not an integer, so AX ≡8 B == false.

Proof. Consider an LME system CX ≡m D with t equations over x1, …, xn:
ci1x1 + ci2x2 + … + cinxn ≡m di   for i = 1, …, t.
For each equation add a new integer variable vi and write
ci1x1 + ci2x2 + … + cinxn + m·vi = di.
The new system C'Z = D, with Z = [x1 … xn v1 … vt]^T and C' = [C | m·I], is equi-satisfiable with CX ≡m D.
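
A small sketch of this reduction in Python (the helper name is mine); it builds C' = [C | m·I] so that CX ≡m D and C'Z = D are equi-satisfiable over the integers:

def lme_to_lde(C, m):
    """Append one fresh column m*e_i per equation: C' = [C | m*I]."""
    t = len(C)
    return [row + [m if i == j else 0 for j in range(t)] for i, row in enumerate(C)]

# Example: 2x + 2y ≡8 4 and 2x + y ≡8 4 become 2x + 2y + 8*v1 = 4 and 2x + y + 8*v2 = 4.
print(lme_to_lde([[2, 2], [2, 1]], 8))   # [[2, 2, 8, 0], [2, 1, 0, 8]]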

CX ≡m D has an integral solution iff C'Z = D has one. So CX ≡m D has no integral solution iff C'Z = D has no integral solution, iff there exists a vector R such that RC' is integral and RD is not an integer.
Let R = [r1 r2 … rt]. Then
RC' = R [C | m·I] = [RC | m·r1 m·r2 … m·rt] = [RC mR],
so RC' is integral exactly when RC is integral and mR is integral. This proves the theorem.

Interpolation for LMEs. Let (AX ≡m A') ^ (BX ≡m B') == false, and let R = [R1 R2] be a proof of unsatisfiability.
Write R1AX = Σ ai·xi (over variables occurring only in AX ≡m A') + Σ bi·xi (over the common variables), and let S = {ai | ai ≠ 0}.
Write mR1 = [d1 d2 d3 ... dk], and let T = {di | di ≠ 0}.
If T = Φ, the interpolant is 0 ≡m 0.
Otherwise let α = gcd(S U T) and let β be an integer such that β | α. Then
(m/β R1)AX ≡m (m/β R1)A'
is an interpolant.

Proof. (AX ≡m A') ^ (BX ≡m B') == false, and R = [R1 R2] is a proof of unsatisfiability of the combined system [A; B] X ≡m [A'; B']:
R1A + R2B is integral, so the coefficients in R1A of the xi occurring only in A are integral;
mR = [mR1 mR2] is integral, so mR1 is integral;
R1A' + R2B' is not an integer.

If T = Φ then mR1 = 0, hence R1 = 0. Then R2B is integral, mR2 is integral and R2B' is not an integer, so (BX ≡m B') == false on its own, and the trivial LME 0 ≡m 0 (== true) is an interpolant.
If T ≠ Φ: the elements of S and T are integers, so α := gcd(S U T) is an integer.

Let β be an integer such that β | α. We need to prove that (m/β R1)AX ≡m (m/β R1)A' is an interpolant.
Lemma: for every integral vector U, the system CX ≡m D implies UCX ≡m UD.
1. mR1 is integral and β divides every element of mR1, so 1/β·mR1 = m/β·R1 is integral; call it U. By the lemma, AX ≡m A' implies UAX ≡m UA', i.e. (m/β R1)AX ≡m (m/β R1)A'.

2. UAX ≡m UA' ^ BX ≡m B' == false, because [β/m, R2] is a proof of unsatisfiability for the combined system [UA; B] X ≡m [UA'; B']:
[β/m, R2] [UA; B] = β/m·(m/β)R1A + R2B = R1A + R2B, which is integral;
m·[β/m, R2] = [β, mR2], which is integral;
[β/m, R2] [UA'; B'] = β/m·(m/β)R1A' + R2B' = R1A' + R2B', which is not an integer.

3. (m/β R1)AX ≡m (m/β R1)A' is over the common variables: in (m/β R1)A, the coefficient of a variable occurring only in A is (m/β)·ai = m·(ai/β), and ai/β is an integer because β | α and α | ai; such a term is ≡m 0 and can be dropped, leaving an LME over the common variables only.

Example:
([2 2; 2 1] [x y]^T ≡8 [4 4]^T)  ^  ([4 0] [x y]^T ≡8 [4])  == false.
The combined system is [2 2; 2 1; 4 0] [x y]^T ≡8 [4 4 4]^T, with proof of unsatisfiability R = [1/4 -1/2 -1/8], so R1 = [1/4 -1/2] and R2 = [-1/8].
R1AX = [1/4 -1/2] [2 2; 2 1] [x y]^T = [-1/2 0] [x y]^T = -1/2·x, so S = Φ (y, the only variable not occurring in the B-part, has coefficient 0).
mR1 = [2 -4], so T = {2, -4}, α = 2, and we may take β = 2 or β = 1.
For β = 1: (8·R1)AX ≡8 (8·R1)A' gives [2 -4] [2 2; 2 1] [x y]^T ≡8 [2 -4] [4 4]^T, i.e. -4x ≡8 -8.
For β = 2: (4·R1)AX ≡8 (4·R1)A' gives [1 -2] [2 2; 2 1] [x y]^T ≡8 [1 -2] [4 4]^T, i.e. -2x ≡8 -4.
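
A sketch reproducing this computation in Python (exact arithmetic, names are mine; β is passed in explicitly):

from fractions import Fraction

def lme_interpolant(R1, A, Ap, m, beta):
    """Return (coefficients, rhs) of the LME (m/beta * R1) A X ≡m (m/beta * R1) A'."""
    U = [Fraction(m, beta) * Fraction(r) for r in R1]
    coeffs = [sum(u * a for u, a in zip(U, col)) for col in zip(*A)]
    rhs = sum(u * c for u, c in zip(U, Ap))
    return coeffs, rhs

A, Ap, m = [[2, 2], [2, 1]], [4, 4], 8
R1 = [Fraction(1, 4), Fraction(-1, 2)]
print(lme_interpolant(R1, A, Ap, m, 1))   # coefficients [-4, 0], rhs -8:  -4x ≡8 -8
print(lme_interpolant(R1, A, Ap, m, 2))   # coefficients [-2, 0], rhs -4:  -2x ≡8 -4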

What if the moduli are different? Suppose (AX ≡m1 A') ^ (BX ≡m2 B') == false. Let m = lcm(m1, m2). Using standard modular arithmetic,
(AX ≡m1 A') ^ (BX ≡m2 B') is equivalent to ((m/m1)·AX ≡m (m/m1)·A') ^ ((m/m2)·BX ≡m (m/m2)·B').
For more than two formulas, use m = lcm(m1, m2, m3, …) and multiply the i-th formula by m/mi.
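
A minimal sketch of this normalization (the helper name is mine):

from math import lcm   # Python 3.9+

def to_common_modulus(systems):
    """systems: list of (A, C, mi) triples. Returns (scaled systems, common modulus m)."""
    m = lcm(*(mi for _, _, mi in systems))
    scaled = []
    for A, C, mi in systems:
        f = m // mi
        scaled.append(([[f * a for a in row] for row in A], [f * c for c in C], m))
    return scaled, m

# Example: x ≡4 1 and 2x ≡6 2 are rewritten modulo lcm(4, 6) = 12 as 3x ≡12 3 and 4x ≡12 4.
print(to_common_modulus([([[1]], [1], 4), ([[2]], [2], 6)]))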

Obtaining proofs of unsatisfiability. First use Gaussian elimination: if AX=B has no rational solution, then it has no integral solution either. Otherwise, use the Hermite Normal Form (HNF).
Hermite Normal Form: every full-row-rank m×n matrix A can be brought to the form [E 0], where E is m×m and the zero block is m×(n-m); E is lower triangular and invertible, all its entries are non-negative, and the maximal element of each row lies on the diagonal. More precisely, there exists a unimodular matrix U (integral and invertible with an integral inverse; unimodular matrices are closed under product and inversion) such that AU = [E 0].
The HNF can be obtained by applying the three basic column operations to A.

Lemma: AX=B has no integral solution iff E^(-1)B is not integral.
To obtain R, a proof of unsatisfiability:
1. Compute AU = [E 0].
2. If E^(-1)B is not integral, pick an index i such that E^(-1)B[i] is not an integer and let R' be the i-th row of E^(-1). Then R'B is not an integer and R'A is integral.
Proof that E^(-1)A is integral: AU = [E 0], so E^(-1)AU = [I 0], and hence E^(-1)A = [I 0]U^(-1), which is integral because U^(-1) is integral.
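
As an illustration only, here is a sketch of the special case where A is square and nonsingular (so no HNF machinery is needed): the unique rational solution is A^(-1)B, and if its i-th component is not an integer then the i-th row of A^(-1) is already a proof of unsatisfiability, since its product with A is a unit row vector and hence integral. All names are mine; the general (non-square) case would require an actual HNF computation, which is not shown here.

from fractions import Fraction

def proof_of_unsat_square(A, B):
    """A square and nonsingular. Return a proof of unsatisfiability of AX=B, or None if the solution is integral."""
    n = len(A)
    # Build [A | I] and run Gauss-Jordan elimination with exact rationals.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    Ainv = [row[n:] for row in M]
    sol = [sum(r * Fraction(b) for r, b in zip(row, B)) for row in Ainv]
    for i, s in enumerate(sol):
        if s.denominator != 1:
            return Ainv[i]   # R'A = e_i is integral, R'B = sol[i] is not an integer
    return None

# The example from the earlier slide: x+y=1, x-y=1, 2y+2z=3.
print(proof_of_unsat_square([[1, 1, 0], [1, -1, 0], [0, 2, 2]], [1, 1, 3]))
# -> the row [-1/2, 1/2, 1/2], a valid proof (the slides use [1/2, -1/2, 1/2]; both work)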

Proofs of unsatisfiability for LMEs: given AX ≡m B, each equation ti ≡m bi is rewritten as the equi-satisfiable LDE ti + m·vi = bi, with vi a new integer variable. In this way AX ≡m B is reduced to an equi-satisfiable system A'Z = B, and the proof of unsatisfiability is the same for both systems.

Handling disequations: c1x1+c2x2+… + cnxn ≠ c0. Disequations can also be written in matrix form, CX ≠ D, and a mixed system of equations and disequations is AX=B ^ CX≠D.
Theorem: AX=B ^ CX≠D has no integral solution iff
AX=B ^ CX≠D has no rational solution (this can be determined in polynomial time), or
AX=B has no integral solution (this can also be determined in polynomial time).

Let F = F1 ^ F2 and G = G1 ^ G2, where F1, G1 are the LDEs and F2, G2 are the linear Diophantine disequations (LDDs).
If F^G is unsatisfiable because F1^F2^G1^G2 has no rational solution, an interpolant can be computed.
If F^G is unsatisfiable because F1^G1 has no integral solution, an interpolant for F1^G1 can be computed, and it is also an interpolant for F^G.

For LMDs (linear modular disequations), the problem is NP-hard, by reduction from 3-SAT.
Variables in the 3-SAT instance: {z1, z2, …, zi, …, zn}. Introduce two variables for each zi: xi for zi and xi' for ¬zi.
Express the constraint "xi ≡4 0 and xi' ≡4 1, or xi ≡4 1 and xi' ≡4 0" by
L1 = ^i [ ¬(xi ≡4 xi') ^ ¬(xi ≡4 2) ^ ¬(xi ≡4 3) ^ ¬(xi' ≡4 2) ^ ¬(xi' ≡4 3) ]   (conjunction over all i).

For each clause (u ∨ v ∨ w) add the LMD ¬(u + v + w ≡4 0); it is falsified only when u, v and w are all assigned 0 (mod 4).
L2 = the conjunction, over all clauses (u ∨ v ∨ w), of ¬(u + v + w ≡4 0).
L = L1 ^ L2.
The 3-SAT formula is satisfiable iff L is satisfiable.
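
A small sketch of this reduction in Python (the data representation is my own choice): each LMD ¬(Σ ci·xi ≡4 d) is encoded as a pair (coefficient dict, d).

def three_sat_to_lmd(n_vars, clauses):
    """clauses: list of 3 literals each; a literal is +i for z_i and -i for ¬z_i.
    LMD variables: ('x', i) stands for z_i and ("x'", i) for ¬z_i.
    Returns the list of disequations ¬(lhs ≡4 rhs)."""
    diseqs = []
    for i in range(1, n_vars + 1):
        xi, xin = ('x', i), ("x'", i)
        diseqs.append(({xi: 1, xin: -1}, 0))    # ¬(xi - xi' ≡4 0), i.e. ¬(xi ≡4 xi')
        for v in (xi, xin):
            diseqs.append(({v: 1}, 2))          # ¬(v ≡4 2)
            diseqs.append(({v: 1}, 3))          # ¬(v ≡4 3)
    for clause in clauses:
        lhs = {}
        for lit in clause:
            v = ('x', abs(lit)) if lit > 0 else ("x'", abs(lit))
            lhs[v] = lhs.get(v, 0) + 1
        diseqs.append((lhs, 0))                 # ¬(u + v + w ≡4 0)
    return diseqs

# (z1 ∨ ¬z2 ∨ z3) contributes ¬(x1 + x2' + x3 ≡4 0) on top of the 5 per-variable constraints.
print(len(three_sat_to_lmd(3, [[1, -2, 3]])))   # 3*5 + 1 = 16 disequations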

Interpolants for LMEs, LDEs and LDDs can be computed in polynomial time using algebraic techniques. Existing tools based on predicate abstraction and CEGAR cannot discover the predicates computed by these techniques. Experimental results show that little unwinding is needed, thanks to the early discovery of appropriate LMEs.

Toda Raba! (Thank you very much!)

Handling disequations (continued): if F^G is unsatisfiable because F1^F2^G1^G2 has no rational solution, an interpolant can be computed.
Proof sketch. Lemma: a system AX=B has no rational solution iff there exists a vector R such that RA = 0 and RB ≠ 0.
If F^G is unsatisfiable because the conjunction of equations AX=B ^ A'X=B' has no rational solution, then such an R = [R1 R2] exists, and R1AX = R1B is an interpolant.

AX=B^A’X=B’ => Vcix, and R1AX=R1B is an interpolant.

(Presenter's note: remove this slide?)
Lemma: AX=B implies EX=F iff AX=B == false or there exists a rational row vector R such that E = RA and F = RB.
Lemma: AX=B implies ∨i (CiX = Di) iff AX=B implies CkX = Dk for some k.