ECI 2007: Specification and Verification of Object-Oriented Programs, Lecture 5



Arithmetic programs

In addition, integer-valued variables with affine operations:

φ ∈ Formula := A | ¬φ | φ ∧ φ
A ∈ Atom    := b | t = 0 | t > 0 | t ≥ 0
t ∈ Term    := c | x | t + t | t − t | c·t

b ∈ SymBoolConst    x ∈ SymIntConst    c ∈ {…, −1, 0, 1, …}
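The grammar above can be made concrete with a small evaluator. The encoding below (tagged tuples, tag names, and the helper names are my own, not from the lecture) represents terms and formulas and evaluates them under an assignment:

```python
# A sketch of the grammar as tagged tuples (the encoding is illustrative):
# terms are c | x | t + t | t - t | c*t; formulas are atoms, negations,
# and conjunctions.

def eval_term(t, env):
    tag = t[0]
    if tag == 'const':                        # c
        return t[1]
    if tag == 'var':                          # x
        return env[t[1]]
    if tag == 'add':                          # t + t
        return eval_term(t[1], env) + eval_term(t[2], env)
    if tag == 'sub':                          # t - t
        return eval_term(t[1], env) - eval_term(t[2], env)
    if tag == 'scale':                        # c * t
        return t[1] * eval_term(t[2], env)
    raise ValueError(tag)

def eval_formula(f, env):
    tag = f[0]
    if tag == 'bool':                         # symbolic boolean constant b
        return env[f[1]]
    if tag == 'eq0':                          # t = 0
        return eval_term(f[1], env) == 0
    if tag == 'gt0':                          # t > 0
        return eval_term(f[1], env) > 0
    if tag == 'ge0':                          # t >= 0
        return eval_term(f[1], env) >= 0
    if tag == 'not':                          # negation
        return not eval_formula(f[1], env)
    if tag == 'and':                          # conjunction
        return eval_formula(f[1], env) and eval_formula(f[2], env)
    raise ValueError(tag)

# x + 2y - 3 > 0 under x = 1, y = 2:  1 + 4 - 3 = 2 > 0
atom = ('gt0', ('sub', ('add', ('var', 'x'), ('scale', 2, ('var', 'y'))),
                ('const', 3)))
print(eval_formula(atom, {'x': 1, 'y': 2}))   # True
```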

Satisfiability modulo arithmetic

- A formula is a boolean combination of literals
- Each literal is a positive or negative atom
- Each atom is either a boolean variable or a linear constraint over integer variables

x ≤ y ∧ (a ∨ z > 0) ∧ (¬a ∨ x > y) ∧ y + z ≤ x

Abbreviate the atoms:

b ≡ x ≤ y    c ≡ z > 0    d ≡ x > y    e ≡ y + z ≤ x

Boolean skeleton: b ∧ (a ∨ c) ∧ (¬a ∨ d) ∧ e

The SAT solver proposes assignments for the skeleton; the arithmetic solver checks the corresponding conjunctions of atoms:

- b = T, e = T: the arithmetic solver reports satisfiable (x ≤ y and y + z ≤ x are consistent, e.g. z = 0, x = y)
- a = F forces c = T; for b = T, c = T, e = T the arithmetic solver reports unsatisfiable (x ≤ y, z > 0, y + z ≤ x)
- a = T forces d = T; for b = T, d = T, e = T the arithmetic solver reports unsatisfiable (x ≤ y, x > y)

Both choices of a are refuted, so the formula is unsatisfiable.
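The interaction above can be mimicked with a brute-force sketch (purely illustrative; a real solver uses DPLL plus a decision procedure rather than enumeration). It searches a small integer box for a model of the example formula:

```python
from itertools import product

def atoms(x, y, z):
    # b: x <= y,  c: z > 0,  d: x > y,  e: y + z <= x
    return (x <= y, z > 0, x > y, y + z <= x)

def skeleton(a, b, c, d, e):
    # b /\ (a \/ c) /\ (~a \/ d) /\ e
    return b and (a or c) and ((not a) or d) and e

def satisfiable(bound=5):
    # Brute force over a small integer box; a real lazy SMT solver would
    # let a SAT solver propose skeleton assignments and have the
    # arithmetic solver refute them.
    for x, y, z in product(range(-bound, bound + 1), repeat=3):
        b, c, d, e = atoms(x, y, z)
        for a in (False, True):
            if skeleton(a, b, c, d, e):
                return True
    return False

print(satisfiable())  # False: both choices of a are refuted arithmetically
```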

Affine constraints

A collection of m constraints over n variables:

a11 x1 + a12 x2 + … + a1n xn + c1 ≥ 0
a21 x1 + a22 x2 + … + a2n xn + c2 ≥ 0
…
am1 x1 + am2 x2 + … + amn xn + cm ≥ 0

The other constraint forms reduce to this one (over the integers):

a1 x1 + a2 x2 + … + an xn + c > 0   becomes   a1 x1 + a2 x2 + … + an xn + (c − 1) ≥ 0
a1 x1 + a2 x2 + … + an xn + c = 0   becomes   a1 x1 + a2 x2 + … + an xn + c ≥ 0
                                    and       (−a1) x1 + (−a2) x2 + … + (−an) xn + (−c) ≥ 0

Satisfiability problem for affine constraints

A collection of m constraints over n variables:

a11 x1 + a12 x2 + … + a1n xn + c1 ≥ 0
a21 x1 + a22 x2 + … + a2n xn + c2 ≥ 0
…
am1 x1 + am2 x2 + … + amn xn + cm ≥ 0

Does there exist an assignment to x1, x2, …, xn over the integers such that each constraint is satisfied?

Solving affine constraints

Integer linear programming
- NP-complete

Approximate the integers by rationals/reals: linear programming
- Polynomial time (Khachiyan 1979, Karmarkar 1984)
- Simplex algorithm (Dantzig 1963): exponential worst-case time, but polynomial behavior in practice
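The rational approximation is sound for unsatisfiability but not complete for the integers: a system can be satisfiable over the rationals yet have no integer solution. A tiny illustration (my own example, not from the slides):

```python
from fractions import Fraction

# The pair of constraints 2x - 1 >= 0 and -2x + 1 >= 0 forces 2x = 1.
def holds(x):
    return 2 * x - 1 >= 0 and -2 * x + 1 >= 0

print(holds(Fraction(1, 2)))                    # True: rational solution
print(any(holds(x) for x in range(-10, 11)))    # False: no integer solution
```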

Simplex Algorithm for Affine Constraints

Tableau

           x1    x2   …   xn
y1 ≥ 0 |  a11   a12   …   a1n  | c1
y2 ≥ 0 |  a21   a22   …   a2n  | c2
…
ym ≥ 0 |  am1   am2   …   amn  | cm

Read it as:

y1 = a11 x1 + a12 x2 + … + a1n xn + c1
y2 = a21 x1 + a22 x2 + … + a2n xn + c2
…
ym = am1 x1 + am2 x2 + … + amn xn + cm

with the sign constraints y1 ≥ 0, y2 ≥ 0, …, ym ≥ 0.

y1, …, ym are the row variables; x1, …, xn are the column variables.

x − y + 1 ≥ 0
x + y + 3 ≥ 0
−x − 4 ≥ 0

           x    y
a ≥ 0 |    1   −1  |  1
b ≥ 0 |    1    1  |  3
c ≥ 0 |   −1    0  | −4

[Figure: the boundary lines a = 0, b = 0, c = 0 plotted in the (x, y) plane, with the sample point at x = 0, y = 0]

Sample point

Setting every column variable to 0 gives the sample point:

x1 = 0, x2 = 0, …, xn = 0
y1 = c1, y2 = c2, …, ym = cm

A tableau is feasible if the sample point satisfies all sign constraints. Otherwise, drop a subset of the sign constraints to get a feasible tableau. Then, for each unsatisfied sign constraint:
- Look for a different point satisfying the constraint while preserving the existing constraints
- If such a point is found, add the constraint back
- Otherwise, declare unsatisfiable
If every sign constraint is added back, declare satisfiable.

Pivot operation

Exchange row i and column j:

1. Solve row i for xj:
   yi = ai1 x1 + … + aij xj + … + ain xn + ci
   xj = (−1/aij) (ai1 x1 + … + (−1) yi + … + ain xn + ci)

2. Substitute into each row k ≠ i:
   yk = ak1 x1 + … + akj xj + … + akn xn + ck
   yk = (ak1 − akj ai1/aij) x1 + … + (akj/aij) yi + … + (akn − akj ain/aij) xn + (ck − akj ci/aij)

Before the pivot of row i and column j:

           x1    …    xj    …    xn
y1 ≥ 0 |  a11    …   a1j    …   a1n  | c1
…
yi ≥ 0 |  ai1    …   aij    …   ain  | ci
…
ym ≥ 0 |  am1    …   amj    …   amn  | cm

After the pivot (xj becomes a row variable; yi becomes a column variable and keeps its sign constraint yi ≥ 0):

           x1                  …    yi       …    xn
y1 ≥ 0 |  a11 − a1j ai1/aij    …  a1j/aij    …  a1n − a1j ain/aij  | c1 − a1j ci/aij
…
xj     |  −ai1/aij             …  1/aij      …  −ain/aij           | −ci/aij
…
ym ≥ 0 |  am1 − amj ai1/aij    …  amj/aij    …  amn − amj ain/aij  | cm − amj ci/aij

Observation

A pivot operation preserves the solution set of any tableau.

Example (the tableau from before): drop the sign constraint for c, pivot a with x, then pivot b with y:

           a ≥ 0   b ≥ 0
x     |     1/2     1/2  | −2
y     |    −1/2     1/2  | −1
c     |    −1/2    −1/2  | −2

The new sample point is x = −2, y = −1, c = −2.
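The pivot formulas can be implemented directly with exact rational arithmetic. The sketch below (the representation and names are my own) reproduces the example: pivoting a with x and then b with y yields the fractional entries shown above.

```python
from fractions import Fraction

def pivot(rows, cols, A, const, i, j):
    """Exchange row variable i with column variable j.
    A[r][c] is the coefficient of column c in row r; const[r] is c_r."""
    a = A[i][j]
    newA, newc = {}, {}
    # Row for j (the old column variable): solve row i for j.
    newA[j] = {v: -A[i][v] / a for v in cols if v != j}
    newA[j][i] = 1 / a
    newc[j] = -const[i] / a
    # Substitute into every other row.
    for k in rows:
        if k == i:
            continue
        f = A[k][j]
        newA[k] = {v: A[k][v] - f * A[i][v] / a for v in cols if v != j}
        newA[k][i] = f / a
        newc[k] = const[k] - f * const[i] / a
    new_rows = [j if r == i else r for r in rows]
    new_cols = [i if c == j else c for c in cols]
    return new_rows, new_cols, newA, newc

# a = x - y + 1,  b = x + y + 3,  c = -x - 4
F = Fraction
rows, cols = ['a', 'b', 'c'], ['x', 'y']
A = {'a': {'x': F(1), 'y': F(-1)},
     'b': {'x': F(1), 'y': F(1)},
     'c': {'x': F(-1), 'y': F(0)}}
const = {'a': F(1), 'b': F(3), 'c': F(-4)}

rows, cols, A, const = pivot(rows, cols, A, const, 'a', 'x')
rows, cols, A, const = pivot(rows, cols, A, const, 'b', 'y')
print(A['x'], const['x'])   # x = a/2 + b/2 - 2
```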

[Figure: the same plot with the new sample point x = −2, y = −1, which lies on the lines a = 0 and b = 0]

Manifestly maximized row variable

A row variable is manifestly maximized if every non-zero entry in its row, other than the entry in the constant column, is negative and lies in a column owned by a restricted variable.

In the slide's example tableau (entries not recoverable from the transcript): l is manifestly maximized, so l is constrained to be at most its constant entry; y is not manifestly maximized.

Manifestly unbounded column variable

A column variable is manifestly unbounded if every negative entry in its column lies in a row owned by an unrestricted variable.

In the slide's example tableau (entries not recoverable from the transcript): x is manifestly unbounded, so x can take arbitrarily large values; u is not manifestly unbounded.
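The two tests can be sketched on a hypothetical tableau (the variable names and entries below are invented for illustration, since the slide's examples did not survive):

```python
def manifestly_maximized(A, row, restricted_cols):
    # Every non-zero entry in the row is negative and in a restricted column.
    return all(v <= 0 and (v == 0 or c in restricted_cols)
               for c, v in A[row].items())

def manifestly_unbounded(A, col, restricted_rows):
    # Every negative entry in the column is in an unrestricted row.
    return all(A[r][col] >= 0 or r not in restricted_rows for r in A)

# Hypothetical tableau: rows l and y over restricted columns m, n and
# unrestricted column w.
A = {'l': {'m': -2, 'n': -1, 'w': 0},
     'y': {'m': 3, 'n': -1, 'w': -1}}
print(manifestly_maximized(A, 'l', {'m', 'n'}))   # True
print(manifestly_maximized(A, 'y', {'m', 'n'}))   # False: positive entry

# Hypothetical column test: the only negative entry in column q is in the
# unrestricted row u, so q is manifestly unbounded.
B = {'u': {'q': -1}, 'r': {'q': 2}}
print(manifestly_unbounded(B, 'q', {'r'}))        # True
```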

Observation

Given a feasible tableau T and a variable v, there is a sequence of pivot operations on T leading to a tableau T′ such that either
1. v is manifestly maximized in T′, or
2. v is manifestly unbounded in T′.

Algorithm

1. Create the initial tableau T with only those sign constraints that are satisfied by the sample point of T
2. If every row variable satisfies its sign constraint, return satisfiable
3. Pick a row k, owned by variable y, whose sign constraint is not satisfied by the sample point of T
4. If y is manifestly maximized in T, return unsatisfiable
5. Pick a column j such that akj is positive
6. If every restricted row has a non-negative entry in column j, perform Pivot(k, j); y becomes manifestly unbounded in T, so add the sign constraint for y and go to 2
7. (i, j) = ComputePivot(k)
8. Perform Pivot(T, i, j)
9. If the sample point of T satisfies the sign constraint for y, add the sign constraint for y and go to 2
10. Go to 4

Observation

If a row variable y is not manifestly maximized, then
- either there is a positive entry in some column of its row,
- or there is a negative entry in a column owned by an unrestricted variable.

Algorithm (as before, but with step 5 replaced by 5′ to handle the second case of the observation)

1. Create the initial tableau T with only those sign constraints that are satisfied by the sample point of T
2. If every row variable satisfies its sign constraint, return satisfiable
3. Pick a row k, owned by variable y, whose sign constraint is not satisfied by the sample point of T
4. If y is manifestly maximized in T, return unsatisfiable
5′. Pick a column j such that akj is negative and the variable in column j is unrestricted
6. If every restricted row has a non-positive entry in column j, perform Pivot(k, j); y becomes manifestly unbounded in T, so add the sign constraint for y and go to 2
7. (i, j) = ComputePivot(k)
8. Perform Pivot(T, i, j)
9. If the sample point of T satisfies the sign constraint for y, add the sign constraint for y and go to 2
10. Go to 4

Pratt’s Algorithm for Difference Constraints

Difference constraints

Three different kinds of constraints:

x − y ≤ c
x ≤ c
−y ≤ c

- Very common in program verification
- Satisfiability procedure is more efficient than for general affine constraints
- Satisfiability procedure is complete for the integers

Reduction to a graph problem

Introduce a new variable z to denote the value 0:

x ≤ c    becomes    x − z ≤ c
−y ≤ c   becomes    z − y ≤ c

Variable x                 →  vertex x
Constraint x − y ≤ c       →  edge from y to x with weight c

- Add a new vertex s
- Add an edge with weight 0 from s to every other vertex v

Theorem

The set of constraints is satisfiable iff there is no negative cycle in the graph.

Soundness

If there is a negative cycle in the graph, the set of constraints is unsatisfiable. A negative cycle corresponds to constraints

x1 − x2 ≤ c1
x2 − x3 ≤ c2
…
xn − x1 ≤ cn

Summing them, the left-hand sides cancel, giving 0 ≤ c1 + c2 + … + cn < 0, a contradiction.

Completeness

If there is no negative cycle in the graph, the set of constraints is satisfiable.

Bellman-Ford algorithm

d(s) := 0
for each vertex v ≠ s:
    d(v) := ∞
for each vertex:
    for each edge (u, v):
        if d(v) > d(u) + weight(u, v):
            d(v) := d(u) + weight(u, v)
for each edge (u, v):
    if d(v) > d(u) + weight(u, v):
        report that the graph contains a negative-weight cycle

Completeness

If there is no negative cycle in the graph, then d(v) − d(u) ≤ weight(u, v) for each edge (u, v).

Model: assign to variable x the value d(x) − d(z).
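Putting the reduction and Bellman-Ford together gives a compact decision procedure. In the sketch below (the encoding is my own: constraints are triples (x, y, c) meaning x − y ≤ c, with the distinguished name 'z' standing for the value 0):

```python
import math

def solve_difference_constraints(constraints):
    """constraints: triples (x, y, c) meaning x - y <= c; bounds x <= c and
    -y <= c are encoded as (x, 'z', c) and ('z', y, c), where 'z' denotes 0.
    Returns an integer model {var: value}, or None on a negative cycle."""
    vertices = {v for (x, y, _) in constraints for v in (x, y)} | {'z'}
    edges = [(y, x, c) for (x, y, c) in constraints]   # x - y <= c: y -> x
    edges += [('s', v, 0) for v in vertices]           # fresh source s
    d = {v: math.inf for v in vertices}
    d['s'] = 0
    for _ in range(len(vertices)):                     # |V| - 1 relaxation rounds
        for (u, v, w) in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    if any(d[u] + w < d[v] for (u, v, w) in edges):
        return None                                    # negative cycle: unsat
    return {v: d[v] - d['z'] for v in vertices if v != 'z'}

# x - y <= 2 and y <= -1 (the bound encoded via 'z'): satisfiable
print(solve_difference_constraints([('x', 'y', 2), ('y', 'z', -1)]))
# x - y <= -1 and y - x <= 0: a negative cycle, unsatisfiable
print(solve_difference_constraints([('x', 'y', -1), ('y', 'x', 0)]))  # None
```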