1
Program Verification and Synthesis using Templates over Predicate Abstraction
Saurabh Srivastava#  Sumit Gulwani*  Jeffrey S. Foster#
# University of Maryland, College Park   * Microsoft Research, Redmond
On-leave advisor: Mike Hicks
2
Verification: Proving Programs Correct. Selection Sort: [animation: index i marks the sorted prefix; index j scans the rest of the array to find the minimum]
3
Verification: Proving Programs Correct. Selection Sort:
for i = 0..n { for j = i..n { find min index } if (min != i) swap A[i] with A[min] }
[animation: the prefix up to i is sorted; j scans for the minimum]
4
Verification: Proving Programs Correct. Selection Sort:
for i = 0..n { for j = i..n { find min index } if (min != i) swap A[i] with A[min] }
Prove the following:
– Sortedness: output array is non-decreasing — ∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2]
– Permutation: output array contains all elements of the input — ∀k ∃j : 0≤k<n ⇒ (Aold[k]=A[j] ∧ 0≤j<n); output array contains only elements from the input — ∀k ∃j : 0≤k<n ⇒ (A[k]=Aold[j] ∧ 0≤j<n); the number of copies of each element is the same — ∀n : n ∈ A ⇒ count(A,n) = count(Aold,n)
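The two properties above can be checked mechanically on concrete runs. A minimal executable sketch (not from the talk; the helper name `selection_sort` is ours) that runs selection sort and asserts sortedness and the permutation property on the result:

```python
from collections import Counter

def selection_sort(A):
    """Selection sort; asserts the two correctness properties from the slide."""
    A_old = list(A)
    n = len(A)
    for i in range(n):
        # inner loop: find the index of the minimum of A[i..n-1]
        mn = i
        for j in range(i, n):
            if A[j] < A[mn]:
                mn = j
        if mn != i:
            A[i], A[mn] = A[mn], A[i]
    # Sortedness: forall k, A[k] <= A[k+1]
    assert all(A[k] <= A[k + 1] for k in range(n - 1))
    # Permutation: same multiset of elements as the input
    assert Counter(A) == Counter(A_old)
    return A
```

Of course, asserting the properties on one run is testing, not verification; the talk's point is inferring invariants strong enough to prove them for all inputs.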
5
Verification: Proving Programs Correct. Selection Sort as a control-flow graph: i:=0 → [i<n] → j:=i → [j<n] → find min index (back edge) → if (min != i) swap A[i], A[min] (back edge to outer loop) → output array.
6
Verification: Proving Programs Correct. Selection Sort (same CFG, now annotated with loop invariants I_outer and I_inner): input state = true; output state = Sortedness ∧ Permutation.
We know from work in the '70s (Hoare, Floyd, Dijkstra) that it is easy to reason logically about simple paths. The difficult task is program-state (invariant) inference: I_inner, I_outer, the input state, and the output state.
7
Verification: Proving Programs Correct. The invariants for Selection Sort:
I_outer: ∀k1,k2 : 0≤k1<k2<n ∧ k1<i ⇒ A[k1]≤A[k2]
I_inner: i<j ∧ i≤min<n ∧ (∀k : 0≤k ∧ i≤k<j ⇒ A[min]≤A[k]) ∧ (∀k1,k2 : 0≤k1<k2<n ∧ k1<i ⇒ A[k1]≤A[k2])
Again: simple paths are easy to reason about; the difficult task is inferring I_inner, I_outer, the input state, and the output state.
8
[chart: user input (x-axis, from no input to fully specified invariants) vs. program properties that we can verify/prove (y-axis, up to full functional correctness). Regions of the tradeoff: good / tolerable / bad.] Inferring program state = inferring invariants.
9
[same tradeoff chart, relabeled: proving strong properties from no input would be mind-blowing; demanding fully specified invariants is undesirable; in between, we live with current technology.]
10
[the y-axis refined into levels of invariant expressiveness: quantifier-free facts → quantified facts (∀) → alternating quantification (∀∃) → arbitrary quantification and arbitrary abstraction. x-axis as before: user input from none to fully specified invariants.]
11
[same chart: full generality — arbitrary quantification and arbitrary abstraction from no input — is undecidable.]
12
Where existing work sits on this chart (program properties vs. inference/validation):
Inference —
– Abstract interpretation over quantifier-free domains (Cousot et al.)
– Quantifier-free domains lifted to ∀ (Gulwani et al.)
– Best-effort inference for alternating quantification ∀∃ (Kovács et al.)
– Constraint-based, quantifier-free domains (Gulwani, Beyer et al.)
– Shape analysis (SLAyer et al.)
Validation —
– Validating program paths using non-interactive theorem provers/SMT solvers (HAVOC, Spec#, etc.)
– Validation using interactive theorem proving (Jahob)
13
[same chart] A "magical technique" that covers the whole space with no user input does not exist (✗).
14
[same chart] Our answer: template-based invariant inference, spanning quantified (∀) and alternating-quantified (∀∃) properties with small user input.
15
User Input: template-based invariant inference needs only small, intuitive input — invariant templates and predicates.
My task in this talk: convince you that these small hints go a long way in what we can achieve with verification.
– Invariant templates: if you want to prove sortedness (i.e., each element is smaller than the ones after it), use a ∀ template; if you want to prove permutation (i.e., for each element there exists a matching one), use a ∀∃ template.
– Predicates, for example: take all variables and array elements in the program, and enumerate all pairs related by less-than, greater-than, equality, etc.
16
Key facilitators: (1) Templates
– Intuitive: the form depends on the property being proved and is straightforward.
– Simple: limited to the quantification and disjunction; no complicated error reporting or iterative refinement required. Very different from the full invariants used by some techniques.
– Helps: we don't have to define insane joins etc. Previous approaches take a domain D for the holes and construct D^∀; this approach works with only D.
Catch: holes have different status — they are unknowns with polarities (more on this later).
17
Key facilitators: (2) Predicates
Predicate Abstraction:
– Simple: the small-fact hypothesis — nested deep under the quantifiers, the facts for the holes are simple. E.g., in ∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2], the predicates 0≤k1, k1<k2, k2<n, and A[k1]≤A[k2] are individually very simple.
– Intuitive: only program variables, array elements, and relational operators.
Example: enumerate predicates built from program variables and array elements, arithmetic operators (+, −, …), relational operators (<, ≤, ≥, …), and limited constants. E.g., {i<j, i≤j, i≥j, i<j+1, …}.
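As a rough illustration of this enumeration step (our own sketch, not the VS3 implementation — predicates are plain strings here), one can generate the candidate predicate set from program variables, a few constants, and relational operators:

```python
from itertools import combinations

def enumerate_predicates(variables, constants=(0, 1)):
    """Enumerate simple relational predicates over program variables,
    e.g. 'i < j', 'i <= j+1', ... (illustrative only)."""
    # terms: plain variables plus variables offset by small nonzero constants
    terms = list(variables) + [f"{v}+{c}" for v in variables for c in constants if c]
    rels = ["<", "<=", "=", ">=", ">"]
    preds = set()
    for a, b in combinations(terms, 2):
        for r in rels:
            preds.add(f"{a} {r} {b}")
    return sorted(preds)
```

In the real tool the predicate set is the search space for the template holes, so keeping it small and relational is what makes the search tractable.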
18
VS3: Verification and Synthesis using SMT Solvers.
User input: templates and predicates.
A set of techniques, and a tool implementing them. Infers: loop invariants, weakest preconditions, strongest postconditions.
19
What VS3 will let you do!
A. Infer invariants with arbitrary quantification and boolean structure:
– ∀ : e.g. sortedness (∀k1,k2 over the output array)
– ∀∃ : e.g. permutation (∀k ∃j relating Aold and Anew)
[CFG of selection sort as before]
20
What VS3 will let you do!
B. Infer weakest preconditions: what are the weakest conditions on the input? E.g., for worst-case behavior of selection sort: swap every time it can (the no-swap branch is never taken).
[CFG of selection sort as before]
21
Improves the state of the art. Can infer very expressive invariants:
– Quantification, e.g. ∀k ∃j : k<n ⇒ (A[k]=B[j] ∧ j<n)
– Disjunction, e.g. ∀k : (k ≥ n ∨ A[k]=0)
Can infer weakest preconditions:
– Good for debugging: can discover worst-case inputs
– Good for analysis: can infer missing precondition facts
22
Verification part of the talk:
– Three fixed-point inference algorithms: iterative fixed-point — Least Fixed-Point (LFP) and Greatest Fixed-Point (GFP) — and constraint-based (CFP)
– Weakest precondition generation
– Experimental evaluation
24
Analysis Setup. E.g. Selection Sort, with invariants I1 (outer loop) and I2 (inner loop): loop headers (with invariants) split the program into simple paths. Simple paths induce program constraints (verification conditions): vc(pre,I1), vc(I1,I2), vc(I2,I2), vc(I2,I1), vc(I1,post). E.g.:
true ∧ i=0 ⇒ I1
I1 ∧ i<n ∧ j=i ⇒ I2
I1 ∧ i≥n ⇒ sorted array
25
Analysis Setup (continued). E.g.:
vc(pre,I1): true ∧ i=0 ⇒ I1
vc(I1,I2): I1 ∧ i<n ∧ j=i ⇒ I2
vc(I1,post): I1 ∧ i≥n ⇒ ∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2]
These are simple FOL formulae over I1, I2! We will exploit this.
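To make the shape of these VCs concrete, here is a toy sketch (ours, not the tool's): each VC is an implication over a program state, and "validity" is approximated by checking all states in a small finite domain, standing in for an SMT validity query.

```python
from itertools import product

def holds(vc, domain):
    """Check a verification condition (a boolean function of a state)
    over all states drawn from a small finite domain -- a toy stand-in
    for asking an SMT solver whether the implication is valid."""
    return all(vc(*state) for state in product(domain, repeat=vc.__code__.co_argcount))

# vc(pre, I1): true ∧ i=0 ⇒ I1, with candidate invariant I1 := (i <= n)
vc_entry = lambda i, n: (not (i == 0)) or (i <= n)
# exit fragment: I1 ∧ i >= n ⇒ i == n
vc_exit = lambda i, n: (not (i <= n and i >= n)) or (i == n)
```

Note `vc_entry` is valid only when n is nonnegative — exactly the kind of missing precondition fact the tool later infers.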
26
Iterative Fixed-point: Overview. Maintain a candidate solution — values for the invariants I1, I2 — together with the set of VCs it does not satisfy, e.g. { vc(pre,I1), vc(I1,I2) }; the remaining VCs hold (✓).
27
Iterative Fixed-point: Overview. Maintain a set of candidates. To improve a candidate: pick an unsatisfied VC, use the theorem prover to compute an optimal change; this results in new candidates. Repeat until some candidate satisfies all VCs.
28
Forward Iterative (LFP). Forward iteration: pick the destination invariant of an unsatisfied constraint to improve. Start with ⊥ everywhere. [frame: all invariants ⊥; one VC unsatisfied]
29
Forward Iterative (LFP), next frame: [the first invariant improved to some value a; the rest still ⊥; one VC still unsatisfied]
30
Forward Iterative (LFP), next frame: [both invariants improved, to values a and b]. How do we compute these improvements? The `optimal change' procedure.
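The forward weakening loop can be rendered as a toy sketch (ours; invariants are conjunctions of predicates, and each "VC" here directly reports which predicates it falsifies — in the real algorithm that is what the optimal-change query to the theorem prover computes):

```python
def forward_lfp(preds, vcs):
    """Start from the strongest conjunctive candidate (all predicates,
    i.e. 'bottom') and drop any predicate falsified by an unsatisfied VC,
    until every VC holds. Dropping predicates = weakening."""
    inv = set(preds)
    changed = True
    while changed:
        changed = False
        for vc in vcs:
            bad = vc(inv)          # predicates in inv that this VC falsifies
            if bad:
                inv -= bad
                changed = True
    return inv

# Toy program: x := 0; loop { x := x + 1 }
entry_vc = lambda inv: {p for p in inv if p == "x>=1"}   # x=0 falsifies x>=1
loop_vc  = lambda inv: {p for p in inv if p == "x==0"}   # x:=x+1 breaks x==0
```

The surviving conjunction is the least fixed point reachable within the given predicate set.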
31
Positive and Negative unknowns. Given a formula Φ, e.g.
Φ = ∀x ∃y : (u1 ∨ ¬u2) ∧ u3   (u1, u3 positive; u2, under the negation, negative),
the unknowns in Φ have polarities, positive or negative:
– making the value of a positive unknown stronger makes Φ stronger
– making the value of a negative unknown stronger makes Φ weaker
An optimal solution to Φ: maximally strong positives, maximally weak negatives.
32
Optimal Change. One approach (the old way): start from a domain D of quantifier-free facts and construct the lifted domain D^∀. To push ∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2] across x1:=e1 and obtain ∀k1,k2 : ?? ⇒ ??, under-approximate the left-hand side and over-approximate the right-hand side. Very difficult transfer, join, and widen functions are required for the definition of the domain D^∀.
33
Optimal Change. Our approach (the new way): across x1:=e1, compute ∀k1,k2 : ?? ⇒ ?? from ∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2] directly, without lifting the domain.
34
Optimal Change. Our approach (the new way): for a path with assignments x1:=e1; x2:=e2; x3:=e3 (where ei does not refer to xi), relate ∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2] at the start of the path to the two unknown invariants ∀k1,k2 : ??1 ⇒ ??2 and ∀k1,k2 : ??3 ⇒ ??4 at its end.
35
Optimal Change. Our approach (the new way): treat the assignments as equalities, giving one implication:
(∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2]) ∧ x1=e1 ∧ x2=e2 ∧ x3=e3 ⇒ (∀k1,k2 : ??1 ⇒ ??2) ∧ (∀k1,k2 : ??3 ⇒ ??4)
(valid when ei does not refer to xi).
36
Optimal Change. Our approach (the new way): find optimal solutions to the unknowns ??1, ??2, ??3, ??4 in
(∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2]) ∧ x1=e1 ∧ x2=e2 ∧ x3=e3 ⇒ (∀k1,k2 : ??1 ⇒ ??2) ∧ (∀k1,k2 : ??3 ⇒ ??4)
The unknowns have polarities and therefore obey a monotonicity property. A trivial search would involve k × 2^|Predicates| queries to the SMT solver. Key idea: instantiate some unknowns and query the SMT solver a polynomial number of times. Algorithmic novelty: this polynomial-sized information can be aggregated efficiently to compute the optimal solution.
37
[chart: SMT solver query time, comparing 413 queries vs. 30 queries]
38
Optimal Change (recap): the same constraint and procedure as on the previous slide — polarities give monotonicity, so polynomially many SMT queries suffice instead of k × 2^|Predicates|, and the results are aggregated to compute the optimal solution. Efficient in practice.
39
Backward Iterative: Greatest Fixed-Point (GFP). Backward iteration: same as LFP, except pick the source invariant of an unsatisfied constraint to improve, and start with ⊤.
40
Constraint-based over Predicate Abstraction. [the same CFG, with invariants I1, I2 and VCs vc(pre,I1), vc(I1,I2), vc(I2,I2), vc(I2,I1), vc(I1,post)]
41
Constraint-based over Predicate Abstraction. For each invariant, pred(I1) is its template with unknown holes (unknown1, unknown2); each hole draws from the predicates p1, p2, …, pr via boolean indicator variables b1^1, b2^1, …, br^1 and b1^2, b2^2, …, br^2 (bj^i says whether predicate pj appears in hole i). Remember: the VCs are FOL formulae over I1, I2. If we find assignments bj^i = T/F, then we are done!
42
Constraint-based over Predicate Abstraction. pred(A): maps an invariant A to its unknown predicate indicator variables. Remember: the VCs are FOL formulae over I1, I2.
43
Constraint-based over Predicate Abstraction. Each program constraint is translated into a boolean constraint over the predicate indicators: boolc(pred(I1)), boolc(pred(I1),pred(I2)), boolc(pred(I2)), boolc(pred(I2),pred(I1)), … — SAT formulae over predicate indicators, built by local reasoning. Their conjunction goes to a SAT solver; a satisfying solution yields the invariant solution. The fixed-point computation happens inside the SAT solver.
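A brute-force stand-in for that SAT step (our sketch): boolean indicators choose which predicates appear in the invariant, and we search indicator assignments for one whose induced invariant makes every VC valid over a small finite domain. The real tool performs this search symbolically inside a SAT solver.

```python
from itertools import product

SEM = {"x>=0": lambda x: x >= 0,
       "x>=1": lambda x: x >= 1,
       "x==0": lambda x: x == 0}

def check(inv, states=range(-2, 5)):
    """VCs for the toy program: x := 0; loop { x := x + 1 }, post x >= 0."""
    holds = lambda x: all(SEM[p](x) for p in inv)
    entry = holds(0)                                              # vc(pre, I)
    inductive = all(not holds(x) or holds(x + 1) for x in states) # vc(I, I)
    post = all(not holds(x) or x >= 0 for x in states)            # vc(I, post)
    return entry and inductive and post

def constraint_based_search(preds, check):
    for bits in product([False, True], repeat=len(preds)):  # indicator assignment
        inv = {p for p, b in zip(preds, bits) if b}
        if check(inv):
            return inv
    return None
```

Here the whole fixed point is found in one search over indicator assignments — no iteration over candidates, which is exactly the appeal of the CFP formulation.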
44
Verification part of the talk:
– Three fixed-point inference algorithms: iterative fixed-point — Greatest Fixed-Point (GFP) and Least Fixed-Point (LFP) — and constraint-based (CFP)
– Weakest precondition generation
– Experimental evaluation
45
Computing maximally weak preconditions.
– Iterative: the backwards (GFP) algorithm computes maximally weak invariants at each step; for precondition generation, output the disjunction of the entry fixed-points over all candidates.
– Constraint-based: generate one solution for the precondition; assert a constraint that ensures a strictly weaker precondition; iterate till unsat — the last precondition generated is maximally weak.
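The constraint-based "iterate till unsat" loop can be sketched as follows (our toy version: candidate preconditions are conjunctions of predicates, and "strictly weaker" means strictly more models over a small finite domain):

```python
PRED = {"x>=0": lambda x: x >= 0, "x>=5": lambda x: x >= 5}
DOMAIN = range(-10, 10)

def models(pre):
    """States of the finite domain satisfying the conjunction pre."""
    return {x for x in DOMAIN if all(PRED[p](x) for x in [x] for p in pre)}

def is_valid(pre):
    # the precondition must guarantee the postcondition x >= 0
    return all(x >= 0 for x in models(pre))

def weakest_precondition(candidates):
    """Find any valid precondition, then repeatedly demand a strictly
    weaker valid one; the last one found is maximally weak."""
    current = None
    while True:
        nxt = next((c for c in candidates
                    if is_valid(c) and (current is None or models(c) > models(current))),
                   None)
        if nxt is None:
            return current
        current = nxt
```

Each pass mirrors one solver call with the added "strictly weaker than the last solution" constraint; termination is the unsat answer.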
46
Verification part of the talk (experimental evaluation). Why experiment? The problem space is undecidable — even if the worst case is hopeless, is the average, `difficult' case efficiently solvable? We use SAT/SMT solvers, hoping that our instances will not incur their worst-case exponential solving time.
47
Implementation pipeline: C Program + Templates + Predicate Set → Microsoft's Phoenix Compiler → CFG → Verification Conditions → either Iterative Fixed-point (GFP/LFP, maintaining Candidate Solutions) or Constraint-based Fixed-point (via a Boolean Constraint) → Z3 SMT Solver → Invariants and Preconditions.
48
Verifying Sorting Algorithms. Benchmarks: considered difficult to verify; require invariants with quantifiers — sorting is an ideal set of benchmark programs. 5 major sorting algorithms: Insertion Sort, Bubble Sort (n^2 version and termination-checking version), Selection Sort, Quick Sort, and Merge Sort. Properties:
– Sort elements: ∀k : 0≤k<n−1 ⇒ A[k]≤A[k+1] — output array is non-decreasing
– Maintain permutation, when elements are distinct: ∀k ∃j : 0≤k<n ⇒ (A[k]=Aold[j] ∧ 0≤j<n) — no elements gained; ∀k ∃j : 0≤k<n ⇒ (Aold[k]=A[j] ∧ 0≤j<n) — no elements lost
49
Runtimes: Sortedness. [bar chart, seconds] The tool can prove sortedness for all sorting algorithms!
50
Runtimes: Permutation. [bar chart, seconds; some configurations time out (∞); the longest successful run takes 94.42 seconds] …permutations too!
51
Inferring Preconditions. Given a property (worst-case runtime, or functional correctness), what input is required for the property to hold? The tool automatically infers non-trivial inputs/preconditions.
Worst-case input (precondition) for sorting:
– Selection Sort: a sorted array, except the last element is the smallest
– Insertion Sort, Quick Sort, Bubble Sort (flag version): a reverse-sorted array
Inputs (preconditions) for functional correctness:
– Binary search requires a sorted input array
– Merge (in Merge Sort) requires sorted inputs
– Missing initializers required in various other programs
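The inferred worst case for insertion sort — a reverse-sorted array — is easy to sanity-check concretely (our own sketch, counting element shifts):

```python
def insertion_sort_shifts(A):
    """Insertion sort; returns the number of element shifts performed."""
    A = list(A)
    shifts = 0
    for i in range(1, len(A)):
        key, j = A[i], i - 1
        while j >= 0 and A[j] > key:
            A[j + 1] = A[j]   # shift right to make room for key
            j -= 1
            shifts += 1
        A[j + 1] = key
    return shifts

n = 6
worst = list(range(n, 0, -1))   # reverse-sorted, as inferred by the tool
# a reverse-sorted input forces the maximum of n*(n-1)/2 shifts
```

The tool derives such inputs symbolically, as maximally weak preconditions of the worst-case path; this check only confirms them by execution.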
52
Runtimes (GFP): inferring worst-case inputs for sorting. [bar chart, seconds] The tool infers worst-case inputs for all sorting algorithms! (For some there is nothing to infer, as all inputs yield the same behavior.)
53
User Input revisited: template-based invariant inference needs small, intuitive input — invariant templates and predicates. The remaining manual effort can be automated away, at a cost: enumerating all possible predicate sets puts more burden on the theorem prover, and incrementally increasing template sophistication loses efficiency.
54
Verification using Templates: Summary. Powerful invariant inference over predicate abstraction — can infer quantified invariants. Three algorithms with different strengths: LFP, GFP, CFP. Extends to maximally weak precondition inference: worst-case inputs, and preconditions for functional correctness. The techniques build on SMT solvers, so they exploit their power. Successfully verified, and inferred preconditions for, all major sorting algorithms and other difficult benchmarks.
55
Verification: Summary. [diagram: input state —(weakest precondition inference, GFP)— I_outer, I_inner —(strongest postcondition inference, LFP)— output state]
56
Synthesis: Problem Statement. [the same diagram, but the program between input state and output state is now unknown (??)]
57
Synthesis input: looping structure and resources. [diagram as before, program unknown (??)] Resource limits: stack space; computational limits.
58
Constraint Setup.
Verification: find optimal solutions to ??1, ??2, ??3, ??4 in
(∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2]) ∧ x1=e1 ∧ x2=e2 ∧ x3=e3 ⇒ (∀k1,k2 : ??1 ⇒ ??2) ∧ (∀k1,k2 : ??3 ⇒ ??4)
Synthesis: the statements become unknowns too (stack space and computation limits fix how many there are):
(∀k1,k2 : 0≤k1<k2<n ⇒ A[k1]≤A[k2]) ∧ x1=??5 ∧ x2=??6 ∧ x3=??7 ⇒ (∀k1,k2 : ??1 ⇒ ??2) ∧ (∀k1,k2 : ??3 ⇒ ??4)
and sometimes guards as well: ∀k1,k2 : ??9 ⇒ ??10. Find optimal solutions to ??1…??4, and ??5…??8, and sometimes ??9, ??10.
59
Constraint Solving. Find solutions to the unknowns ??1…??4, ??5…??8, ??9, ??10 in the synthesis constraint. Iterating from the postcondition (GFP) or the precondition (LFP) alone is too inefficient; the constraint-based, SAT-solving fixed point is ideal, as it does a piecewise reduction and uses information from both the backward and forward directions.
60
Example: Strassen's Matrix Multiplication. Recursive block matrix multiplication: (a1 a2; a3 a4) × (b1 b2; b3 b4) = (c1 c2; c3 c4). The trivial solution is O(n^3): it needs 8 block multiplications, hence O(n^lg 8). There exists a way to compute the same result in 7 multiplications, which gives O(n^lg 7) = O(n^2.81).
Looping structure: acyclic. Resources: 7 multiplications, 7 variables to hold the intermediate results.
Strassen's solution: v1=(a1+a4)(b1+b4), v2=(a3+a4)b1, v3=a1(b2−b4), v4=a4(b3−b1), v5=(a1+a2)b4, v6=(a3−a1)(b1+b2), v7=(a2−a4)(b3+b4); c1=v1+v4−v5+v7, c2=v3+v5, c3=v2+v4, c4=v1+v3−v2+v6.
Our tool synthesizes Strassen's solution along with thousands of others: the 7 multiplications decide the exponent, and the constant factor is decided by the number of adds/subtracts. Adding another resource limitation (<19 adds/subtracts), we get around 5–10 solutions. Our tool can synthesize sorting algorithms too!
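Strassen's seven products from the slide can be checked against the naive 2×2 block product (our own check; scalar entries stand in for blocks, which is sufficient since the identities are ring identities):

```python
def strassen_2x2(a, b):
    """Strassen's 7-multiplication scheme for a 2x2 product,
    exactly as on the slide (a = (a1,a2,a3,a4) row-major, likewise b)."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    v1 = (a1 + a4) * (b1 + b4)
    v2 = (a3 + a4) * b1
    v3 = a1 * (b2 - b4)
    v4 = a4 * (b3 - b1)
    v5 = (a1 + a2) * b4
    v6 = (a3 - a1) * (b1 + b2)
    v7 = (a2 - a4) * (b3 + b4)
    return (v1 + v4 - v5 + v7, v3 + v5, v2 + v4, v1 + v3 - v2 + v6)

def naive_2x2(a, b):
    """Textbook 2x2 product with 8 multiplications, for comparison."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 + a2*b3, a1*b2 + a2*b4, a3*b1 + a4*b3, a3*b2 + a4*b4)
```

Applied recursively to blocks, saving one of the eight multiplications per level is what drops the exponent from lg 8 = 3 to lg 7 ≈ 2.81.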
61
Synthesis: Result. If you can give me a verification tool that can handle multiple positive and negative unknowns, and that additionally generates maximally weak solutions to the negative unknowns, then it can be converted into a synthesis tool!
62
Conclusion. Verification: weakest precondition inference (GFP) and strongest postcondition inference (LFP) over templates and predicate abstraction; the same machinery extends to program synthesis.
VS3: http://www.cs.umd.edu/~saurabhs/pacs — Discuss: saurabhs@cs.umd.edu. Catch me after the talk! I'm here till Tuesday afternoon.
64
Outline (backup slides):
– Three fixed-point inference algorithms: iterative fixed-point — Greatest Fixed-Point (GFP) and Least Fixed-Point (LFP) — and constraint-based (CFP)
– Optimal solutions, built over a clean theorem-proving interface
– Weakest precondition generation
– Experimental evaluation
65
Optimal Solutions. Key: the polarity of unknowns in a formula Φ, e.g.
Φ = ∀x ∃y : (u1 ∨ ¬u2) ∧ u3   (u1, u3 positive; u2 negative).
Making the value of a positive unknown stronger makes Φ stronger; making the value of a negative unknown stronger makes Φ weaker. An optimal solution: maximally strong positives, maximally weak negatives. Assume a theorem-prover interface OptNegSol: optimal solutions for a formula with only negative unknowns, built using a lattice search that queries the SMT solver.
66
Optimal Solutions using OptNegSol. Suppose Φ contains positive unknowns u1…uP and negative unknowns u1…uN. For each P-tuple (a1…aP) that assigns a single predicate to each positive unknown, OptNegSol(Φ[a1…aP]) returns solutions S1…SN for the negative unknowns, giving partial solutions (a1…aP, S1…SN). Merge the positive tuples of these partial solutions, and repeat until the set is stable: the result is the set of optimal solutions for Φ. The number of instantiations is P × the size of the predicate set.
67
Iterative Fixed-point. Possible because templates make ∨ explicit, and predicate abstraction limits values to subsets of the predicate set. Algorithm: maintain candidate solutions; pick a candidate and one of its unsatisfied program constraints vc(src,dst); improve the candidate using the unsat constraint, with the theorem prover computing the optimal change; compute new candidates; iterate.
68
Backwards Iterative (GFP). Backward: always pick the source invariant of an unsatisfied constraint to improve; start with ⊤ everywhere. Candidate solutions / unsat constraints: { vc(I1,post) }.
69
Backwards Iterative (GFP), next frame: unsat constraints go from { vc(I1,post) } to { vc(I2,I1) }. Optimally strengthen so that vc(pre,I1) stays satisfied, unless no solution exists.
70
Backwards Iterative (GFP), next frame: unsat constraints { vc(I2,I2) } and { vc(I2,I2), vc(I1,I2) } — multiple orthogonal optimal solutions appear as separate candidates.
71
Backwards Iterative (GFP), next frame: unsat constraints { vc(I2,I2) } and { vc(I1,I2) }.
72
Backwards Iterative (GFP), final frame: one candidate reaches no unsatisfied constraints — a GFP solution.
73
Forward Iterative (LFP). Forward: pick the destination invariant of an unsatisfied constraint to improve; start with ⊥ everywhere. Candidate solutions / unsat constraints: { vc(pre,I1) }.
74
Forward Iterative (LFP), next frame: I1 improved to a1; unsat constraints { vc(I1,I2) }. Optimally weakened so that vc(I1,post) stays satisfied, unless no solution exists.
75
Forward Iterative (LFP), next frame: I2 improved to b3; unsat constraints { vc(I2,I2) }, { vc(I2,I1) }, and { vc(I2,I2), vc(I2,I1) } — multiple orthogonal optimal solutions.
76
Forward Iterative (LFP), next frame: I2 improved to b'3; remaining unsat constraints { vc(I2,I1) }; some candidates are subsumed by others.
77
Forward Iterative (LFP), final frame: I1 = a'1, I2 = b'3; no unsatisfied constraints remain — an LFP solution.