
1 Rahul Sharma (Stanford), Aditya V. Nori (MSR India), Alex Aiken (Stanford)

2  int i = 1, j = 0;
   while (i <= 5) {
     j = j + i;
     i = i + 1;
   }
   Increasing precision
   D. Monniaux and J. Le Guen. Stratified static analysis based on variable dependencies. Electr. Notes Theor. Comput. Sci., 2012.
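   For reference, the concrete behavior of this loop (plain arithmetic, not from the slides): on exit, i = 6 and j = 1 + 2 + 3 + 4 + 5 = 15, so the most precise exit invariant is i = 6 and j = 15. An analysis sees only finitely many such facts and must generalize from them.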

3  A. V. Nori and S. K. Rajamani. An empirical study of optimizations in YOGI. ICSE 2010.

4  - Increased precision is causing worse results
   - Programs have unbounded behaviors
   - Program analysis must:
     - analyze all behaviors
     - run for a finite time
   - In finite time, we observe only finitely many behaviors
   - Hence the need to generalize

5  - Generalization is ubiquitous
   - Abstract interpretation: widening (sketched below)
   - CEGAR: interpolants
   - Parameter tuning of tools
   - A lot of folk knowledge, heuristics, ...
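   To make the widening bullet concrete, here is a minimal Python sketch of the classic interval widening operator applied to the variable i from slide 2 (an illustration in standard notation, not code from the talk):

     NEG_INF, POS_INF = float("-inf"), float("inf")

     def widen(old, new):
         # Any bound still growing after an iteration jumps to infinity,
         # forcing the abstract iteration to terminate.
         lo = old[0] if new[0] >= old[0] else NEG_INF
         hi = old[1] if new[1] <= old[1] else POS_INF
         return (lo, hi)

     # Abstract iteration for i (i = 1; then i = i + 1 each time around):
     iv = (1, 1)
     while True:
         post = (iv[0] + 1, iv[1] + 1)                        # effect of i = i + 1
         joined = (min(iv[0], post[0]), max(iv[1], post[1]))  # join with old interval
         nxt = widen(iv, joined)
         if nxt == iv:
             break
         iv = nxt
     print(iv)  # (1, inf): terminates and is sound, but loses the bound i <= 6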

6  - "It's all about generalization"
   - Learn a function from observations
   - Hope that the function generalizes
   - Work on formalizing generalization

7  - Model the generalization process
   - The Probably Approximately Correct (PAC) model (stated below)
   - Explain known observations with this model
   - Use this model to obtain better tools
   http://politicalcalculations.blogspot.com/2010/02/how-science-is-supposed-to-work.html
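   For reference, one standard statement of the (agnostic) PAC guarantee, in the usual notation (my rendering, not the slide's): given a sample S of n i.i.d. observations from distribution D, the learned hypothesis h_S satisfies

     \Pr_{S \sim D^n}\big[\,\mathrm{err}_D(h_S) \le \min_{h \in H} \mathrm{err}_D(h) + \varepsilon\,\big] \ge 1 - \delta

   for n polynomial in 1/\varepsilon and 1/\delta.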

8  Interpolants as classifiers
   Rahul Sharma, Aditya V. Nori, Alex Aiken: Interpolants as Classifiers. CAV 2012.
   [Figure: + and - labeled points in the plane]
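   A minimal Python sketch of the paper's idea (the sample points and the scikit-learn dependency are my illustration, not the authors' implementation): label states reachable from the program entry +, states that can reach an error -, and use a linear classifier to find a separating halfspace, which becomes a candidate interpolant; its soundness must still be checked by a theorem prover, as in the paper.

     from sklearn.svm import LinearSVC

     pos = [(1, 2), (2, 3), (3, 5)]    # sampled reachable states (x, y)
     neg = [(4, 1), (5, 2), (6, 1)]    # sampled error-reaching states
     X = pos + neg
     y = [1] * len(pos) + [-1] * len(neg)

     clf = LinearSVC(C=1e6).fit(X, y)  # large C: insist on separating the samples
     w, b = clf.coef_[0], clf.intercept_[0]
     # Candidate interpolant: w[0]*x + w[1]*y + b >= 0 holds on the + samples
     # and fails on the - samples.
     print(f"{w[0]:.2f}*x + {w[1]:.2f}*y + {b:.2f} >= 0")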

9  [Figure: + and - labeled points in the plane with a candidate classifier c]

10 [Figure: + and - labeled points in the plane]

11 [Figure: + and - labeled points in the plane with classifier c]

12 [Figure]

13 H: a hypothesis class expressive enough to realize any arbitrary labeling
   [Figure: the points under one arbitrary +/- labeling]
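   The standard definition behind this slide (textbook learning theory, not the slide's wording): H shatters a finite set S if every labeling of S is realized by some hypothesis in H, and the VC dimension is

     \mathrm{VC}(H) = \max\{\, |S| : H \text{ shatters } S \,\}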

14 [Figure: all possible +/- labelings of a small set of points]

15 [Figure: three fits of data Y against X]
   Precision is low: underfitting
   Precision is high: overfitting
   In between: a good fit
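   A minimal Python sketch of this picture (my illustration): fit polynomials of increasing degree, playing the role of precision, to noisy samples of a sine curve and measure error against the true curve.

     import numpy as np

     rng = np.random.default_rng(0)
     x = np.linspace(0, 1, 10)
     y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)

     x_test = np.linspace(0, 1, 100)
     truth = np.sin(2 * np.pi * x_test)
     for degree in (1, 3, 9):                 # low, medium, high "precision"
         coeffs = np.polyfit(x, y, degree)
         err = np.mean((np.polyval(coeffs, x_test) - truth) ** 2)
         # Degree 1 underfits, degree 9 typically overfits the noise,
         # degree 3 is a good fit.
         print(degree, round(float(err), 4))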

16 - Generalization error is bounded by the sum of:
     - Bias: empirical error of the best available hypothesis
     - Variance: O(VC dimension)
   [Figure: as precision increases over the space of possible hypotheses, bias falls and variance rises; generalization error is their sum]
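   One standard form of the bound this slide summarizes, for n samples and up to constants and the confidence term (my rendering):

     \mathrm{err}_D(h_S) \;\le\; \underbrace{\min_{h \in H} \mathrm{err}_D(h)}_{\text{bias}} \;+\; \underbrace{O\!\left(\sqrt{\mathrm{VC}(H)\,\log n \,/\, n}\right)}_{\text{variance}}

   Increasing precision enlarges H, which shrinks the bias term but grows the VC dimension and hence the variance term.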

17 int i = 1, j = 0;
   while (i <= 5) {
     j = j + i;
     i = i + 1;
   }

18 - What goes wrong with excess precision?
   - Fit polyhedra to program behaviors
   - Transfer functions, join, widening
   - Too many candidate polyhedra, so the analysis can make a wrong choice (see the example below)
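   A tiny illustration of "too many choices" (my example, not the slide's): if the analysis has observed only the states (x, y) = (1, 1) and (2, 2), then y = x, y <= x, x <= y + 1, and true are all sound polyhedral generalizations of the observations; a more precise domain offers more candidates to choose among, and committing to the wrong one early can derail the rest of the analysis.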

19 J. Henry, D. Monniaux, and M. Moy. PAGAI: A path sensitive static analyser. Electr. Notes Theor. Comput. Sci., 2012.

20 A. V. Nori and S. K. Rajamani. An empirical study of optimizations in YOGI. ICSE 2010.

21 - Parameter tuning of program analyses
   - Overfitting? Generalization to new tasks?
   P. Godefroid, A. V. Nori, S. K. Rajamani, and S. Tetali. Compositional may-must program analysis: unleashing the power of alternation. POPL 2010.
   [Diagram: the full benchmark set (2490 verification tasks) is used for training]

22 How to set the test length in Yogi (sketched below):
   [Diagram: the benchmark set (2490 verification tasks) is split into a training set (1743) and a test set (747); tune on the former, evaluate on the latter]
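   A minimal Python sketch of this methodology (the split sizes come from the slide; run_yogi is a hypothetical stand-in with simulated runtimes, since the real tool is not invoked here):

     import random

     tasks = list(range(2490))                  # 2490 verification tasks
     random.seed(0)
     random.shuffle(tasks)
     train, test = tasks[:1743], tasks[1743:]   # 1743 training / 747 test

     def run_yogi(task, test_length):
         # Hypothetical stand-in: simulated runtime of Yogi on `task`
         # for a given test-length parameter.
         return 1.0 + abs(test_length - 400) * 0.001 + (task % 7) * 0.1

     def total_time(task_set, test_length):
         return sum(run_yogi(t, test_length) for t in task_set)

     # Tune the parameter on the training set only...
     best = min((100, 200, 350, 500, 1000), key=lambda n: total_time(train, n))
     # ...then report its performance on the held-out test set.
     print(best, round(total_time(test, best), 1))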

23 [Figure: training results for candidate test lengths; the values 350 and 500 are marked]

24 - On 2106 new verification tasks: a 40% performance improvement!
   - Yogi in production suffers from overfitting

25 - Keep separate training and test sets
   - Tool design is governed by the training set
   - The test set serves as a check
   - SV-COMP: all benchmarks are public
   - Test tools on some new benchmarks too

26 R. Jhala and K. L. McMillan. A practical and complete approach to predicate refinement. TACAS 2006.
   - Suggests incrementally increasing precision (sketched below)
   - Find a sweet spot where the generalization error is low
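   A minimal sketch of that strategy in Python (my paraphrase; verify and the precision levels are hypothetical placeholders):

     def verify_incrementally(program, verify,
                              precisions=("intervals", "octagons", "polyhedra")):
         # Try cheap, coarse analyses first; escalate precision only when
         # the current level cannot decide the property.
         for p in precisions:
             result = verify(program, p)        # hypothetical verifier call
             if result in ("proved", "refuted"):
                 return result, p
         return "unknown", precisions[-1]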

27 [Figure]

28 - No generalization -> no bias-variance tradeoff:
   - Certain classes of type inference
   - Abstract interpretation without widening
   - Loop-free and recursion-free programs
   - Verifying one particular program (e.g., seL4): overfitting to the one important program is fine

29 - A model to understand generalization: bias-variance tradeoffs
   - These tradeoffs do occur in program analysis
   - Understanding these tradeoffs leads to better tools

