Noam Rinetzky Lecture 13: Numerical, Pointer & Shape Domains


Program Analysis and Verification 0368-4479, http://www.cs.tau.ac.il/~maon/teaching/2013-2014/paav/paav1314b.html. Noam Rinetzky, Lecture 13: Numerical, Pointer & Shape Domains. Slides credit: Roman Manevich, Mooly Sagiv, Eran Yahav, Ganesan Ramalingam

Abstract Interpretation [Cousot’77]: the mathematical foundation of static analysis. Abstract domains (abstract states), join (⊔), transformer functions (abstract steps), and chaotic iteration (abstract computation), built on structured programs, lattices (D, ⊑, ⊔, ⊓, ⊥, ⊤), monotone functions, and fixpoints

Abstract (conservative) interpretation. [Diagram: an abstract representation is transformed by the abstract semantics of statement S into a new abstract representation; concretization maps each abstract representation to a set of states, and the corresponding sets of states are related by the operational (concrete) semantics of statement S]

The collecting lattice. Lattice for a given control-flow node v: Lv = (2^State, ⊆, ∪, ∩, ∅, State). Lattice for the entire control-flow graph with nodes V: LCFG = Map(V, Lv). We will use this lattice as a baseline for static analysis and define abstractions of its elements

Equational definition of the semantics: R[2] = R[entry] ∪ ⟦x := x - 1⟧ R[3]; R[3] = R[2] ∩ {s | s(x) > 0}; R[exit] = R[2] ∩ {s | s(x) ≤ 0}. A system of recursive equations. Solution: the concrete meaning of the program. [CFG: entry → node 2; node 2: if x > 0 then node 3 else exit; node 3: x := x - 1, back to node 2]

An abstract semantics: R[2] = R[entry] ⊔ ⟦x := x - 1⟧# R[3] (abstract transformer for x := x - 1); R[3] = R[2] ⊓ ⟦{s | s(x) > 0}⟧#; R[exit] = R[2] ⊓ ⟦{s | s(x) ≤ 0}⟧#. A system of recursive equations. Solution: the abstract meaning of the program, an over-approximation of the concrete semantics. Abstract representation of {s | s(x) < 0}. [CFG: entry → node 2; node 2: if x > 0 then node 3 else exit; node 3: x := x - 1, back to node 2]

Galois Connection: c ⊑ γ(α(c)). α(c) is the most precise (least) element in A representing c. [Diagram: C and A; (1) c, (2) α(c), (3) γ(α(c)), with c ⊑ γ(α(c))]

Monotone functions. Let L1 = (D1, ⊑) and L2 = (D2, ⊑) be two posets. A function f : D1 → D2 is monotone if for every pair x, y ∈ D1, x ⊑ y implies f(x) ⊑ f(y). A special case: L1 = L2 = (D, ⊑), f : D → D

Fixed-points. L = (D, ⊑, ⊔, ⊓, ⊥, ⊤), f : D → D monotone. Fix(f) = { d | f(d) = d }, Red(f) = { d | f(d) ⊑ d }, Ext(f) = { d | d ⊑ f(d) }. Theorem [Tarski 1955]: lfp(f) = ⊓Fix(f) = ⊓Red(f) ∈ Fix(f) and gfp(f) = ⊔Fix(f) = ⊔Ext(f) ∈ Fix(f). A solution always exists and is unique, but it is not always computable. [Diagram: the lattice with Ext(f), Fix(f), Red(f); the iterates fⁿ(⊥) ascend toward lfp and the iterates fⁿ(⊤) descend toward gfp]

Continuity and the ACC condition. Let L = (D, ⊑, ⊔, ⊥) be a complete partial order: every ascending chain has a least upper bound. A function f is continuous if for every increasing chain Y ⊆ D, f(⊔Y) = ⊔{ f(y) | y ∈ Y }. L satisfies the ascending chain condition (ACC) if every ascending chain eventually stabilizes: d0 ⊑ d1 ⊑ … ⊑ dn = dn+1 = …

Fixed-point theorem [Kleene]. Let L = (D, ⊑, ⊔, ⊥) be a complete partial order and f : D → D a continuous function; then lfp(f) = ⊔n∈ℕ fⁿ(⊥). Lemma: monotone functions on posets satisfying the ACC are continuous

Resulting algorithm. Kleene’s fixed-point theorem gives a constructive method for computing the lfp. Mathematical definition: lfp(f) = ⊔n∈ℕ fⁿ(⊥). Algorithm: d := ⊥; while f(d) ≠ d do d := d ⊔ f(d); return d. [Diagram: the chain ⊥ ⊑ f(⊥) ⊑ f²(⊥) ⊑ … ascends to lfp]
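
A minimal sketch of this iteration in Python (illustrative names; the lattice is supplied as a bottom element, a join, and an order test, and termination assumes the ACC):

    def kleene_lfp(bottom, join, leq, f):
        """Kleene iteration: d := bottom; repeat d := d join f(d) until f adds nothing new."""
        d = bottom
        while True:
            nxt = join(d, f(d))
            if leq(nxt, d):        # f(d) is below d: d is a fixed point
                return d
            d = nxt

    # Example over the powerset of {0,...,3}: lfp of f(S) = {0} union {x+1 | x in S, x < 3}
    lfp = kleene_lfp(frozenset(),
                     lambda a, b: a | b,
                     lambda a, b: a <= b,
                     lambda S: frozenset({0}) | frozenset(x + 1 for x in S if x < 3))
    # lfp == frozenset({0, 1, 2, 3})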

Sound abstract transformer. Given two lattices C = (DC, ⊑C, ⊔C, ⊓C, ⊥C, ⊤C) and A = (DA, ⊑A, ⊔A, ⊓A, ⊥A, ⊤A), a Galois connection GC(C,A) = (C, α, γ, A), a concrete transformer f : DC → DC, and an abstract transformer f# : DA → DA, we say that f# is a sound transformer (w.r.t. f) if ∀c: α(f(c)) ⊑A f#(α(c)) and ∀a: α(f(γ(a))) ⊑A f#(a)

Soundness. Given two complete lattices C = (DC, ⊑C, ⊔C, ⊓C, ⊥C, ⊤C) and A = (DA, ⊑A, ⊔A, ⊓A, ⊥A, ⊤A), a Galois connection GC(C,A) = (C, α, γ, A), a monotone concrete transformer f : DC → DC, and a monotone abstract transformer f# : DA → DA, assume that either ∀a ∈ DA: f(γ(a)) ⊑ γ(f#(a)) or ∀c ∈ DC: α(f(c)) ⊑ f#(α(c)). Then lfp(f) ⊑ γ(lfp(f#)) and α(lfp(f)) ⊑ lfp(f#)

Best (induced) transformer [CC’77]: f#(a) = α(f(γ(a))). [Diagram: C and A; (1) γ(a), (2) apply f, (3) apply α, (4) the induced f#]. Problem: α ∘ f ∘ γ is generally not computable directly

Fixed-point theorem [Kleene]. Let L = (D, ⊑, ⊔, ⊥) be a complete partial order and f : D → D a continuous function; then lfp(f) = ⊔n∈ℕ fⁿ(⊥). Lemma: monotone functions on posets satisfying the ACC are continuous. What if the ACC does not hold?

Intervals lattice for variable x. [Hasse diagram: ⊥ at the bottom; above it the singletons [-2,-2], [-1,-1], [0,0], [1,1], [2,2], …; above them [-2,-1], [-1,0], [0,1], [1,2], [2,3], …; wider intervals such as [-10,10] and [-20,10]; half-bounded intervals [-∞,-1], [-∞,0], [0,+∞], [1,+∞], [2,+∞], …; and [-∞,+∞] at the top]

Intervals lattice for variable x. Dint[x] = { (L,H) | L ∈ {-∞} ∪ ℤ, H ∈ ℤ ∪ {+∞}, and L ≤ H }. ⊤ = [-∞,+∞]; ⊥ = ? Is [1,2] ⊑ [3,4]? Is [1,4] ⊑ [1,3]? Is [1,3] ⊑ [1,4]? Is [1,3] ⊑ [-∞,+∞]? What is the lattice height?

Joining/meeting intervals. [a,b] ⊔ [c,d] = [min(a,c), max(b,d)]; e.g. [1,1] ⊔ [2,2] = [1,2] and [1,1] ⊔ [2,+∞] = [1,+∞]. [a,b] ⊓ [c,d] = [max(a,c), min(b,d)] if this is a proper interval, and ⊥ otherwise; e.g. [1,2] ⊓ [3,4] = ⊥, [1,4] ⊓ [3,4] = [3,4], [1,1] ⊓ [1,+∞] = [1,1]. Check that indeed x ⊑ y if and only if x ⊔ y = y
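
A minimal sketch of these operations (illustrative Python encoding: an interval is a pair (lo, hi) with float infinities for ±∞, and None stands for ⊥):

    BOT = None  # the empty interval (bottom)

    def join(i1, i2):
        """[a,b] join [c,d] = [min(a,c), max(b,d)]; bottom is the neutral element."""
        if i1 is BOT: return i2
        if i2 is BOT: return i1
        (a, b), (c, d) = i1, i2
        return (min(a, c), max(b, d))

    def meet(i1, i2):
        """[a,b] meet [c,d] = [max(a,c), min(b,d)] if proper, bottom otherwise."""
        if i1 is BOT or i2 is BOT: return BOT
        (a, b), (c, d) = i1, i2
        lo, hi = max(a, c), min(b, d)
        return (lo, hi) if lo <= hi else BOT

    def leq(i1, i2):
        """x is below y iff x join y = y (interval inclusion)."""
        return join(i1, i2) == i2

    # join((1, 1), (2, 2)) == (1, 2); meet((1, 2), (3, 4)) is BOT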

Interval domain for programs. Dint[x] = { (L,H) | L ∈ {-∞} ∪ ℤ, H ∈ ℤ ∪ {+∞}, and L ≤ H }. For a program with variables Var = {x1,…,xk}: Dint[Var] = Dint[x1] × … × Dint[xk]. How can we represent it in terms of formulas? Two types of factoids: x ≥ c and x ≤ c. Example: S = {x ≥ 9, y ≥ 5, y ≤ 10}. Helper operations: c + (+∞) = +∞; remove(S, x) = S without any x-constraints; lb(S, x) = k and ub(S, x) = m if S entails k ≤ x ≤ m

Assignment transformers.
⟦x := c⟧# S = remove(S,x) ∪ {x ≥ c, x ≤ c}
⟦x := y⟧# S = remove(S,x) ∪ {x ≥ lb(S,y), x ≤ ub(S,y)}
⟦x := y+c⟧# S = remove(S,x) ∪ {x ≥ lb(S,y)+c, x ≤ ub(S,y)+c}
⟦x := y+z⟧# S = remove(S,x) ∪ {x ≥ lb(S,y)+lb(S,z), x ≤ ub(S,y)+ub(S,z)}
⟦x := y*c⟧# S = remove(S,x) ∪ (if c > 0 then {x ≥ lb(S,y)*c, x ≤ ub(S,y)*c} else {x ≥ ub(S,y)*c, x ≤ lb(S,y)*c})
⟦x := y*z⟧# S = remove(S,x) ∪ ?
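
A minimal sketch of two of these transformers over an interval environment (illustrative: env maps each variable to a (lo, hi) pair as in the sketch above):

    def assign_add_const(env, x, y, c):
        """Abstract transformer for x := y + c: shift y's bounds by c."""
        lo, hi = env[y]
        new = dict(env)            # remove(S, x): drop the old constraints on x
        new[x] = (lo + c, hi + c)
        return new

    def assign_add(env, x, y, z):
        """Abstract transformer for x := y + z: add lower bounds and upper bounds."""
        (ly, hy), (lz, hz) = env[y], env[z]
        new = dict(env)
        new[x] = (ly + lz, hy + hz)
        return new

    # assign_add_const({'y': (1, 5)}, 'x', 'y', 2) == {'y': (1, 5), 'x': (3, 7)}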

Assume transformers.
⟦assume x = c⟧# S = S ⊓ {x ≥ c, x ≤ c}
⟦assume x = y⟧# S = S ⊓ {x ≥ lb(S,y), x ≤ ub(S,y)}
⟦assume x ≠ c⟧# S = (S ⊓ {x ≤ c-1}) ⊔ (S ⊓ {x ≥ c+1})

Too many iterations to converge

Analysis with widening. [Diagram: the widened iteration sequence ⊥, f#(⊥), f#²(⊥), f#²(⊥) ∇ f#³(⊥), … overshoots lfp(f#), landing at a post-fixed point in Red(f), above Fix(f)]

Analysis with narrowing. [Diagram: starting from the widened post-fixed point in Red(f), narrowing iterations descend back toward lfp(f#) while remaining sound]
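
A minimal sketch of the standard interval widening and narrowing operators (same illustrative encoding as above; the precise operators used in the lecture may differ):

    def widen(i1, i2):
        """Interval widening: a bound that grows jumps straight to +/- infinity, so chains stabilize."""
        if i1 is None: return i2
        if i2 is None: return i1
        (a, b), (c, d) = i1, i2
        lo = a if a <= c else float('-inf')
        hi = b if b >= d else float('inf')
        return (lo, hi)

    def narrow(i1, i2):
        """Interval narrowing: refine only the bounds that widening pushed to +/- infinity."""
        if i1 is None or i2 is None: return None
        (a, b), (c, d) = i1, i2
        lo = c if a == float('-inf') else a
        hi = d if b == float('inf') else b
        return (lo, hi)

    # widen((0, 1), (0, 2)) == (0, float('inf')); narrow((0, float('inf')), (0, 10)) == (0, 10)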

Overview Goal: infer numeric properties of program variables (integers, floating point) Applications Detect division by zero, overflow, out-of-bound array access Help non-numerical domains Classification Non-relational (Weakly-)relational Equalities / Inequalities Linear / non-linear Exotic

Implementation

Non-relational abstractions Abstract each variable individually Constant propagation [Kildall’73] Sign Parity (congruences) Intervals (Box)

Sign abstraction for variable x. Concrete lattice: C = (2^State, ⊆, ∪, ∩, ∅, State). Sign = {⊥, neg, 0, pos, ⊤}. GC(C,Sign) = (C, α, γ, Sign). γ(⊥) = ? γ(neg) = ? γ(0) = ? γ(pos) = ? γ(⊤) = ? How can we represent 0? [Hasse diagram: ⊥ below neg, 0, and pos, which are below ⊤]

Transformer for x := y + z over the sign domain. [Table: neg + neg = neg, pos + pos = pos, 0 is neutral, neg + pos = ⊤; ⊥ is absorbing and ⊤ propagates]
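
A minimal sketch of this transformer (illustrative encoding of the abstract values as strings):

    def sign_add(a, b):
        """Abstract addition over Sign = {'BOT', 'NEG', 'ZERO', 'POS', 'TOP'}."""
        if a == 'BOT' or b == 'BOT':
            return 'BOT'
        if a == 'ZERO':
            return b
        if b == 'ZERO':
            return a
        if a == b and a in ('NEG', 'POS'):
            return a              # neg + neg = neg, pos + pos = pos
        return 'TOP'              # opposite signs, or either operand is TOP

    def transform_add(env, x, y, z):
        """Abstract transformer for x := y + z over a sign environment."""
        new = dict(env)
        new[x] = sign_add(env[y], env[z])
        return new

    # transform_add({'y': 'POS', 'z': 'POS'}, 'x', 'y', 'z')['x'] == 'POS'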

Parity abstraction for variable x. Concrete lattice: C = (2^State, ⊆, ∪, ∩, ∅, State). Parity = {⊥, E, O, ⊤}. GC(C,Parity) = (C, α, γ, Parity). γ(⊥) = ? γ(E) = ? γ(O) = ? γ(⊤) = ? [Hasse diagram: ⊥ below E and O, which are below ⊤]

Transformer for x := y + z over the parity domain. [Table: E + E = E, O + O = E, E + O = O; ⊥ is absorbing and ⊤ propagates]

Boxes (intervals). [Plot: the box x ∈ [1,4], y ∈ [3,6] drawn in the x-y plane]

Non-relational abstractions cannot prove properties that relate several variables simultaneously, e.g. x = 2*y or x ≤ y

Zone abstraction [Miné]. Maintain bounded differences between pairs of program variables (useful for tracking array accesses). An abstract state is a conjunction of linear inequalities of the form x − y ≤ c. [Plot: the zone x ≤ 4, −x ≤ −1, y ≤ 3, −y ≤ −1, x − y ≤ 1 drawn in the x-y plane]

Difference bound matrices. Add a special variable V0 for the number 0. Represent non-existent relations between variables by +∞ entries. Convenient for defining the partial order ⊑ between two abstract elements. [Matrix over {V0, x, y} encoding x ≤ 4, −x ≤ −1, y ≤ 3, −y ≤ −1, x − y ≤ 1]
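
A minimal sketch of this representation (an illustrative convention: entry m[i][j] bounds Vi − Vj ≤ m[i][j], math.inf marks a missing constraint, and on closed matrices the pointwise order decides inclusion; the slide's matrix layout may differ):

    import math

    INF = math.inf

    # Index 0 is V0 (the constant 0), 1 is x, 2 is y.
    # x <= 4, -x <= -1, y <= 3, -y <= -1, x - y <= 1:
    M1 = [[0,  -1,  -1],   # V0 - x <= -1, V0 - y <= -1
          [4,   0,   1],   # x - V0 <= 4,  x - y <= 1
          [3,  INF,  0]]   # y - V0 <= 3,  y - x unconstrained

    def dbm_leq(m1, m2):
        """m1 below m2: every bound in m1 is at least as tight as the one in m2."""
        n = len(m1)
        return all(m1[i][j] <= m2[i][j] for i in range(n) for j in range(n))

    def dbm_join(m1, m2):
        """Pointwise maximum of bounds: the least DBM weaker than both arguments."""
        n = len(m1)
        return [[max(m1[i][j], m2[i][j]) for j in range(n)] for i in range(n)]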


Ordering DBMs. How should we order M1 ⊑ M2? M1 encodes x ≤ 4, −x ≤ −1, y ≤ 3, −y ≤ −1; M2 encodes x ≤ 5, −x ≤ −1, y ≤ 3, x − y ≤ 1. [Matrices over {V0, x, y}]

Joining DBMs. How should we join M1 ⊔ M2? M1 encodes x ≤ 4, −x ≤ −1, y ≤ 3, −y ≤ −1; M2 encodes x ≤ 2, −x ≤ −1, y ≤ 0, x − y ≤ 1. [Matrices over {V0, x, y}]

Widening DBMs. How should we widen M1 ∇ M2? M1 encodes x ≤ 4, −x ≤ −1, y ≤ 3, −y ≤ −1; M2 encodes x ≤ 5, −x ≤ −1, y ≤ 3, x − y ≤ 1. [Matrices over {V0, x, y}]

Potential graph: a vertex per variable and a directed edge per inequality, weighted by its bound. Enables computing a semantic reduction via shortest-path algorithms. [Graph over V0, x, y for x ≤ 4, −x ≤ −1, y ≤ 3, −y ≤ −1, x − y ≤ 1]. Can we tell whether a system of constraints is satisfiable? Yes: it is unsatisfiable exactly when the potential graph contains a negative-weight cycle

Semantic reduction for zones. Apply the following rule repeatedly: from x − y ≤ c and y − z ≤ d derive x − z ≤ c + d. When should we stop? Theorem 3.3.4 (best abstraction of potential sets and zones): m* = (α_Pot ∘ γ_Pot)(m). A word of caution: do not apply widening on top of semantic reduction (see 3.7.2)
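
A minimal sketch of this reduction as an all-pairs shortest-path closure over the DBM representation used above (illustrative; a negative diagonal entry signals an unsatisfiable system):

    def close_dbm(m):
        """Tighten every entry by x - z <= (x - y) + (y - z), Floyd-Warshall style.
        Returns None if the constraints are unsatisfiable (negative cycle)."""
        n = len(m)
        m = [row[:] for row in m]              # work on a copy
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    m[i][j] = min(m[i][j], m[i][k] + m[k][j])
        for i in range(n):
            if m[i][i] < 0:
                return None                    # empty concretization
            m[i][i] = 0
        return m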

Octagon abstraction [Miné ’01]. An abstract state is an intersection of linear inequalities of the form ±x ± y ≤ c; captures relationships common in programs (array accesses)

Some inequality-based relational domains; policy iteration. [Overview table of domains omitted]

Polyhedral Abstraction. An abstract state is an intersection of linear inequalities of the form a1x1 + a2x2 + … + anxn ≤ c; represent a set of points by their convex hull (image from http://www.cs.sunysb.edu/~algorith/files/convex-hull.shtml)

Operations on Polyhedra

Equality-based domains. Simple congruences [Granger’89]: y = a mod k. Linear relations: y = a*x + b (the join operator is a little tricky). Linear equalities [Karr’76]: a1*x1 + … + ak*xk = c. Polynomial equalities: a1*x1^d1*…*xk^dk + b1*y1^z1*…*yk^zk + … = c; some good results are obtainable when d1 + … + dk < n for some small n

Pointer Analysis

Constant propagation example y = 4; z = x + 5;

Constant propagation example with pointers z = x + 5; Is x always 3 here?

Constant propagation example with pointers: pointers affect most program analyses. (a) p = &y; x = 3; *p = 4; z = x + 5; here x is always 3. (b) p = &x; x = 3; *p = 4; z = x + 5; here x is always 4. (c) if (?) p = &x; else p = &y; x = 3; *p = 4; z = x + 5; here x may be 3 or 4 (i.e., x is unknown in our lattice)

Constant propagation example with pointers. (a) p = &y; x = 3; *p = 4; z = x + 5; p always points-to y. (b) p = &x; x = 3; *p = 4; z = x + 5; p always points-to x. (c) if (?) p = &x; else p = &y; x = 3; *p = 4; z = x + 5; p may point-to x or y

Points-to Analysis Determine the set of targets a pointer variable could point-to (at different points in the program) “p points-to x” “p stores the value &x” “*p denotes the location x” targets could be variables or locations in the heap (dynamic memory allocation) p = &x; p = new Foo(); or p = malloc (…); must-point-to vs. may-point-to

Constant propagation example with pointers Can *p denote the same location as *q? *q = 3; *p = 4; z = *q + 5; what values can this take?

More terminology *p and *q are said to be aliases (in a given concrete state) if they represent the same location Alias analysis Determine if a given pair of references could be aliases at a given program point *p may-alias *q *p must-alias *q

Pointer Analysis: Points-To Analysis (may-point-to, must-point-to) and Alias Analysis (may-alias, must-alias)

Applications. Compiler optimizations: method de-virtualization, call graph construction, allocating objects on the stack via escape analysis. Verification & bug finding: data race detection, use in preliminary phases, use in verification itself

Points-to analysis: a simple example (we will usually drop variable-equality information). p = &x; q = &y; if (?) { q = p; } x = &a; y = &b; z = *q; with abstract states {p=&x}, {p=&x ∧ q=&y}, {p=&x ∧ q=&x}, {p=&x ∧ (q=&y ∨ q=&x)}, {p=&x ∧ (q=&y ∨ q=&x) ∧ x=&a}, {p=&x ∧ (q=&y ∨ q=&x) ∧ x=&a ∧ y=&b}, {p=&x ∧ (q=&y ∨ q=&x) ∧ x=&a ∧ y=&b ∧ (z=x ∨ z=y)}. How would you construct an abstract domain to represent these abstract states?

Points-to lattice. PT-factoids[x] = { x=&y | y ∈ Var } ∪ {false}; PT[x] = (2^PT-factoids[x], ⊆, ∪, ∩, false, PT-factoids[x]) (interpreted disjunctively). How should we combine them to get the abstract states in the example, e.g. {p=&x ∧ (q=&y ∨ q=&x) ∧ x=&a ∧ y=&b}?

Points-to lattice. PT-factoids[x] = { x=&y | y ∈ Var } ∪ {false}; PT[x] = (2^PT-factoids[x], ⊆, ∪, ∩, false, PT-factoids[x]) (interpreted disjunctively). How should we combine them to get the abstract states in the example, e.g. {p=&x ∧ (q=&y ∨ q=&x) ∧ x=&a ∧ y=&b}? D[x] = Disj(VE[x]) × Disj(PT[x]); for all program variables: D = D[x1] × … × D[xk]
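
A minimal sketch of one way to encode such abstract states (illustrative: each variable maps to the set of locations it may point to, interpreted disjunctively; joins are pointwise unions):

    def pt_join(s1, s2):
        """Join two points-to states pointwise (disjunction of the factoid sets)."""
        return {v: s1.get(v, set()) | s2.get(v, set()) for v in set(s1) | set(s2)}

    # After the branch in the example, q may point to y (branch skipped) or to x (q = p taken):
    then_state = {'p': {'x'}, 'q': {'x'}}
    else_state = {'p': {'x'}, 'q': {'y'}}
    after_if = pt_join(then_state, else_state)   # {'p': {'x'}, 'q': {'x', 'y'}}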

Points-to analysis. x = &a; y = &b; if (?) { p = &x; } else { p = &y; } *x = &c; *p = &c. How should we handle *x = &c? A strong update: {x=&a ∧ y=&b ∧ (p=&x ∨ p=&y) ∧ a=&y} becomes {x=&a ∧ y=&b ∧ (p=&x ∨ p=&y) ∧ a=&c}. How should we handle *p = &c? A weak update: the result is {(x=&a ∨ x=&c) ∧ (y=&b ∨ y=&c) ∧ (p=&x ∨ p=&y)}

Questions. When is it correct to use a strong update? A weak update? Is this points-to analysis precise? What does it mean to say p must-point-to x at program point u, p may-point-to x at program point u, p must-not-point-to x at program point u, p may-not-point-to x at program point u?

Points-to analysis, formally We must formally define what we want to compute before we can answer many such questions

PWhile syntax. A primitive statement is of the form skip, x := null, x := y, x := &y, x := *y, or *x := y (where x and y are variables in Var). Omitted (for now): dynamic memory allocation, pointer arithmetic, structures and fields, procedures

PWhile operational semantics. State : (Var → ℤ) × (Var → Var ∪ {null}). ⟦x = y⟧ s = ? ⟦x = *y⟧ s = ? ⟦*x = y⟧ s = ? ⟦x = null⟧ s = ? ⟦x = &y⟧ s = ?

PWhile operational semantics. State : (Var → ℤ) × (Var → Var ∪ {null}). ⟦x = y⟧ s = s[x ↦ s(y)]; ⟦x = *y⟧ s = s[x ↦ s(s(y))]; ⟦*x = y⟧ s = s[s(x) ↦ s(y)]; ⟦x = null⟧ s = s[x ↦ null]; ⟦x = &y⟧ s = s[x ↦ y]. Must say what happens if null is dereferenced

PWhile collecting semantics CS[u] = set of concrete states that can reach program point u (CFG node)

Ideal PT Analysis: formal definition Let u denote a node in the CFG Define IdealMustPT(u) to be { (p,x) | forall s in CS[u]. s(p) = x } Define IdealMayPT(u) to be { (p,x) | exists s in CS[u]. s(p) = x }

May-point-to analysis: formal requirement specification. For every vertex u in the CFG, compute a set R(u) such that R(u) ⊇ { (p,x) | ∃s ∈ CS[u]. s(p) = x }, i.e. compute R : V → 2^(Var×Var') such that R(u) ⊇ IdealMayPT(u) (where Var' = Var ∪ {null})

May-point-to analysis: formal requirement specification. Compute R : V → 2^(Var×Var') such that R(u) ⊇ IdealMayPT(u). An algorithm is said to be correct if the solution R it computes satisfies ∀u ∈ V. R(u) ⊇ IdealMayPT(u). An algorithm is said to be precise if the solution R it computes satisfies ∀u ∈ V. R(u) = IdealMayPT(u). An algorithm that computes a solution R1 is said to be more precise than one that computes a solution R2 if ∀u ∈ V. R1(u) ⊆ R2(u)

(May-point-to analysis) Algorithm A Is this algorithm correct? Is this algorithm precise? Let’s first completely and formally define the algorithm

Points-to graphs. x = &a; y = &b; if (?) { p = &x; } else { p = &y; } *x = &c; *p = &c; with abstract states {x=&a ∧ y=&b ∧ (p=&x ∨ p=&y)}, {x=&a ∧ y=&b ∧ (p=&x ∨ p=&y) ∧ a=&c}, {(x=&a ∨ x=&c) ∧ (y=&b ∨ y=&c) ∧ (p=&x ∨ p=&y)}. [Graphs: nodes for p, x, y, a, b, c with edges p→x, p→y, x→a, x→c, y→b, y→c; the successors of x form the points-to set of x]

Algorithm A: a formal definition, the “Data Flow Analysis” recipe. Define a join-semilattice of abstract values: PTGraph ::= (Var, Var×Var'), g1 ⊔ g2 = ?, ⊥ = ?, ⊤ = ? Define transformers for primitive statements: ⟦stmt⟧# : PTGraph → PTGraph

Algorithm A: a formal definition, the “Data Flow Analysis” recipe. Define a join-semilattice of abstract values: PTGraph ::= (Var, Var×Var'), g1 ⊔ g2 = (Var, E1 ∪ E2), ⊥ = (Var, {}), ⊤ = (Var, Var×Var'). Define transformers for primitive statements: ⟦stmt⟧# : PTGraph → PTGraph

Algorithm A: transformers. Abstract transformers for primitive statements, ⟦stmt⟧# : PTGraph → PTGraph. ⟦x := y⟧# (Var, E) = ? ⟦x := null⟧# (Var, E) = ? ⟦x := &y⟧# (Var, E) = ? ⟦x := *y⟧# (Var, E) = ? ⟦*x := &y⟧# (Var, E) = ?

Algorithm A: transformers. Abstract transformers for primitive statements, ⟦stmt⟧# : PTGraph → PTGraph.
⟦x := y⟧# (Var, E) = (Var, E[succ(x) := succ(y)])
⟦x := null⟧# (Var, E) = (Var, E[succ(x) := {null}])
⟦x := &y⟧# (Var, E) = (Var, E[succ(x) := {y}])
⟦x := *y⟧# (Var, E) = (Var, E[succ(x) := succ(succ(y))])
⟦*x := &y⟧# (Var, E) = ???
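
A minimal sketch of these transformers over a points-to graph represented as a map from each variable to its successor set (illustrative; for *x := &y it uses one sound choice, a strong update when x has a unique non-null target and a weak update otherwise, which is not necessarily the answer the slide intends):

    def assign_addr(g, x, y):
        """x := &y  --  succ(x) := {y}"""
        g2 = dict(g)
        g2[x] = {y}
        return g2

    def assign_null(g, x):
        """x := null  --  succ(x) := {None} (None stands for null)"""
        g2 = dict(g)
        g2[x] = {None}
        return g2

    def assign_copy(g, x, y):
        """x := y  --  succ(x) := succ(y)"""
        g2 = dict(g)
        g2[x] = set(g.get(y, set()))
        return g2

    def assign_load(g, x, y):
        """x := *y  --  succ(x) := succ(succ(y))"""
        succs = set()
        for t in g.get(y, set()):
            succs |= g.get(t, set())
        g2 = dict(g)
        g2[x] = succs
        return g2

    def store_addr(g, x, y):
        """*x := &y  --  update every possible target of x."""
        g2 = {v: set(s) for v, s in g.items()}
        targets = g2.get(x, set()) - {None}
        if len(targets) == 1:
            g2[next(iter(targets))] = {y}       # strong update: unique target
        else:
            for t in targets:
                g2.setdefault(t, set()).add(y)  # weak update: add the edge
        return g2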

Correctness & precision We have a complete & formal definition of the problem We have a complete & formal definition of a proposed solution How do we reason about the correctness & precision of the proposed solution?

Points-to analysis (abstract interpretation). α : 2^State → PTGraph with α(Y) = { (p,x) | ∃s ∈ Y. s(p) = x }; IdealMayPT(u) = α(CS(u)); soundness requires IdealMayPT(u) ⊆ MayPT(u). [Diagram: CS(u) in 2^State is mapped by α to IdealMayPT(u), which lies below MayPT(u) in PTGraph]

Concrete transformers. CS⟦stmt⟧ : State → State: ⟦x = y⟧ s = s[x ↦ s(y)]; ⟦x = *y⟧ s = s[x ↦ s(s(y))]; ⟦*x = y⟧ s = s[s(x) ↦ s(y)]; ⟦x = null⟧ s = s[x ↦ null]; ⟦x = &y⟧ s = s[x ↦ y]. CS*⟦stmt⟧ : 2^State → 2^State: CS*⟦st⟧ X = { CS⟦st⟧ s | s ∈ X }

Shape Analysis

Shape Analysis Automatically verify properties of programs manipulating dynamically allocated storage Identify all possible shapes (layout) of the heap

Sequential Stack. void push(int v) { Node *x = malloc(sizeof(Node)); x->d = v; x->n = Top; Top = x; } int pop() { if (Top == NULL) return EMPTY; Node *s = Top->n; int r = Top->d; Top = s; return r; } Want to verify: no null dereference, and the underlying list remains acyclic after each operation

Shape Analysis via 3-valued Logic. 1) Abstraction: 3-valued logical structures, canonical abstraction. 2) Transformers: via logical formulae, soundness by construction (embedding theorem [SRW02])

Concrete State. Represent a concrete state as a two-valued logical structure: individuals = heap-allocated objects, unary predicates = object properties, binary predicates = relations; a parametric vocabulary (storeless, no heap addresses). [Figure: the list Top → u1 →n u2 →n u3]

Concrete State. S = ⟨U, ι⟩ over a vocabulary P. U is the universe; ι is the interpretation, mapping each predicate p ∈ P to its truth value in S. Example: U = {u1, u2, u3}, P = {Top, n}; ι(n)(u1,u2) = 1, ι(n)(u1,u3) = 0, ι(n)(u2,u1) = 0, …; ι(Top)(u1) = 1, ι(Top)(u2) = 0, ι(Top)(u3) = 0

Formulae for Observing Properties. void push(int v) { Node *x = malloc(sizeof(Node)); x->d = v; x->n = Top; Top = x; } [Figure: the list Top → u1 → u2 → u3]. Top != null: ∃w: Top(w) evaluates to 1 (similarly ∃w: x(w) for x != null). No node precedes Top: ∀v1,v2: n(v1, v2) → ¬Top(v2) evaluates to 1. No cycles: ∀v1,v2: ¬(n(v1, v2) ∧ n*(v2, v1)) evaluates to 1

Concrete Interpretation Rules (statement: update formula).
x = NULL: x'(v) = 0
x = malloc(): x'(v) = IsNew(v)
x = y: x'(v) = y(v)
x = y->next: x'(v) = ∃w: y(w) ∧ n(w, v)
x->next = y: n'(v, w) = (¬x(v) ∧ n(v, w)) ∨ (x(v) ∧ y(w))

Example: s = Top->n, with update formula s'(v) = ∃v1: Top(v1) ∧ n(v1, v). [Tables: before the statement, Top holds only for u1, n holds for (u1,u2) and (u2,u3), and s holds nowhere; afterwards s holds exactly for u2]

Collecting Semantics. CSS[v] = { ⟨∅, ∅⟩ } if v = entry; otherwise the union of: { ⟦st(w)⟧(S) | S ∈ CSS[w] } for (w,v) ∈ E(G) with w ∈ Assignments(G); { S | S ∈ CSS[w] } for (w,v) ∈ E(G) with w ∈ Skip(G); { S | S ∈ CSS[w] and S ⊨ cond(w) } for (w,v) ∈ True-Branches(G); and { S | S ∈ CSS[w] and S ⊨ ¬cond(w) } for (w,v) ∈ False-Branches(G)

Collecting Semantics At every program point – a potentially infinite set of two-valued logical structures Representing (at least) all possible heaps that can arise at the program point Next step: find a bounded abstract representation

3-Valued Logic. 1 = true, 0 = false, 1/2 = unknown. A join semi-lattice: 0 ⊔ 1 = 1/2. [Diagram: in the information order 0 and 1 sit below 1/2; in the logical order 0 ≤ 1/2 ≤ 1]

3-Valued Logical Structures. A set of individuals (nodes) U; interpretation of the relation symbols in P: p0() ∈ {0, 1, 1/2}, p1(v) ∈ {0, 1, 1/2}, p2(u,v) ∈ {0, 1, 1/2}. A join semi-lattice: 0 ⊔ 1 = 1/2

Boolean Connectives [Kleene]
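
A minimal sketch of Kleene's connectives (illustrative encoding with 1/2 as 0.5; min, max, and 1 − a realize ∧, ∨, and ¬ in the logical order 0 ≤ 1/2 ≤ 1):

    HALF = 0.5   # the "unknown" truth value

    def k_and(a, b):
        return min(a, b)      # e.g. 1 and 1/2 = 1/2, 0 and 1/2 = 0

    def k_or(a, b):
        return max(a, b)      # e.g. 0 or 1/2 = 1/2, 1 or 1/2 = 1

    def k_not(a):
        return 1 - a          # not 1/2 = 1/2

    def info_join(a, b):
        """Join in the information order: 0 join 1 = 1/2."""
        return a if a == b else HALF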

Property Space. 3-Struct[P] = the set of 3-valued logical structures over a vocabulary (set of predicates) P. Abstract domain: (2^3-Struct[P], ⊆); ⊔ is ∪. We will see alternatives later (maybe)

Embedding Order. Given two structures S = ⟨U, ι⟩, S' = ⟨U', ι'⟩ and an onto function f : U → U' mapping individuals in U to individuals in U', we say that f embeds S in S' (denoted S ⊑f S') if for every predicate symbol p ∈ P of arity k and all u1, …, uk ∈ U: ι(p)(u1, ..., uk) ⊑ ι'(p)(f(u1), ..., f(uk)), and for all u' ∈ U': (|{ u | f(u) = u' }| > 1) ⊑ ι'(sm)(u'). We say that S can be embedded in S' (denoted S ⊑ S') if there exists a function f such that S ⊑f S'

Tight Embedding. S' = ⟨U', ι'⟩ is a tight embedding of S = ⟨U, ι⟩ with respect to a function f if S' does not lose unnecessary information: ι'(p)(u'1, ..., u'k) = ⊔{ ι(p)(u1, ..., uk) | f(u1) = u'1, ..., f(uk) = u'k }. One way to obtain a tight embedding is canonical abstraction

Canonical Abstraction [Sagiv, Reps, Wilhelm, TOPLAS02]. [Animation: the concrete list Top → u1 → u2 → u3 is abstracted step by step; u2 and u3, which share the same unary predicate values, are merged into a single summary node, and the n edges into and inside the summary node become indefinite (1/2)]

Canonical Abstraction () Merge all nodes with the same unary predicate values into a single summary node Join predicate values  ’(u’1 ,..., u’k) =  { (u1 ,..., uk) | f(u1)=u’1 ,..., f(uk)=u’k } Converts a state of arbitrary size into a 3-valued abstract state of bounded size (C) =  { (c) | c  C }

Information Loss. [Figure: different concrete lists are mapped by canonical abstraction to the same abstract structure, so information about their exact shape, e.g. acyclicity, is lost]

Instrumentation Predicates. Record additional derived information via predicates: rx(v) = ∃v1: x(v1) ∧ n*(v1, v); c(v) = ∃v1: n(v1, v) ∧ n*(v, v1). [Figure: with the instrumentation predicates, canonical abstraction distinguishes the acyclic list, where no node satisfies c, from the cyclic one]

Embedding Theorem: Conservatively Observing Properties. On the abstract structure [Top → summary node with rTop], the formula ∀v1,v2: ¬(n(v1, v2) ∧ n*(v2, v1)) (no cycles) evaluates to 1/2, while the derived formula ∀v: ¬c(v) evaluates to 1

Operational Semantics. void push(int v) { Node *x = malloc(sizeof(Node)); x->d = v; x->n = Top; Top = x; } int pop() { if (Top == NULL) return EMPTY; Node *s = Top->n; int r = Top->d; Top = s; return r; } For s = Top->n the update formula is s'(v) = ∃v1: Top(v1) ∧ n(v1, v). [Figure: the concrete list Top → u1 → u2 → u3 before and after the statement, with s set to u2]

Abstract Semantics. s = Top->n with s'(v) = ∃v1: Top(v1) ∧ n(v1, v). [Figure: evaluating the update formula directly on the abstract structure Top → summary(rTop) yields an indefinite result, marked '?']

Best Transformer (s = Top->n). [Diagram: concretize the abstract structure to the concrete lists it represents, apply the concrete semantics s'(v) = ∃v1: Top(v1) ∧ n(v1, v) to each, then re-abstract via canonical abstraction; compare the result with applying the abstract semantics directly]

Semantic Reduction. Improve the precision of the analysis by recovering properties of the program semantics. Given a Galois connection (C, α, γ, A), an operation op : A → A is a semantic reduction when for every l ∈ A: op(l) ⊑ l and γ(op(l)) = γ(l)

The Focus Operation. Focus : Formula → (2^3-Struct → 2^3-Struct); generalizes materialization. For every formula φ, Focus(φ)(X) yields structures in which φ evaluates to a definite value in all assignments; only maximal structures in terms of embedding are kept. Focus(φ) is a semantic reduction, but Focus(φ)(X) may be undefined for some X

Partial Concretization Based on Transformer (s = Top->n). [Diagram: Focus(∃u: Top(u) ∧ n(u, v)) partially concretizes the abstract structure so that the formula has a definite value in each resulting structure; the abstract semantics s'(v) = ∃v1: Top(v1) ∧ n(v1, v) is then applied, followed by canonical abstraction]

Partial Concretization. Locally refine the abstract domain per statement; soundness is immediate. Employed in other shape analysis algorithms [Distefano et al., TACAS’06; Evan et al., SAS’07, POPL’08]

The Coercion Principle Another Semantic Reduction Can be applied after Focus or after Update or both Increase precision by exploiting some structural properties possessed by all stores (Global invariants) Structural properties captured by constraints Apply a constraint solver

Apply Constraint Solver. [Figure: abstract structures before and after coercion; indefinite (1/2) values that violate the constraints are resolved, removing infeasible n edges and sharpening rx, ry and Top/rTop]

Sources of Constraints Properties of the operational semantics Domain specific knowledge Instrumentation predicates User supplied

Example Constraints: x(v1) ∧ x(v2) ⇒ eq(v1, v2); n(v, v1) ∧ n(v, v2) ⇒ eq(v1, v2); n(v1, v) ∧ n(v2, v) ∧ ¬eq(v1, v2) ⇒ is(v); n*(v3, v4) ⇒ t[n](v1, v2)

Abstract Transformers: Summary. Kleene evaluation yields a sound solution; Focus is a statement-specific partial concretization; Coerce applies global constraints

Abstract Semantics. SS[v] = { ⟨∅, ∅⟩ } if v = entry; otherwise the union of: { t_embed(coerce(⟦st(w)⟧3(focus_F(w)(SS[w])))) } for (w,v) ∈ E(G) with w ∈ Assignments(G); { S | S ∈ SS[w] } for (w,v) ∈ E(G) with w ∈ Skip(G); { t_embed(S) | S ∈ coerce(⟦st(w)⟧3(focus_F(w)(SS[w]))) and S ⊨3 cond(w) } for (w,v) ∈ True-Branches(G); and { t_embed(S) | S ∈ coerce(⟦st(w)⟧3(focus_F(w)(SS[w]))) and S ⊨3 ¬cond(w) } for (w,v) ∈ False-Branches(G)

Recap. Abstraction: canonical abstraction, recording derived information. Transformers: partial concretization (focus), constraint solver (coerce), sound information extraction

Stack Push. void push(int v) { Node *x = alloc(sizeof(Node)); x->d = v; x->n = Top; Top = x; } [Animation: starting from the empty heap and from a non-empty stack, the abstract structures are transformed statement by statement and checked against assertions such as ∃v: x(v) after the allocation, ∀v: ¬c(v) (no cycles), and ∀v1,v2: n(v1, v2) → ¬Top(v2) (no node precedes Top)]