Fall 2014-2015 Compiler Principles Lecture 10: Global Optimizations

Fall 2014-2015 Compiler Principles Lecture 10: Global Optimizations Roman Manevich Ben-Gurion University

Tentative syllabus: Front End (Scanning, Top-down Parsing (LL), Bottom-up Parsing (LR)), Intermediate Representation (Lowering), Optimizations (Local Optimizations, Dataflow Analysis, Loop Optimizations), Code Generation (Register Allocation, Instruction Selection); mid-term exam

Previously Formalisms for program analysis: basic blocks, control-flow graphs. Local program analyses and optimizations: available expressions → common sub-expression elimination + copy propagation; live variables → dead code elimination

Agenda Dataflow analysis introduction; constant propagation

Challenges of global analysis

Global analysis A global analysis is an analysis that works on a control-flow graph as a whole Substantially more powerful than a local analysis (Why?) Substantially more complicated than a local analysis

Local vs. global analysis Many of the optimizations from local analysis can still be applied globally Common sub-expression elimination Copy propagation Dead code elimination Certain optimizations are possible in global analysis that aren't possible locally: Code motion: moving code from one basic block into another to avoid computing values unnecessarily Today: Global constant propagation

Loop invariant code motion example. Before: while (t < 120) { if (t > 2) z := z + 1; z := z + x - y; ++t; } After hoisting: w := x - y; while (t < 120) { if (t > 2) z := z + 1; z := z + w; ++t; } The value of the expression x - y is not changed by the loop body, so it can be computed once before the loop.

Global analysis technical challenges Need to be able to handle multiple predecessors/successors for a basic block: join operator. Need to be able to handle multiple paths through the control-flow graph; may need to iterate multiple times to compute the final value, but the analysis still needs to terminate: chaotic iteration (fixed-point iteration). Need to be able to assign each basic block a reasonable default value before we've analyzed it: bottom value

Global liveness analysis

Global dead code elimination Local dead code elimination needed to know what variables were live on exit from a basic block This information can only be computed as part of a global analysis How do we extend our liveness analysis to handle a CFG?

Global liveness analysis: CFGs without loops

CFGs without loops b := c + d; e := c + d; Entry x := c + d; a := b + c; y := a + b; x := a + b; y := c + d; Exit

CFGs without loops ? {a, c, d} b := c + d; e := c + d; Entry Which variables may be live on some execution path? Entry {a, b, c, d} ? x := c + d; a := b + c; {b, c, d} {a, b, c, d} y := a + b; {a, b, c, d} {a, b, c, d} x := a + b; y := c + d; {a, b, c, d} {x, y} {x, y} Exit

CFGs without loops {a, c, d} b := c + d; e := c + d; Entry {a, b, c, d} {b, c, d} x := c + d; a := b + c; {a, b, c, d} y := a + b; {a, b, c, d} {a, b, c, d} x := a + b; y := c + d; {a, b, c, d} {x, y} {x, y} Exit

CFGs without loops b := c + d; Entry a := b + c; x := a + b; y := c + d; Exit

Major changes – part 1 In a local analysis, each statement has exactly one predecessor In a global analysis, each statement may have multiple predecessors A global analysis must have some means of combining information from all predecessors of a basic block

CFGs without loops b := c + d; e := c + d; {c, d} Entry {b, c, d} Need to combine currently-computed value with new value Entry {b, c, d} x := c + d; a := b + c; {b, c, d} {a, b, c, d} y := a + b; {a, b, c, d} {a, b, c, d} x := a + b; y := c + d; {a, b, c, d} {x, y} {x, y} Exit

CFGs without loops {c, d} b := c + d; e := c + d; Entry {a, b, c, d} x := c + d; a := b + c; {a, b, c, d} y := a + b; {a, b, c, d} {a, b, c, d} x := a + b; y := c + d; {a, b, c, d} {x, y} {x, y} Exit

CFGs without loops {a, c, d} b := c + d; e := c + d; Entry {a, b, c, d} {b, c, d} x := c + d; a := b + c; {a, b, c, d} y := a + b; {a, b, c, d} {a, b, c, d} x := a + b; y := c + d; {a, b, c, d} {x, y} {x, y} Exit

Global liveness analysis: handling loops

Major changes – part 2 In a local analysis, there is only one possible path through a basic block In a global analysis, there may be many paths through a CFG May need to recompute values multiple times as more information becomes available Need to be careful when doing this not to loop infinitely! (More on that later)

CFGs with loops Until now we considered loop-free CFGs, which have only finitely many possible paths With loops in the picture, this is no longer true Not all possible loops in a CFG can be realized in the actual program IfZ x Goto Top x := 1; Top: x := 0; x := 2;

CFGs with loops Until now we considered loop-free CFGs, which have only finitely many possible paths With loops in the picture, this is no longer true Not all possible loops in a CFG can be realized in the actual program Sound approximation: Assume that every possible path through the CFG corresponds to a valid execution Includes all realizable paths, but some additional paths as well May make our analysis less precise (but still sound) since we ignore conditions

CFGs with loops b := c + d; c := c + d; IfZ ... Entry a := b + c; d := a + c; c := a + b; a := a + b; d := b + c; Exit {a}

How do we use join here? b := c + d; c := c + d; IfZ ... Entry a := b + c; d := a + c; c := a + b; a := a + b; d := b + c; ? Exit {a}

Major changes – part 3 In a local analysis, there is always a well defined “first” statement to begin processing In a global analysis with loops, every basic block might depend on every other basic block To fix this, we need to assign initial values to all of the blocks in the CFG

CFGs with loops - initialization {} b := c + d; c := c + d; Entry a := b + c; d := a + c; {} {} c := a + b; a := a + b; d := b + c; {} Exit {a}

CFGs with loops - iteration {} b := c + d; c := c + d; Entry {} a := b + c; d := a + c; {} c := a + b; a := a + b; d := b + c; {} {a} Exit {a}

CFGs with loops - iteration {} b := c + d; c := c + d; Entry {} a := b + c; d := a + c; {} c := a + b; a := a + b; d := b + c; {a, b, c} {a} Exit {a}

CFGs with loops - iteration {} b := c + d; c := c + d; Entry {} a := b + c; d := a + c; {} c := a + b; {a, b, c} a := a + b; d := b + c; {a, b, c} {a} Exit {a}

CFGs with loops - iteration {} b := c + d; c := c + d; Entry {b, c} a := b + c; d := a + c; {} c := a + b; {a, b, c} a := a + b; d := b + c; {a, b, c} {a} Exit {a}

CFGs with loops - iteration {} b := c + d; c := c + d; Entry {b, c} {b, c} a := b + c; d := a + c; {} c := a + b; {a, b, c} a := a + b; d := b + c; {a, b, c} {a} Exit {a}

CFGs with loops - iteration {c, d} b := c + d; c := c + d; Entry {b, c} a := b + c; d := a + c; {b, c} {} c := a + b; {a, b, c} {a, b, c} a := a + b; d := b + c; {a, b, c} {a} Exit {a}

CFGs with loops - iteration {c, d} b := c + d; c := c + d; Entry {b, c} a := b + c; d := a + c; {b, c} {a, b} c := a + b; {a, b, c} {a, b, c} a := a + b; d := b + c; {a, b, c} {a} Exit {a}

CFGs with loops - iteration {c, d} b := c + d; c := c + d; Entry {b, c} a := b + c; d := a + c; {b, c} {a, b} c := a + b; {a, b, c} {a, b, c} a := a + b; d := b + c; {a, b, c} {a, c, d} Exit {a}

CFGs with loops - iteration {c, d} b := c + d; c := c + d; Entry {a, b, c} a := b + c; d := a + c; {b, c} {a, b} c := a + b; {a, b, c} {a, b, c} a := a + b; d := b + c; {a, b, c} {a, c, d} Exit {a}

CFGs with loops - iteration {a, c, d} b := c + d; c := c + d; Entry {a, b, c} a := b + c; d := a + c; {b, c} {a, b} c := a + b; {a, b, c} {a, b, c} a := a + b; d := b + c; {a, b, c} {a, c, d} Exit {a}

CFGs with loops – fixed point {a, c, d} b := c + d; c := c + d; Entry {a, b, c} a := b + c; d := a + c; {b, c} {a, b} c := a + b; {a, b, c} {a, b, c} a := a + b; d := b + c; {a, b, c} {a, c, d} Exit {a}

Formalizing global liveness analysis

Global liveness analysis Initially, set IN[s] = {} for each statement s. Set IN[exit] to the set of variables known to be live on exit (language-specific knowledge). Repeat until no changes occur: for each statement s of the form a := b + c, in any order you'd like: set OUT[s] to the union of IN[p] for each successor p of s; set IN[s] to (OUT[s] – {a}) ∪ {b, c}. Yet another fixed-point iteration!
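
Below is a minimal Python sketch of this backward fixed-point computation, assuming a toy CFG encoding (each statement is a pair of its defined variable and its used variables, and succ maps each statement to its successors); the names liveness, succ and live_on_exit are illustrative, not from the course materials:

```python
# Sketch: iterative global liveness analysis over a tiny CFG.
# stmts[s] = (defined_var, set_of_used_vars); succ[s] = successors of s.
def liveness(stmts, succ, exit_id, live_on_exit):
    IN = {s: set() for s in stmts}
    OUT = {s: set() for s in stmts}
    IN[exit_id] = set(live_on_exit)       # language-specific knowledge
    changed = True
    while changed:                        # fixed-point (chaotic) iteration
        changed = False
        for s, (defd, used) in stmts.items():
            if s == exit_id:
                continue
            out_s = set().union(*(IN[p] for p in succ[s])) if succ[s] else set()
            in_s = (out_s - {defd}) | used      # IN[s] = (OUT[s] - {a}) U {b, c}
            if out_s != OUT[s] or in_s != IN[s]:
                OUT[s], IN[s] = out_s, in_s
                changed = True
    return IN, OUT

# Example: 1: x := a + b    2: y := x + c    exit (y live on exit)
stmts = {1: ('x', {'a', 'b'}), 2: ('y', {'x', 'c'}), 'exit': (None, set())}
succ = {1: [2], 2: ['exit'], 'exit': []}
IN, OUT = liveness(stmts, succ, 'exit', {'y'})
print(IN[1])    # a, b and c are live before statement 1
```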

Global liveness analysis For a statement s of the form a := b + c with successors s2 and s3: IN[s] = (OUT[s] – {a}) ∪ {b, c}; OUT[s] = IN[s2] ∪ IN[s3]

Why does this work? To show correctness, we need to show that The algorithm eventually terminates, and When it terminates, it has a sound answer

Termination argument Once a variable is discovered to be live during some point of the analysis, it always stays live Only finitely many variables and finitely many places where a variable can become live

Soundness argument (sketch) Each individual rule, applied to some set, correctly updates liveness in that set. When computing the union of the sets of live variables, a variable is only live if it was live on some path leaving the statement. So, locally, every step in the algorithm is correct. Does local correctness imply global correctness? Yes, under some conditions (monotone dataflow)

Dataflow analysis theory

Theory to the rescue Building up all of the machinery to design this analysis was tricky The key ideas, however, are mostly independent of the analysis: We need to be able to compute functions describing the behavior of each statement We need to be able to merge several subcomputations together We need an initial value for all of the basic blocks There is a beautiful formalism that captures many of these properties: dataflow framework

Lattice theory

Join semilattices A join semilattice is a partial ordering defined on a set of elements. Any two elements have some join that is the smallest element larger than both elements. Intuitively, the join of two elements represents combining information from the two elements by an overapproximation. There is a unique bottom element ⊥, which is smaller than all other elements; the bottom element represents "no information yet" or "most precise value". There is also usually a top element ⊤, which is greater than all other elements ("most conservative value")

Join semilattices in program analysis The elements of a join semilattice represent (infinite) sets of states We compute one element per program location Join approximates the union of two representations of sets of states The bottom element usually stands for an empty set of states The top element stands for the set of all possible states

Join semilattices example

Join semilattice for liveness (Hasse diagram) {a, b, c} top element {a, b} {a, c} {b, c} {a} {b} {c} {} bottom element

What is the join of {b} and {c}? {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {}

What is the join of {b} and {c}? It is {b, c}: the smallest element greater than both {b} and {c}. {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {}

What is the join of {b} and {a,c}? {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {}

What is the join of {b} and {a,c}? It is {a, b, c}: the only element above both {b} and {a, c}. {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {}

What is the join of {a} and {a,b}? {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {}

What is the join of {a} and {a,b}? It is {a, b}: when one element is already below the other, the join is the larger of the two. {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {}

Join semilattices formally

Formal definitions A join semilattice is a pair (V, ⊔), where V is a set of elements and ⊔ is a join operator that is commutative: x ⊔ y = y ⊔ x, associative: (x ⊔ y) ⊔ z = x ⊔ (y ⊔ z), and idempotent: x ⊔ x = x. If x ⊔ y = z, we say that z is the join (or least upper bound) of x and y. Every join semilattice has a bottom element denoted ⊥ such that ⊥ ⊔ x = x for all x
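
As a sanity check (not part of the original slides), the liveness domain of variable sets under union can be brute-force tested against these laws on a small universe; a minimal sketch:

```python
from itertools import chain, combinations

# Sketch: brute-force check of the join-semilattice laws for
# (powerset of {a, b, c}, set union), the domain used for liveness.
universe = {'a', 'b', 'c'}
elements = [frozenset(s) for s in chain.from_iterable(
    combinations(universe, r) for r in range(len(universe) + 1))]
join = lambda x, y: x | y
bottom = frozenset()

for x in elements:
    assert join(x, x) == x                      # idempotent
    assert join(bottom, x) == x                 # bottom is neutral
    for y in elements:
        assert join(x, y) == join(y, x)         # commutative
        for z in elements:
            assert join(join(x, y), z) == join(x, join(y, z))   # associative

# The induced ordering (x below y iff x join y == y) is just set inclusion here.
assert all((join(x, y) == y) == (x <= y) for x in elements for y in elements)
print("all join-semilattice laws hold")
```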

Join semilattices and ordering Greater {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {} Lower

Join semilattices and ordering Least precise {a, b, c} {a, b} {a, c} {b, c} {a} {b} {c} {} Most precise

Join semilattices and orderings Every join semilattice (V, ⊔) induces an ordering relationship ⊑ over its elements: define x ⊑ y iff x ⊔ y = y. Need to prove reflexivity: x ⊑ x; antisymmetry: if x ⊑ y and y ⊑ x, then x = y; transitivity: if x ⊑ y and y ⊑ z, then x ⊑ z

More Join semilattice examples

An example join semilattice The set of natural numbers and the max function Idempotent max{a, a} = a Commutative max{a, b} = max{b, a} Associative max{a, max{b, c}} = max{max{a, b}, c} Bottom element is 0: max{0, a} = a What is the ordering over these elements?

A join semilattice for liveness Sets of live variables and the set union operation. Idempotent: x ∪ x = x. Commutative: x ∪ y = y ∪ x. Associative: (x ∪ y) ∪ z = x ∪ (y ∪ z). Bottom element: the empty set: Ø ∪ x = x. What is the ordering over these elements?

Dataflow framework

Semilattices and program analysis Semilattices naturally solve many of the problems we encounter in global analysis How do we combine information from multiple basic blocks? Take the join of all information from those blocks What value do we give to basic blocks we haven't seen yet? Use the bottom element How do we know that the algorithm always terminates? Actually, we still don't! More on that later

A general framework A global analysis is a tuple (D, V, ⊔, F, I): D is a direction (forward or backward); it determines the order in which to visit statements within a basic block, not the order in which to visit the basic blocks. V is a set of values (abstract domain). ⊔ is a join operator over those values. F is a set of transfer functions f : V → V, one per statement. I is an initial value
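
One possible way (an illustrative assumption, not the course's notation) to package such a tuple as plain Python values, instantiated for the liveness analysis seen earlier:

```python
# Sketch: the (D, V, join, F, I) tuple instantiated for global liveness.
def make_liveness_analysis(live_on_exit):
    def transfer(defined, used):
        # f_s(OUT) = (OUT - {defined}) U used, for a statement "defined := ...used..."
        return lambda out: (out - {defined}) | set(used)
    return {
        'D': 'backward',                        # direction
        'V': 'finite sets of variable names',   # abstract domain (described)
        'join': lambda x, y: x | y,             # join is set union
        'F': transfer,                          # builds f : V -> V per statement
        'I': set(live_on_exit),                 # initial value at the exit
    }

A = make_liveness_analysis({'y'})
f = A['F']('x', ['a', 'b'])     # transfer function of "x := a + b"
print(f({'x', 'c'}))            # prints the set {'a', 'b', 'c'} (in some order)
```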

Running global analyses Assume that (D, V, ⊔, F, I) is a forward analysis. Set OUT[s] = ⊥ for all statements s. Set OUT[entry] = I. Repeat until no values change: for each statement s with predecessors p1, p2, …, pn: set IN[s] = OUT[p1] ⊔ OUT[p2] ⊔ … ⊔ OUT[pn]; set OUT[s] = fs(IN[s]). The order of this iteration does not matter: chaotic iteration
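
A compact sketch of this loop for a forward analysis, assuming the program is given as predecessor lists and per-statement transfer functions; the helper names (run_forward, join_const) and the 'BOT'/'TOP' markers are illustrative assumptions:

```python
# Sketch: generic forward dataflow solver by chaotic iteration.
def run_forward(stmts, preds, transfer, join, bottom, init, entry):
    OUT = {s: bottom for s in stmts}
    OUT[entry] = init
    IN = {s: bottom for s in stmts}
    changed = True
    while changed:
        changed = False
        for s in stmts:                 # any order works (chaotic iteration)
            if s == entry:
                continue
            in_s = bottom
            for p in preds[s]:          # IN[s] = join of OUT over all predecessors
                in_s = join(in_s, OUT[p])
            IN[s] = in_s
            out_s = transfer[s](in_s)   # OUT[s] = f_s(IN[s])
            if out_s != OUT[s]:
                OUT[s] = out_s
                changed = True
    return IN, OUT

# Tiny usage example: constant propagation of a single variable x.
def join_const(a, b):                   # flat lattice: 'BOT', constants, 'TOP'
    if a == 'BOT': return b
    if b == 'BOT': return a
    return a if a == b else 'TOP'

stmts = ['entry', 's1', 's2']
preds = {'entry': [], 's1': ['entry'], 's2': ['s1']}
transfer = {'entry': lambda v: v,
            's1': lambda v: 6,          # x := 6
            's2': lambda v: v}          # y := x leaves x unchanged
IN, OUT = run_forward(stmts, preds, transfer, join_const, 'BOT', 'TOP', 'entry')
print(OUT['s2'])                        # 6
```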

For comparison. Generic forward analysis: set OUT[s] = ⊥ for all statements s; set OUT[entry] = I; repeat until no values change: for each statement s with predecessors p1, …, pn, set IN[s] = OUT[p1] ⊔ … ⊔ OUT[pn] and OUT[s] = fs(IN[s]). Global liveness (backward): set IN[s] = {} for all statements s; set OUT[exit] = the set of variables known to be live on exit; repeat until no values change: for each statement s of the form a := b + c, set OUT[s] = the union of IN[x] for each successor x of s and IN[s] = (OUT[s] \ {a}) ∪ {b, c}

The dataflow framework This form of analysis is called the dataflow framework Can be used to easily prove an analysis is sound With certain restrictions, can be used to prove that an analysis eventually terminates Again, more on that later

Dataflow analysis example: global Constant propagation

Global constant propagation Constant propagation is an optimization that replaces each variable that is known to be a constant value with that constant An elegant example of the dataflow framework

Global constant propagation entry x := 6; y := x; z := y; w := x; z := x; exit x := 4;

Global constant propagation entry x := 6; y := 6; z := y; w := 6; z := x; exit x := 4;

Constant propagation analysis In order to do constant propagation, we need to track what values might be assigned to a variable at each program point. Every variable will either never have a value assigned to it, have a single constant value assigned to it, have two or more constant values assigned to it, or have a known non-constant value. Our analysis will propagate this information throughout a CFG to identify locations where a value is constant

Properties of constant propagation For now, consider just some single variable x At each point in the program, we know one of three things about the value of x: We have never seen a value for x x is definitely a constant and has value k x is not a constant, since it's been assigned two values or assigned a value that we know isn't a constant Note that the first and last of these are not the same! The last one means that there may be a way for x to have multiple values The first one means that x never had a value at all

Defining a join operator The join of any two different constants is Not-a-Constant (If the variable might have two different values on entry to a statement, it cannot be a constant) The join of Not a Constant and any other value is Not-a-Constant (If on some path the value is known not to be a constant, then on entry to a statement its value can't possibly be a constant) The join of Undefined and any other value is that other value (If x has no value on some path and does have a value on some other path, we can just pretend it always had the assigned value)
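
These rules fully determine the join for a single variable's value; a small sketch, using UNDEF and NAC as stand-in names for Undefined and Not-a-Constant (my naming, not the slides'):

```python
# Sketch: join for one variable's constant-propagation value.
# A value is UNDEF (never assigned), NAC (not a constant), or an int constant.
UNDEF, NAC = 'UNDEF', 'NAC'

def join_value(a, b):
    if a == UNDEF:
        return b                   # Undefined joined with anything is the other value
    if b == UNDEF:
        return a
    if a == NAC or b == NAC:
        return NAC                 # Not-a-Constant absorbs every other value
    return a if a == b else NAC    # two different constants join to Not-a-Constant

print(join_value(6, 6))        # 6
print(join_value(6, 4))        # NAC
print(join_value(UNDEF, 6))    # 6
print(join_value(NAC, 6))      # NAC
```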

A semilattice for constant propagation One possible semilattice for this analysis is shown here (for each variable): Not-a-constant ... -2 -1 0 1 2 ... Undefined The lattice is infinitely wide

A semilattice for constant propagation One possible semilattice for this analysis is shown here (for each variable): Not-a-constant ... -2 -1 0 1 2 ... Undefined Note: The join of any two different constants is Not-a-Constant. The join of Not-a-Constant and any other value is Not-a-Constant. The join of Undefined and any other value is that other value

A semilattice for constant propagation One possible semilattice for this analysis is shown here (for each variable): ⊤ ... -2 -1 0 1 2 ... ⊥ Note: The join of any two different constants is ⊤. The join of ⊤ and any other value is ⊤. The join of ⊥ and any other value is that other value

Running global constant propagation example

Input CFG x := 6; entry y := x; z := y; w := x; z := x; x := 4; exit

Setting initial values x := 6; x=y=z=w= entry x=y=z=w= y := x; x=y=z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 1 x := 6; x=y=z=w= entry x=y=z=w= y := x; x=y=z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 1 x=y=z=w= x := 6; x=y=z=w= entry x=y=z=w= y := x; x=y=z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 1 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= y := x; x=y=z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 2 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= y := x; x=y=z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 2 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 2 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 3 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 3 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 3 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 4 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= w := x; x=y=z=w= y=6  y= gives what? z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 4 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=y=z=w= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 4 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 5 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= x=6  x= gives what? Y=  y= gives what? z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 5 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= x=w=6 y=z= z := x; x=y=z=w= x := 4; x=y=z=w= exit

Iteration 5 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= x=w=6 y=z= z := x; x=w=z=6 y= x := 4; x=y=z=w= exit

Iteration 6 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= x=w=6 y=z= z := x; x=w=z=6 y= x=w=z=6 y= x := 4; x=y=z=w= exit

Iteration 6 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= x=w=6 y=z= z := x; x=w=z=6 y= x=w=z=6 y= x := 4; x=4 w=z=6 y= exit

Iteration 7 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= x=w=6 y=z= z := x; x=w=z=6 y= x=w=z=6 y= x := 4; x=4 w=z=6 y= exit

Iteration 7 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; x=w=z=6 y= x=w=z=6 y= x := 4; x=4 w=z=6 y= exit

Iteration 7 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= x=w=z=6 y= x := 4; x=4 w=z=6 y= exit

Iteration 8 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= x=w=z=6 y= x := 4; x=4 w=z=6 y= exit

Iteration 8 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= w=6 x=y=z= x := 4; x=4 w=z=6 y= exit

Iteration 8 x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= w=6 x=y=z= x := 4; x=4 w=6 y=z= exit

Iterations 9–14 – no change x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= w=6 x=y=z= x := 4; x=4 w=6 y=z= exit

Analysis fixed point x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= w=6 x=y=z= x := 4; x=4 w=6 y=z= exit

Transformation opportunities x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := x; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := x; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= w=6 x=y=z= x := 4; x=4 w=6 y=z= exit

Transformed CFG x=y=z=w= x := 6; x=6 y=z=w= entry x=y=z=w= x=6 y=z=w= y := 6; x=y=6 z=w= x=6 y=z=w= z := y; x=6 y=z=w= x=6 y=z=w= w := 6; x=w=6 y=z= w=6 x=y=z= z := x; w=6 x=y=z= w=6 x=y=z= x := 4; x=4 w=6 y=z= exit

Dataflow for constant propagation Direction: forward. Semilattice: Vars → {⊥, 0, 1, -1, 2, -2, …, ⊤}. Join: point-wise for the variables, e.g. {x↦1, y↦1, z↦1} ⊔ {x↦1, y↦2, z↦⊤} = {x↦1, y↦⊤, z↦⊤}. Transfer functions: f_{x:=k}(V) = V[x↦k] (update V by mapping x to k); f_{x:=y}(V) = V[x↦V(y)] (update value of x to value of y); f_{x:=a+b}(V) = V[x↦⊤] (assign Not-a-Constant). Initial value: x is Not-a-Constant
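
A sketch of these transfer functions acting on environments that map each variable to a lattice value; here BOT and TOP stand for the slide's ⊥ (Undefined) and ⊤ (Not-a-Constant), and the function names are mine:

```python
# Sketch: constant-propagation transfer functions over an environment V,
# a dict from variable name to BOT (undefined), an int constant, or TOP.
BOT, TOP = 'BOT', 'TOP'

def assign_const(V, x, k):         # f_{x:=k}(V)   = V[x -> k]
    W = dict(V); W[x] = k; return W

def assign_copy(V, x, y):          # f_{x:=y}(V)   = V[x -> V(y)]
    W = dict(V); W[x] = V[y]; return W

def assign_binop(V, x, a, b):      # f_{x:=a+b}(V) = V[x -> TOP], per the slide
    W = dict(V); W[x] = TOP; return W

def join_env(V1, V2):              # point-wise join of two environments
    def j(a, b):
        if a == BOT: return b
        if b == BOT: return a
        return a if a == b else TOP
    return {v: j(V1[v], V2[v]) for v in V1}

V = {'x': BOT, 'y': BOT}
V = assign_const(V, 'x', 6)        # x := 6
V = assign_copy(V, 'y', 'x')       # y := x
print(V)                                              # {'x': 6, 'y': 6}
print(join_env({'x': 6, 'y': 6}, {'x': 4, 'y': 6}))   # {'x': 'TOP', 'y': 6}
```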

Next lecture: Loop Optimizations and Register Allocation