Composing Dataflow Analyses and Transformations
Sorin Lerner (University of Washington), David Grove (IBM T.J. Watson), Craig Chambers (University of Washington)
Phase ordering problem

Optimizations can interact in mutually beneficial ways, and no single order exploits all of these interactions. Classic example: constant propagation and unreachable code elimination.

x := 11;
if (x == 11) {
  DoSomething();
} else {
  DoSomethingElse();
  x := x + 1;
}
y := x; // value of y?

Constant propagation decides the condition is true; unreachable code elimination then removes the else branch:

x := 11;
DoSomething();
y := x; // value of y?

Constant propagation again then yields:

x := 11;
DoSomething();
y := 11;
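The interaction can be sketched on a tiny straight-line form of the example. The list-of-statements IR and helper names below are illustrative, not the paper's representation: constant propagation folds the branch condition to true, unreachable-code elimination deletes the dead arm (and with it the kill of x = 11), and a second round of constant propagation finishes the job.

```python
# Minimal sketch of the phase-ordering interaction on a toy IR.
# All names and encodings here are illustrative.

def const_prop(env, cond):
    # Evaluate "var == lit" under known constants; None means "unknown".
    var, lit = cond
    return env.get(var) == lit if var in env else None

env = {"x": 11}                    # after x := 11
branch = ("x", 11)                 # if (x == 11)

# Round 1: constant propagation decides the branch.
taken = const_prop(env, branch)    # True

# Unreachable-code elimination drops the else arm, so the
# assignment x := x + 1 never kills the fact x = 11.
program = ["DoSomething()"] if taken else ["DoSomethingElse()", "x := x+1"]

# Round 2: constant propagation now rewrites y := x into y := 11.
y = env["x"] if taken else None
print(program, "y =", y)           # ['DoSomething()'] y = 11
```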

One known solution: iterate individual analyses until the results don't change.

x := 11;
do {
  if (x == 11) {
    DoSomething();
  } else {
    DoSomethingElse();
    x := x + 1;
  }
} while (...)
y := x; // value of y?

Drawbacks:
– Compiler is slow.
– In the presence of loops in the source program, might not yield the best possible results.
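The iterate-until-stable approach can be sketched as a driver that reruns each pass until a full round changes nothing. The passes here are toy string rewrites standing in for real optimizations; all names are illustrative.

```python
# Sketch of "iterate individual analyses until the results don't change".
# Each pass maps a program to a (possibly) improved program.

def const_prop(prog):
    # If x is known to be 11, rewrite uses of x (toy version).
    return [s.replace("y := x", "y := 11") if "x := 11" in prog[0] else s
            for s in prog]

def dead_code_elim(prog):
    # Drop statements already marked unreachable (toy version).
    return [s for s in prog if not s.startswith("unreachable:")]

def iterate_until_fixpoint(prog, passes):
    while True:
        before = list(prog)
        for p in passes:
            prog = p(prog)
        if prog == before:          # a whole round changed nothing
            return prog

result = iterate_until_fixpoint(
    ["x := 11", "unreachable: x := x + 1", "y := x"],
    [const_prop, dead_code_elim])
print(result)                       # ['x := 11', 'y := 11']
```

This converges, but each pass re-traverses the whole program every round, which is the "compiler is slow" drawback the slide names.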

Another known solution: a hand-written monolithic super-analysis.

Lose modularity:
– difficult to write, reuse, and extend such analyses

Examples:
– conditional constant propagation [Wegman and Zadeck 91]
– class analysis, splitting and inlining [Chambers and Ungar 90]
– const prop and pointer analysis [Pioli and Hind 99]

Composition Framework

Ideally we want to:
– Write analyses modularly
– Exploit mutually beneficial interactions
– Have a fast compiler

We present a framework that achieves this.

The key to modular composition

Traditionally, optimizations are defined in two parts:
1. A dataflow analysis.
2. Rules for transforming the program representation after the analysis is solved.

The key insight is to merge these two parts:
– Dataflow functions return either a dataflow value OR a replacement graph with which to replace the current statement.
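The merged interface can be sketched as a flow function whose result is a small sum type: either PROPAGATE with an outgoing dataflow value, or REPLACE with a replacement graph. The class names and the tuple statement encoding below are invented for illustration.

```python
# Sketch: flow functions return Propagate(value) or Replace(graph).
from dataclasses import dataclass

@dataclass
class Propagate:          # ordinary dataflow result
    value: dict

@dataclass
class Replace:            # replacement graph (here: a list of statements)
    graph: list

def const_prop_flow(stmt, env):
    """Constant-propagation flow function for 'dst := src + k' statements."""
    dst, src, k = stmt
    if src in env:                            # operand is a known constant:
        return Replace([(dst, env[src] + k)]) # rewrite to dst := const
    out = dict(env)
    out[dst] = None                           # unknown (top)
    return Propagate(out)

print(const_prop_flow(("y", "x", 2), {"x": 3}))  # Replace(graph=[('y', 5)])
print(const_prop_flow(("y", "x", 2), {}))        # Propagate(value={'y': None})
```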

Flow function returning a dataflow value y := 5

Flow function returning a dataflow value
y := 5
input [...]: PROPAGATE output [..., y → 5]

Flow function returning a replacement graph y := x+2

Flow function returning a replacement graph
y := x+2, input [x → 3]: REPLACE with replacement graph y := 5
Step 1: Initialize the replacement graph's input edges with the incoming dataflow information [x → 3].

Flow function returning a replacement graph
y := 5, input [x → 3]: PROPAGATE [x → 3, y → 5]
Step 2: Perform recursive dataflow analysis on the replacement graph.

Flow function returning a replacement graph
y := 5, input [x → 3], output [x → 3, y → 5]
Step 3: Propagate dataflow information from the replacement graph's output edges.

Flow function returning a replacement graph
y := x+2, input [x → 3], output [x → 3, y → 5]

Replacement graphs:
– used to compute outgoing dataflow information for the current statement.
– a convenient way of specifying what might otherwise be a complicated flow function.

Flow function returning a replacement graph
y := x+2, input [x → 3], output [x → 3, y → 5]

Soundness requirement:
– The replacement graph must have the same concrete semantics as the original statement, but only on concrete inputs that are consistent with the current dataflow facts.

Flow function returning a replacement graph
y := x+2, input [x → 3], output [x → 3, y → 5]
Let's assume we've reached a fixed point.

Flow function returning a replacement graph
y := x+2 is replaced by its replacement graph, y := 5.

Flow function returning a replacement graph
y := 5, input [x → 3], output [x → 3, y → 5]
Replacement graphs:
– used to transform the program once a fixed point has been reached.

Iterative analysis example
y := x+2, previous output [x → 3, y → 5]; the input changes from [x → 3] to [x → T].
Now, let's assume we haven't reached a fixed point.

Iterative analysis example
y := x+2, input [x → T]: PROPAGATE [x → T, y → T]
With x no longer known, the flow function propagates a dataflow value instead of returning a replacement graph.

Branch folding example
if (x == 11), with true (T) and false (F) successor edges.

Branch folding example
if (x == 11), input [x → 11]: REPLACE

Branch folding example
The replacement graph is an unconditional edge to the true successor, carrying [x → 11].

Branch folding example
At a fixed point, if (x == 11) is folded: control flows directly to the true successor.
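Branch folding fits the same interface: when the condition is decidable from the incoming facts, the flow function returns a replacement graph that is just a jump to the known successor. The encoding below (conditions as (var, literal) pairs, "goto" records, successor labels) is an illustrative assumption.

```python
# Sketch: branch-folding flow function for 'if (var == lit)'.

def branch_fold(cond, env, t_succ, f_succ):
    var, lit = cond
    if var in env:                       # condition is decidable:
        target = t_succ if env[var] == lit else f_succ
        return ("REPLACE", ("goto", target))   # fold to a direct jump
    # Unknown condition: propagate facts to both successors, refining
    # the fact along the true edge where the equality is known to hold.
    true_env = dict(env, **{var: lit})
    return ("PROPAGATE", {t_succ: true_env, f_succ: dict(env)})

print(branch_fold(("x", 11), {"x": 11}, "then_blk", "else_blk"))
# ('REPLACE', ('goto', 'then_blk'))
```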

Composing several analyses

x := new C;
do {
  b := x instanceof C;
  if (b) {
    x := x.foo();
  } else {
    x := new D;
  }
} while (...)

class A { A foo() { return new A; } };
class C extends A { A foo() { return self; } };
class D extends A { };

Analyses composed:
– Constant Propagation
– Class Analysis
– Inlining
– Unreachable code elimination
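Composing the analyses amounts to running them over a tuple of dataflow facts, one component per analysis. The sketch below shows just two of the four (constant propagation and class analysis) over a single toy statement; the statement encoding and function names are illustrative.

```python
# Sketch: a composed analysis runs component flow functions side by
# side over a tuple of facts (constant propagation, class analysis).

def const_prop_flow(stmt, consts):
    out = dict(consts)
    if stmt == ("new", "x", "C"):
        out["x"] = None              # a fresh object is not a constant (top)
    return out

def class_flow(stmt, classes):
    out = dict(classes)
    if stmt == ("new", "x", "C"):
        out["x"] = {"C"}             # x's class set is exactly {C}
    return out

def composed_flow(stmt, facts):
    consts, classes = facts
    return (const_prop_flow(stmt, consts), class_flow(stmt, classes))

print(composed_flow(("new", "x", "C"), ({}, {})))
# ({'x': None}, {'x': {'C'}})
```

This matches the tuples in the walkthrough below, where x := new C produces ([x → T], [x → {C}], T, T).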

CFG for the example:
x := new C → merge → b := x instanceof C → if (b)
if (b): T → x := x.foo(), F → x := new D
both branches → merge → while(…) → back to the loop-head merge

Analyzing x := new C: constant propagation PROPAGATEs [x → T], class analysis PROPAGATEs [x → {C}], and inlining and unreachable-code elimination PROPAGATE T.

The facts are combined into the tuple ([x → T], [x → {C}], T, T), which flows through the merge node.

The tuple ([x → T], [x → {C}], T, T) reaches b := x instanceof C.

Constant propagation on its own would PROPAGATE [x → T, b → T] past b := x instanceof C.

Class analysis knows x ∈ {C}, so at b := x instanceof C (input ([x → T], [x → {C}], T, T)) it returns REPLACE with the replacement graph b := true.

The replacement graph b := true is analyzed recursively with input ([x → T], [x → {C}], T, T): PROPAGATE ([x → T, b → true], [x → {C}, b → {Bool}], T, T).

The output ([x → T, b → true], [x → {C}, b → {Bool}], T, T) is propagated from the replacement graph's output edge to the original statement's output edge.

The tuple ([x → T, b → true], [x → {C}, b → {Bool}], T, T) reaches if (b).

The replacement graph is analyzed by the composed analysis: when one analysis chooses a replacement graph, the other analyses see it immediately. Analyses communicate implicitly through graph transformations.

Since the incoming tuple says b → true, the flow function for if (b) returns REPLACE: the conditional branch is folded into an unconditional edge to its true successor, carrying the incoming tuple σ; the false edge to x := new D becomes unreachable.

The replacement edge is analyzed recursively and its tuple σ propagated; the resulting facts flow through the merge nodes and around the loop until x := x.foo() is reached with input ([x → T, b → true], [x → {C}, b → {Bool}], T, T).

At x := x.foo() (input ([x → T, b → true], [x → {C}, b → {Bool}], T, T)), class analysis knows x ∈ {C}, so the virtual call is REPLACEd by the direct call x := C::foo(x).

Within that replacement graph, inlining REPLACEs x := C::foo(x) with the callee's body, x := x (class C extends A { A foo() { return self; } }).

The innermost replacement graph x := x is analyzed with input σ: PROPAGATE σ, since the statement changes no facts.

The resulting facts propagate back out through x := C::foo(x) to the output edge of the original statement x := x.foo().

The output tuple ([x → T, b → true], [x → {C}, b → {Bool}], T, T) is propagated past x := x.foo().

The tuple PROPAGATEs through while(…) and around the loop, through the merge nodes, until the analysis reaches a fixed point: the entry edge carries ([x → T], [x → {C}], T, T) and the loop edges carry ([x → T, b → true], [x → {C}, b → {Bool}], T, T).

At the fixed point, the recorded replacements are applied to the CFG: b := x instanceof C becomes b := true, x := x.foo() becomes x := x, the branch if (b) is folded to its true successor, and x := new D is removed as unreachable.

The transformed loop:

x := new C;
do {
  b := x instanceof C;
  if (b) { x := x.foo(); }
  else { x := new D; }
} while (...)

becomes

x := new C;
do {
  b := true;
  x := x;
} while (...)

x := new C;
do {
  b := x instanceof C;
  if (b) { x := x.foo(); }
  else { x := new D; }
} while (...)

becomes

x := new C;
do {
  b := true;
  x := x;
} while (...)

Analyses are defined modularly and separately; combining them achieves the results of a monolithic analysis. If the analyses were run separately, in any order and any number of times, no optimizations could be performed.
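The end-to-end flow can be sketched as a driver that records each REPLACE decision during the analysis and applies the recorded replacements in a final rewriting pass. The string-based IR and the toy composed flow function below are illustrative assumptions; a straight-line program is used so a single pass reaches the fixed point.

```python
# Sketch: analyze, recording replacement graphs, then rewrite the
# program in one final pass (all structures are illustrative).

def analyze(stmt, env):
    # Toy composed flow function: instanceof folds to true when the
    # class fact proves it; otherwise facts just flow through.
    if stmt == "b := x instanceof C" and env.get("x_class") == {"C"}:
        return env | {"b": True}, ["b := true"]   # (facts, replacement)
    if stmt == "x := new C":
        return env | {"x_class": {"C"}}, None
    return env, None

def run(program):
    replacements = {}
    env = {}
    for i, stmt in enumerate(program):     # straight-line: one pass suffices
        env, repl = analyze(stmt, env)
        if repl is not None:
            replacements[i] = repl
    out = []
    for i, stmt in enumerate(program):     # apply recorded replacements
        out.extend(replacements.get(i, [stmt]))
    return out

print(run(["x := new C", "b := x instanceof C"]))
# ['x := new C', 'b := true']
```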

Analysis followed by transformations

Integrating analysis and transformations

Composing analyses and transformations