Jeffrey D. Ullman Stanford University

2
- A never-published Stanford technical report by Fran Allen in 1968.
- Fran won the Turing award in 2006.
- Flow graphs of intermediate code.
- Key things worth doing.

3
- Steps with ≤ 3 addresses (2 operands + result).
- Assignments with ≤ 1 arithmetic operator.
  Examples: x = 0; x = y; x = y+z, but not x = w+y*z.
- Indirection and pointer access.
  Examples: x = *p; p = &x; x = a[i]; x[i] = y.
- Branches with one comparison operator; gotos.
  Examples: if x == 0 goto s1; goto s2, but not if x == 0 || y == 1 goto s3.
- Call, return.
  Arguments are copied like assignments without an operator.
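The one-operator-per-statement rule can be sketched as a tiny lowering pass. This is a minimal sketch, not the slides' own algorithm; the tuple-based expression trees and the `lower`/temporary names are illustrative assumptions.

```python
# Minimal sketch of lowering an expression tree into three-address code,
# one arithmetic operator per statement. The tuple AST is illustrative.

def lower(expr, code, counter=None):
    """Return the name holding expr's value, appending TAC lines to code."""
    if counter is None:
        counter = [0]
    if isinstance(expr, str):            # a plain variable reference
        return expr
    op, left, right = expr               # e.g. ('+', 'w', ('*', 'y', 'z'))
    l = lower(left, code, counter)
    r = lower(right, code, counter)
    counter[0] += 1
    t = f"t{counter[0]}"                 # fresh temporary for this operator
    code.append(f"{t} = {l} {op} {r}")
    return t

# x = w + y*z has two operators, so it splits into two steps plus a copy:
code = []
code.append(f"x = {lower(('+', 'w', ('*', 'y', 'z')), code)}")
```

Running this yields `t1 = y * z; t2 = w + t1; x = t2`, matching the rule that `x = w+y*z` is not itself a legal step.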

4
- Here’s typical source code:
    for (i=0; i<n; i++) A[i] = 1.0;
- Intermediate code exposes optimizable constructs we cannot see at the source-code level:
        i = 0
    L1: if i>n goto L2
        t1 = 8*i
        A[t1] = 1.0
        i = i+1
        goto L1
    L2: ...
- Notice the hidden offset calculation t1 = 8*i.

5
- Make flow explicit by breaking code into basic blocks = sequences of steps with entry at the beginning, exit at the end, and no branches in the interior.
- Break intermediate code at leaders:
  1. The first statement.
  2. Statements targeted by a branch or goto.
  3. Statements following a branch.
- Simplification: make each intermediate-code statement its own basic block.
  This saves the trouble of figuring out data flow within blocks.
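The three leader rules can be sketched directly as a partitioning pass. This is an illustrative sketch: the `(label, kind, target)` statement encoding and the function name are assumptions of mine, not the slides' notation.

```python
# Sketch of the leader rules: partition a list of three-address statements
# into basic blocks. The (label, kind, target) encoding is illustrative.

def basic_blocks(stmts):
    """stmts: list of (label, kind, target); kind is 'goto', 'if', or 'plain',
    and target is the label a branch jumps to (None otherwise)."""
    labels = {lab: i for i, (lab, _, _) in enumerate(stmts) if lab}
    leaders = {0}                                  # rule 1: first statement
    for i, (_, kind, target) in enumerate(stmts):
        if kind in ('goto', 'if'):
            if target in labels:
                leaders.add(labels[target])        # rule 2: branch target
            if i + 1 < len(stmts):
                leaders.add(i + 1)                 # rule 3: after a branch
    cuts = sorted(leaders) + [len(stmts)]
    return [stmts[a:b] for a, b in zip(cuts, cuts[1:])]

# The loop from slide 4 splits into four blocks:
loop = [
    (None, 'plain', None),   # i = 0
    ('L1', 'if',    'L2'),   # if i>n goto L2
    (None, 'plain', None),   # t1 = 8*i
    (None, 'plain', None),   # A[t1] = 1.0
    (None, 'plain', None),   # i = i+1
    (None, 'goto',  'L1'),   # goto L1
    ('L2', 'plain', None),   # ...
]
```

The four blocks are the initialization, the loop test, the four-statement loop body (through the goto), and the exit.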

6
The code from slide 4 (right) and its flow graph of basic blocks (left):

    [ i = 0 ]                          i = 0
        |                          L1: if i>n goto L2
        v                              t1 = 8*i
    [ if i>n goto ... ] <---+          A[t1] = 1.0
        |                   |          i = i+1
        v                   |          goto L1
    [ t1 = 8*i              |      L2: ...
      A[t1] = 1.0           |
      i = i+1 ] ------------+

7
- x is an induction variable at a point within a loop if it takes on a linear sequence of values each time through that point.
- One induction variable can do the job of another.
- Common case: a loop index like i and a computed array offset like t1.
- We really don’t need i.
- Replace multiplication by addition (reduction in strength).

8
- In the basic block t1 = 8*i; A[t1] = 1.0; i = i+1, t1 = 8i at the point where t1 is used, at A[t1] = 1.0.
- Replace i = i+1 by t1 = t1+8.
- Initialize t1 = 0.
- Now t1 always has the value 8i where it is used, even though its calculation does not depend on i.
- We can drop i = 0 and t1 = 8*i altogether if we replace the test i <= n with an equivalent test on t1.

9
Before:                      After:
    i = 0                        t1 = 0
    if i>n goto ...              n1 = 8*n
    t1 = 8*i                     if t1>n1 goto ...
    A[t1] = 1.0                  A[t1] = 1.0
    i = i+1                      t1 = t1+8

- n1 = 8*n is needed to express the condition i <= n in terms of t1.
- The initialization block grows, but the loop block shrinks.
- Question: is that always good?
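To check that the before and after loops really agree, here is a direct simulation of both versions; the function and variable names are mine, chosen to mirror the slide.

```python
# Simulation of the two loops above: the original computes t1 = 8*i with a
# multiplication every iteration; the strength-reduced one steps t1 by 8.

def offsets_original(n):
    out, i = [], 0
    while not (i > n):          # if i>n goto ...
        t1 = 8 * i              # multiplication every time around the loop
        out.append(t1)          # stands in for A[t1] = 1.0
        i = i + 1
    return out

def offsets_reduced(n):
    out, t1 = [], 0
    n1 = 8 * n                  # the test i <= n, expressed in terms of t1
    while not (t1 > n1):        # if t1>n1 goto ...
        out.append(t1)
        t1 = t1 + 8             # addition replaces multiplication
    return out
```

Both versions touch exactly the offsets 0, 8, 16, ..., 8n, so the array writes are identical.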

10
- Sometimes a computation is done each time around a loop.
- Move it before the loop to save n-1 computations.
- Be careful: could n = 0? I.e., could the loop execute 0 times? Then the hoisted computation runs once where the original ran not at all.

11
Before:                      After:
    i = 0                        i = 0
    if i>n goto ...              t1 = y+z
    t1 = y+z                     if i>n goto ...
    x = x+t1                     x = x+t1
    i = i+1                      i = i+1

- Neither y nor z changes within the loop, so move the computation of t1 before the loop and compute it only once.
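As a sanity check of the motion above, here are the two loops executed directly (function names are mine): hoisting t1 = y+z leaves the final x unchanged, since neither y nor z is assigned inside the loop.

```python
# Loop-invariant code motion check: compute y+z inside vs. before the loop.

def sum_in_loop(x, y, z, n):
    i = 0
    while not (i > n):
        t1 = y + z          # recomputed every time around the loop
        x = x + t1
        i = i + 1
    return x

def sum_hoisted(x, y, z, n):
    t1 = y + z              # computed once, before the loop
    i = 0
    while not (i > n):
        x = x + t1
        i = i + 1
    return x
```

Note the slide's caveat in action: with a zero-iteration loop, the hoisted version still evaluates y+z once, which is only safe because that evaluation has no side effects here.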

12
- Sometimes a variable has a known constant value at a point.
- If so, replacing the variable by the constant simplifies and speeds up the code.
- Easy within a basic block; harder across blocks.

13
Original:                After strength reduction:    After constant propagation:
    i = 0                    t1 = 0                       t1 = 0
    n = 100                  n1 = 8*100                   if t1>800 goto ...
    if i>n goto ...          if t1>n1 goto ...            A[t1] = 1.0
    t1 = 8*i                 A[t1] = 1.0                  t1 = t1+8
    A[t1] = 1.0              t1 = t1+8
    i = i+1

- The only possible assignment reaching this use of n1 is n1 = 800, so make the substitution at compile time.
- Notice n1 is no longer needed.
- Not only did we eliminate n1, but comparing a variable with a constant can’t be worse than comparing two variables.
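A minimal local pass in this spirit, covering the easy within-a-block case: known constants substitute into later statements, and all-constant right-hand sides fold at compile time. The `(dst, a, op, b)` statement encoding and the function name are illustrative assumptions, not the slides' notation.

```python
import operator

# Minimal local constant propagation and folding, in the spirit of the
# n = 100 / n1 = 8*n example above. Statement encoding is illustrative.
OPS = {'+': operator.add, '*': operator.mul}

def propagate(block):
    """block: list of (dst, a, op, b); op and b are None for a copy dst = a."""
    consts, out = {}, []
    for dst, a, op, b in block:
        a = consts.get(a, a)                      # substitute known constants
        b = consts.get(b, b)
        if op in OPS and isinstance(a, int) and isinstance(b, int):
            a, op, b = OPS[op](a, b), None, None  # fold at "compile time"
        if op is None and isinstance(a, int):
            consts[dst] = a                       # dst now holds a constant
        else:
            consts.pop(dst, None)                 # dst's value is unknown
        out.append((dst, a, op, b))
    return out

# n = 100; n1 = 8*n  becomes  n = 100; n1 = 800
folded = propagate([('n', 100, None, None), ('n1', 8, '*', 'n')])
```

A later dead-code pass could then remove n (and, after rewriting the comparison, n1 itself), as the slide does.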

14
- Suppose block B has a computation of x+y.
- Suppose we are certain that when we reach this computation, we have:
  1. Computed x+y, and
  2. Not subsequently reassigned x or y.
- Then we can hold the value of x+y in a temporary variable and use it in B.

15
Before:                            After:
    [ a = x+y ]  [ b = x+y ]           [ t = x+y ]  [ t = x+y ]
         \          /                  [ a = t   ]  [ b = t   ]
        [ c = x+y ]                         \          /
                                            [ c = t ]

- t holds the value of x+y at every place x+y might be computed prior to c = x+y, so we know t has the right value at c = t.
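The within-a-block version of this idea can be sketched as a local common-subexpression pass, matching the a = x+y; b = x+y; c = x+y picture. The `(dst, a, op, b)` encoding and the temp-naming scheme are illustrative assumptions of mine.

```python
# Sketch of local common-subexpression elimination inside one basic block.

def local_cse(block):
    """block: list of (dst, a, op, b) statements; returns rewritten block."""
    avail, out = {}, []          # (a, op, b) -> temp currently holding it
    for n, (dst, a, op, b) in enumerate(block):
        key = (a, op, b)
        if key in avail:
            out.append((dst, avail[key], None, None))  # reuse the temp
        else:
            t = f"t{n}"                                # fresh temp holds a op b
            out.append((t, a, op, b))
            out.append((dst, t, None, None))
            avail[key] = t
        # assigning dst kills any recorded expression that reads dst
        avail = {k: v for k, v in avail.items() if dst not in (k[0], k[2])}
    return out
```

On the slide's block this produces t0 = x+y; a = t0; b = t0; c = t0, and the kill step enforces condition 2 of slide 14: a reassignment of x or y forces x+y to be recomputed.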

16
- Not known in 1968.
- Sometimes an expression has been computed along some paths to a block B, but not along others.
- Replicate the block so the expression is computed exactly when it is needed.

17
Before:                                After:
    [ t = x+y    (other                    [ t = x+y    (other
      a = t  ]    path)                      a = t  ]    path)
         \          |                           |           |
        [ t = x+y ]                        [ b = t ]   [ t = x+y
        [ b = t   ]                                      b = t  ]
             |                                  \           /
         [ c = t ]                               [ c = t ]

- We may already have computed x+y, so duplicate the block, depending on how it is reached.
- In the left copy, t already holds the value we need for b; in the right copy it doesn’t, so compute it.