Interval Partitioning of a Flow Graph


Interval Partitioning of a Flow Graph

Input: a flow graph G = (N, E, n0).
Output: a partition of G into a set of disjoint intervals.

Method: for each header n, compute I(n):
    I(n) := {n}
    while there exists a node m ≠ n0 all of whose predecessors are in I(n) do
        I(n) := I(n) ∪ {m}

The headers of the intervals are chosen as follows:
    construct I(n0) and "select" all nodes in that interval;
    while there is a node m not yet "selected" but with a selected predecessor do
        construct I(m) and "select" all nodes in that interval.

Note: the order of selection does not affect the final partition.
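The method above can be coded directly. A minimal, runnable sketch in Python; the successor-map representation and the function name are my own, not from the slides:

```python
def intervals(succ, n0):
    """Partition a flow graph into disjoint intervals.

    succ maps each node to its list of successors; n0 is the initial node.
    Returns a dict mapping each interval header to its set of nodes.
    """
    # Build predecessor lists from the successor map.
    pred = {n: [] for n in succ}
    for n, ss in succ.items():
        for s in ss:
            pred[s].append(n)

    partition = {}
    selected = set()
    headers = [n0]                      # worklist of candidate headers
    while headers:
        h = headers.pop(0)
        if h in selected:
            continue
        interval = {h}
        selected.add(h)
        changed = True
        while changed:                  # grow I(h) as in the while loop above
            changed = False
            for m in succ:
                if (m not in selected and m != n0 and pred[m]
                        and all(p in interval for p in pred[m])):
                    interval.add(m)
                    selected.add(m)
                    changed = True
        partition[h] = interval
        # any unselected successor of a selected node is a candidate header
        for n in interval:
            for s in succ[n]:
                if s not in selected:
                    headers.append(s)
    return partition
```

On a five-node graph with an assumed shape 1 → 2 → 3 → 4 → 5 plus a back edge 5 → 2, this yields the two intervals I(1) = {1} and I(2) = {2, 3, 4, 5}.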

Interval Partitioning Example

Initially: I(1) = {1}; node 1 is selected.
Pick node 2: I(2) := {2}, then execute the while loop:
    I(2) := {2} ∪ {3} ∪ {4} ∪ {5}; nodes 2, 3, 4, 5 are selected.

Note: after node 1, the only node we can pick is 2, because it is the only one with a selected predecessor.
Note: node 2 dominates nodes 3, 4, 5, and is the only entry of the interval (its header); nodes 2, 3, 4, 5 form a natural loop.

Interval Graphs

From the intervals of a flow graph G one may construct a new flow graph I(G) by the following rules:
- Nodes: the nodes of I(G) correspond to the intervals in the interval partition of G.
- Initial node: the initial node of I(G) is the interval that contains the initial node of G.
- Edges: there is an edge from interval I to interval J if and only if there is an edge from some node in I to the header of J.

Note: there cannot be an edge entering a node n of J other than the header, because then n could not have been added to J by the interval-partitioning algorithm.
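Given a partition, the interval graph can be built mechanically. A sketch, assuming intervals are given as a dict mapping headers to node sets; I drop intra-interval edges (including back edges to an interval's own header), since those are represented by the interval node itself rather than by an edge of I(G):

```python
def interval_graph(succ, partition):
    """Collapse each interval to one node of I(G).

    succ:      node -> list of successors in G
    partition: header -> set of nodes in that header's interval
    Returns:   header -> set of successor headers in I(G)
    """
    # Map every node of G to the header of the interval containing it.
    owner = {n: h for h, nodes in partition.items() for n in nodes}
    ig = {h: set() for h in partition}
    for n, ss in succ.items():
        for s in ss:
            # Edge I -> J iff some node of I reaches J's header (I != J);
            # every edge leaving an interval targets a header (see note above).
            if owner[n] != owner[s]:
                ig[owner[n]].add(owner[s])
    return ig
```

For the two-interval example above this produces the single edge {1} → {2, 3, 4, 5}.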

Interval Graphs (Cont.)

Limit flow graph of G: applying interval partitioning and interval-graph construction in alternation yields a sequence of graphs G, I(G), I(I(G)), ..., I^n(G), where all the nodes of I^n(G) lie in one interval. I^n(G) is the limit.

[Figure: a ten-node flow graph collapsing step by step — intervals {1, 2}, {4, 5, 6}, {7, 8, 9, 10} first, then {4, ..., 10}, {3, ..., 10}, and finally the single node {1, ..., 10}.]

Property: a flow graph is reducible if and only if its limit flow graph is a single node (this is the historical definition of reducibility).

T1-T2 Analysis

Motivation: a convenient way to achieve the same effect as interval analysis.

Definition: repeatedly apply two simple transformations to the flow graph:
- T1: if n is a node with a self-loop, i.e., an edge n → n, delete that edge.
- T2: if there is a node n ≠ n0 that has a unique predecessor m, then m may consume n by deleting n and making all successors of n (possibly including m) successors of m.

Facts:
- Applying (T1 | T2)^k to G until no further application is possible yields a unique flow graph.
- That flow graph, (T1 | T2)^k(G), is the limit flow graph of G.
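Both transformations are easy to code directly. A hedged sketch (the successor-map representation and names are my own); by the facts above, a flow graph is reducible exactly when this reduces it to the single node n0:

```python
def t1_t2_reduce(succ, n0):
    """Apply T1 (delete self-loops) and T2 (a unique predecessor m
    consumes its node n) until neither applies; return the result."""
    succ = {n: set(ss) for n, ss in succ.items()}
    changed = True
    while changed:
        changed = False
        # T1: delete any edge n -> n.
        for n in succ:
            if n in succ[n]:
                succ[n].discard(n)
                changed = True
        # T2: if n != n0 has a unique predecessor m, m consumes n.
        preds = {n: {m for m in succ if n in succ[m]} for n in succ}
        for n in list(succ):
            if n != n0 and len(preds[n]) == 1:
                (m,) = preds[n]
                succ[m].discard(n)
                succ[m] |= succ[n]       # m inherits n's successors
                del succ[n]
                changed = True
                break                    # predecessors must be recomputed
    return succ
```

The classic irreducible triangle (n0 branching to two nodes that form a two-node cycle) is left untouched, since neither T1 nor T2 ever applies to it.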

Example of T1-T2 Analysis

[Figure: a four-node flow graph with nodes a, b, c, d, reduced step by step by T1 and T2 through intermediate nodes such as ab and cd down to the single node abcd.]

Example of T1-T2 Analysis (Cont.)

Region: a set of nodes N with a header that dominates all the other nodes in the set.

[Figure: the same a-b-c-d reduction, annotated with the transformations applied: T2, T1, T2, T2.]

Property: while reducing a flow graph with T1 and T2, the following hold at all times:
- A node of the current graph represents a region of G.
- An edge from a to b represents a set of edges of G, each from some node in the region of a to the header of the region of b.
- Each node and edge of G is represented by exactly one node or edge of the current graph.

‘Optimizations’ of Basic Blocks

Equivalent transformations: two basic blocks are equivalent if they compute the same set of expressions.
- Expressions here means the values of the live variables at the exit of the block.

Two important classes of local transformations:
- Structure-preserving transformations:
    common subexpression elimination,
    dead-code elimination,
    renaming of temporary variables,
    interchange of two independent adjacent statements.
- Algebraic transformations (countless in number):
    simplify expressions,
    replace expensive operations with cheaper ones.
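As an illustration of the algebraic class, a toy sketch (not from the slides) of a few rewrites over three-address quadruples of the form (op, arg1, arg2, result):

```python
def algebraic_simplify(instr):
    """Apply a few algebraic rewrites to one quadruple, or return it unchanged."""
    op, a, b, res = instr
    if op == '*' and b == 2:
        return ('+', a, a, res)       # strength reduction: a * 2 -> a + a
    if op == '*' and b == 1:
        return ('=', a, None, res)    # identity: a * 1 -> a
    if op == '+' and b == 0:
        return ('=', a, None, res)    # identity: a + 0 -> a
    return instr
```

A real optimizer would also normalize operand order for commutative operators so that, e.g., 2 * a is caught by the same rules.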

The DAG Representation of Basic Blocks

Directed acyclic graphs (DAGs) give a picture of how the value computed by each statement in a basic block is used in the subsequent statements of the block.

Definition: a DAG for a basic block is a directed acyclic graph with the following labels on its nodes:
- Leaves are labeled with unique identifiers, either variable names or constants; from the operator applied to a name we determine whether its l-value or r-value is meant. Leaves represent the initial values of names, written with subscript 0 (e.g., i0).
- Interior nodes are labeled by an operator symbol; an interior node represents a computed value.
- Nodes are also optionally given a sequence of identifiers as extra labels; the identifiers in the sequence all have that node's value.

Example of DAG Representation

Three-address code:
    t4 := b[t3]
    t5 := t2 * t4
    t6 := prod + t5
    prod := t6
    t7 := i + 1
    i := t7
    if i <= 20 goto (1)

[Figure: the corresponding DAG — a + node labeled t6, prod; a * node t5; [] nodes t4 and t2 indexing b and a; a * node labeled t1, t3; a + node labeled t7, i; a <= node comparing against 20; leaves a, b, 4, i0, 20, 1.]

Utility: constructing a DAG from three-address statements is a good way of determining:
- common subexpressions (expressions computed more than once),
- which names are used inside the block but evaluated outside it,
- which statements of the block could have their computed value used outside the block.

Constructing a DAG

Input: a basic block whose statements have one of the forms
    (i) x := y op z    (ii) x := op y    (iii) x := y

Output: a DAG for the basic block, containing:
- a label for each node: for leaves an identifier (constants permitted), for interior nodes an operator symbol;
- for each node, a (possibly empty) list of attached identifiers (constants not permitted).

Method: initially there are no nodes, and node() is undefined for all names.
1. If node(y) is undefined, create a leaf labeled y and let node(y) be this node. In case (i), if node(z) is undefined, create a leaf labeled z and let that leaf be node(z).
2. In case (i), determine whether there is a node labeled op whose left child is node(y) and whose right child is node(z). If not, create such a node. Let n be that node. Cases (ii) and (iii) are handled similarly.
3. Delete x from the list of identifiers attached to node(x), if any. Append x to the list of identifiers for n and set node(x) to n.
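A runnable sketch of this method for case (i) statements x := y op z (the data layout is illustrative, not from the slides). Reusing an existing node with the same operator and children is exactly how common subexpressions are detected:

```python
def build_dag(block):
    """Build a DAG for a basic block of quadruples (x, op, y, z) for x := y op z.

    Returns (nodes, node_of): nodes is a list of (label, left, right, ids)
    entries indexed by node number; node_of maps each name to its current node.
    """
    nodes = []            # (label, left_index, right_index, attached ids)
    node_of = {}          # current DAG node for each name or constant

    def leaf(name):
        # Step 1: create a leaf for an operand whose node is undefined.
        if name not in node_of:
            nodes.append((name, None, None, []))
            node_of[name] = len(nodes) - 1
        return node_of[name]

    for x, op, y, z in block:
        l, r = leaf(y), leaf(z)
        # Step 2: reuse a node labeled op with children node(y), node(z).
        for i, (lab, li, ri, _) in enumerate(nodes):
            if lab == op and li == l and ri == r:
                n = i
                break
        else:
            nodes.append((op, l, r, []))
            n = len(nodes) - 1
        # Step 3: x stops naming its old node and attaches to n.
        if x in node_of:
            old_ids = nodes[node_of[x]][3]
            if x in old_ids:
                old_ids.remove(x)
        nodes[n][3].append(x)
        node_of[x] = n
    return nodes, node_of
```

Running it on a block that computes a + b twice attaches both temporaries to a single + node, exposing the common subexpression.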