Advanced Compiler Design Early Optimizations

Introduction
- Constant expression evaluation (constant folding): dataflow-independent
- Scalar replacement of aggregates: dataflow-independent
- Algebraic simplifications and reassociations: dataflow-independent
- Value numbering: dataflow-dependent
- Copy propagation: dataflow-dependent
- Sparse conditional constant propagation: dataflow-dependent

Introduction
These optimizations are usually performed early in the optimization process. Many of them belong to the group of the most important techniques:
- Constant folding
- Algebraic simplifications and reassociations
- Global value numbering
- Sparse conditional constant propagation

Introduction
Two placements of the optimizer in the compiler structure:
- Low-level model: lexical analyzer → parser → semantic analyzer → translator → optimizer → final assembly; the optimizer operates on low-level intermediate code.
- Mixed-level model: lexical analyzer → parser → semantic analyzer → intermediate-code generator → optimizer → code generator → postpass optimizer; the optimizer operates on medium-level intermediate code.

Introduction
A) Optimizations performed on source code or on high-level intermediate code
B, C) Optimizations performed on medium- or low-level intermediate code, depending on the model (mixed or low-level)
D) Optimizations always performed on low-level, machine-dependent code
E) Optimizations performed at link time; they operate on relocatable object code

Introduction
Examples of optimizations at each level:
A) Scalar replacement of array references; data-cache optimizations
B) Procedure integration; tail-call optimization; scalar replacement of aggregates; sparse conditional constant propagation; interprocedural constant propagation; procedure specialization and cloning; sparse conditional constant propagation (applied a second time)
C1) Global value numbering; local and global copy propagation; sparse conditional constant propagation
Constant folding and algebraic simplification are invoked wherever needed, across the remaining phases (C2-C4, D, E).

Constant Expression Evaluation
Compile-time evaluation of expressions whose operands are known to be constant. Three phases:
- Determine that the operands are constant
- Calculate the value of the expression
- Replace the expression with the calculated value

Constant Expression Evaluation
- Best structured as a subroutine that can be invoked from anywhere in the optimizer
- The operations and data types used in the evaluation must match those of the target architecture
- Its effectiveness can be increased by combining it with constant propagation

Constant Expression Evaluation
Applicability:
- Integers: almost always applicable; the exceptions are division by zero and overflow
- Addressing arithmetic: no problems, since overflow does not matter in address computations
- Floating-point values: many exceptions; the compiler's arithmetic must match the target processor's
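
A minimal sketch of the integer case in C (the Op type and fold_binop are illustrative names, not from the book): the evaluation is done in a wider type and range-checked against 32-bit target arithmetic, so folding is refused exactly in the exceptional cases listed above.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative IR operators; a real optimizer has its own. */
    typedef enum { OP_ADD, OP_SUB, OP_MUL, OP_DIV } Op;

    /* Try to fold "left op right" at compile time for a 32-bit target.
       Returns true and stores the folded value in *result, or false
       when folding is unsafe (division by zero, overflow), leaving
       the operation to run time. */
    static bool fold_binop(Op op, int32_t left, int32_t right, int32_t *result) {
        int64_t wide;
        switch (op) {
        case OP_ADD: wide = (int64_t)left + right; break;
        case OP_SUB: wide = (int64_t)left - right; break;
        case OP_MUL: wide = (int64_t)left * right; break;
        case OP_DIV:
            if (right == 0)
                return false;              /* division by zero: do not fold */
            wide = (int64_t)left / right;  /* 64-bit: INT32_MIN / -1 is safe here */
            break;
        default:
            return false;
        }
        if (wide < INT32_MIN || wide > INT32_MAX)
            return false;                  /* would overflow on the target */
        *result = (int32_t)wide;
        return true;
    }

Doing the arithmetic in 64 bits keeps the evaluator itself free of overflow; a production folder would dispatch on the operand types the target actually uses.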

Scalar replacement of aggregates
Makes other optimizations applicable to the components of aggregates. Principle:
- Determine which aggregate components have simple scalar values and are not aliased
- Assign them to temporaries whose values track the corresponding aggregate components

Scalar replacement of aggregates
Example:

    typedef enum {APPLE, BANANA, ORANGE} VARIETY;
    typedef enum {LONG, ROUND} SHAPE;
    typedef struct fruit {
        VARIETY variety;
        SHAPE shape;
    } FRUIT;

    int main(void) {
        FRUIT snack;
        snack.variety = APPLE;
        snack.shape = ROUND;
        /* … */
    }

After scalar replacement:

    int main(void) {
        VARIETY t1 = APPLE;
        SHAPE t2 = ROUND;
        /* … */
    }

Scalar replacement of aggregates
Should be performed very early in the optimization process, so that the resulting scalars are exposed to all later optimizations.

Algebraic simplifications and reassociations
- Algebraic simplifications: use algebraic properties of the operators to simplify expressions
- Algebraic reassociations: use specific algebraic properties (associativity, commutativity, and distributivity) to divide an expression into parts that are constant, loop-invariant, and variable

Algebraic simplifications and reassociations
- Best structured as a subroutine that can be invoked from anywhere in the optimizer
- Used from many different phases throughout the optimization process

Algebraic simplifications and reassociations
Examples:
i+0 = 0+i = i-0 = i
0-i = -i
i*1 = 1*i = i/1 = i
-(-i) = i
i+(-j) = i-j
b ∨ true = true ∨ b = true
b ∨ false = false ∨ b = b
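
A sketch of how such identities can be applied to an expression tree in C (the Expr layout is invented for illustration; the 0*i = 0 rule is an extra standard identity, safe here because the operands are plain variables):

    /* Tiny illustrative expression IR. */
    typedef enum { E_CONST, E_VAR, E_ADD, E_MUL } Kind;
    typedef struct Expr {
        Kind kind;
        long value;             /* used when kind == E_CONST */
        struct Expr *l, *r;     /* used when kind == E_ADD or E_MUL */
    } Expr;

    static int is_const(const Expr *e, long v) {
        return e->kind == E_CONST && e->value == v;
    }

    /* Apply identity rules to one node; a real simplifier runs rules
       like these bottom-up over the whole tree until none applies. */
    static Expr *simplify(Expr *e) {
        switch (e->kind) {
        case E_ADD:
            if (is_const(e->l, 0)) return e->r;   /* 0+i = i */
            if (is_const(e->r, 0)) return e->l;   /* i+0 = i */
            break;
        case E_MUL:
            if (is_const(e->l, 1)) return e->r;   /* 1*i = i */
            if (is_const(e->r, 1)) return e->l;   /* i*1 = i */
            if (is_const(e->l, 0)) return e->l;   /* 0*i = 0 */
            if (is_const(e->r, 0)) return e->r;   /* i*0 = 0 */
            break;
        default:
            break;
        }
        return e;
    }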

Algebraic simplifications and reassociations
Examples:
i^2 = i*i, 2*i = i+i
i*5: t := i shl 2; t := t+i (shl = shift left; i*5 = i*4 + i)
i*7: t := i shl 3; t := t-i (i*7 = i*8 - i)
(i-j) + (i-j) + (i-j) + (i-j) = 4*i - 4*j
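
The two shift rewrites hand-applied in C, as a quick check of the identities (unsigned arithmetic is used so the shifts are well defined in C; a compiler applies the same identities directly on machine integers):

    #include <stdio.h>

    static unsigned times5(unsigned i) { return (i << 2) + i; }  /* i*4 + i */
    static unsigned times7(unsigned i) { return (i << 3) - i; }  /* i*8 - i */

    int main(void) {
        unsigned i = 123456789u;
        /* both differences print as 0 */
        printf("%u %u\n", times5(i) - i * 5u, times7(i) - i * 7u);
        return 0;
    }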

Algebraic simplifications and reassociations
Example: given j = 0, k = 1*j, and i = i+k*1, simplification yields j = 0, k = 0, and i = i, so the assignment to i can be removed entirely.

Algebraic simplifications and reassociations of addressing expressions
Overflow makes no difference in address computations, so these transformations are always safe here. The general strategy is canonicalization. Example:

    var a: array[lo1..hi1, lo2..hi2] of eltype
    var i, j: integer

    for j := lo2 to hi2 do
    begin
        a[i,j] := b + a[i,j]
    end

The address of a[i,j] is base_a + ((i-lo1)*(hi2-lo2+1) + j - lo2)*w, where w = sizeof(eltype).

Algebraic reassociations of addressing expressions
Reassociating the address of a[i,j] separates it into three parts:
- -(lo1*(hi2-lo2+1) + lo2)*w + base_a: compile-time constant
- (hi2-lo2+1)*i*w: loop-invariant
- j*w: variable
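
A sketch of the loop after this reassociation in C (the bounds and the names base_a and add_row are illustrative): the constant part folds at compile time, the invariant part is hoisted out of the j loop, and only j*w changes per iteration.

    /* Illustrative bounds for a[lo1..hi1, lo2..hi2] of eltype. */
    enum { lo1 = 1, hi1 = 10, lo2 = 1, hi2 = 20 };
    typedef double eltype;
    #define W ((long)sizeof(eltype))      /* w: element size in bytes */

    void add_row(char *base_a, long i, eltype b) {
        long c   = -((long)lo1 * (hi2 - lo2 + 1) + lo2) * W;  /* compile-time constant */
        long inv = (long)(hi2 - lo2 + 1) * i * W;             /* hoisted: loop-invariant */
        for (long j = lo2; j <= hi2; j++) {
            eltype *p = (eltype *)(base_a + c + inv + j * W); /* + variable part j*w */
            *p = b + *p;                                      /* a[i,j] := b + a[i,j] */
        }
    }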

Algebraic reassociations of floating-point expressions
Can be applied only in rare cases: floating-point arithmetic is not associative, and rounding and exception behavior must be preserved.

Algebraic reassociations: applicability
- Integers: must respect overflows and exceptions
- Addressing expressions: always applicable
- Floating-point expressions: only in rare cases, because of overflows and exceptions

Value numbering
Determines that two computations are equivalent and eliminates one of them. Related optimizations:
- Sparse conditional constant propagation
- Common-subexpression elimination
- Partial-redundancy elimination

Value numbering
Example:

    read(i)          read(i)
    j := i+1         t := i+1
    k := i      =>   j := t
    l := k+1         k := i
                     l := t

For contrast, an example that value numbering cannot handle but constant propagation can:

    i := 2           i := 2
    j := i*2    =>   j := 2*2
    k := i+2         k := 2+2

Value numbering: algorithms
Basic principle:
- Associate a value number with each computation
- Any two computations with the same value number always compute the same result
Scope:
- Basic blocks (also extended basic blocks)
- Global form (Alpern, Wegman, and Zadeck): requires the procedure to be in SSA form
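
A minimal sketch of the basic-block form in C (the linear table and all names are illustrative; real implementations hash the triples): each computation is keyed by its operator and the value numbers of its operands, and a repeated key means the earlier result can be reused. A separate name-to-value-number map, updated at each assignment, supplies the operand numbers.

    #define MAXVN 256

    /* An operator plus the value numbers of its operands identifies
       a computation. */
    typedef struct { char op; int left, right; } Key;

    static Key table[MAXVN];   /* table[vn] = computation that produced vn */
    static int nvn;            /* value numbers handed out so far */

    /* Return the value number of "left op right"; an existing number
       means an equivalent computation already occurred in the block. */
    int value_number(char op, int left, int right) {
        for (int vn = 0; vn < nvn; vn++)
            if (table[vn].op == op && table[vn].left == left &&
                table[vn].right == right)
                return vn;                     /* reuse earlier result */
        table[nvn] = (Key){ op, left, right };
        return nvn++;                          /* fresh value number */
    }

In the example above, k := i gives k the same value number as i, so k+1 and i+1 form the same key and l can reuse t.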

Value numbering: global form
Two variables are congruent if:
- their defining statements have identical operators, and
- the corresponding operands are congruent

Copy propagation
A transformation that, given an assignment x := y, replaces later uses of x with y, as long as the intervening instructions have changed neither x nor y. Related optimization: register coalescing (Section 16.3).

Copy propagation
Can be divided into a local and a global phase:
- Local phase: operates within individual basic blocks
- Global phase: operates across the whole flowgraph

Copy propagation
Example (a branch on c > b with outcomes Y and N):

    Before:                  After copy propagation:
    b := a                   b := a
    c := 4*b                 c := 4*a
    if c > b                 if c > a
      Y: d := b+2              Y: d := a+2
      N: e := a+b              N: e := a+a

Copy propagation
There is an O(n) algorithm for local copy propagation. Input: the MIR instructions Block[m][1], …, Block[m][n] of basic block m.
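
A simplified sketch of the local phase in C, built around a table of available copies (the name acp echoes the ACP set of the O(n) algorithm; this linear-kill version is illustrative and not itself O(n)):

    #define NVARS 64

    /* acp[x] == y means the copy "x := y" is currently available;
       acp[x] == x means no copy for x is available. */
    static int acp[NVARS];

    void acp_init(void) {
        for (int v = 0; v < NVARS; v++)
            acp[v] = v;
    }

    /* A use of variable u is rewritten to the variable it copies. */
    int propagate_use(int u) { return acp[u]; }

    /* Process "dst := src"; pass src = -1 for a non-copy right side. */
    void process_assign(int dst, int src) {
        for (int v = 0; v < NVARS; v++)   /* dst changes: kill copies involving dst */
            if (acp[v] == dst)
                acp[v] = v;
        acp[dst] = (src >= 0) ? acp[src] : dst;
    }

Within the first block of the example above, b := a records acp[b] = a, so c := 4*b and the test c > b become c := 4*a and c > a; the global phase extends the rewriting into the branch targets.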

Sparse conditional constant propagation
Constant propagation is a transformation that, given an assignment x := c for a constant c, replaces later uses of x with c, as long as the intervening instructions have not changed the value of x. Particularly important for RISC architectures.

Sparse conditional constant propagation
Example (the same flowgraph as before, with b set to a constant):

    Before:                  After constant propagation:
    b := 3                   b := 3
    c := 4*b                 c := 4*3
    if c > b                 if c > 3
      Y: d := b+2              Y: d := 3+2
      N: e := a+b              N: e := a+3

Sparse conditional constant propagation: algorithms
- Due to Wegman and Zadeck, in two variants: one on SSA form (more efficient) and one without SSA form
- Running time is O(n)
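
The heart of the algorithm is a three-level lattice per SSA value; a minimal sketch of the lattice and its meet operation in C (names illustrative; the worklist machinery over the SSA graph is omitted):

    /* Each SSA value is TOP (no evidence yet), a known constant, or
       BOTTOM (proven non-constant). */
    typedef enum { TOP, CONSTANT, BOTTOM } Height;
    typedef struct { Height h; long value; } LatVal;

    /* Combine two values that flow together, e.g. at a phi-node.
       Values can only move downward (TOP -> CONSTANT -> BOTTOM),
       which bounds the total work and keeps the algorithm O(n). */
    LatVal meet(LatVal a, LatVal b) {
        if (a.h == TOP) return b;
        if (b.h == TOP) return a;
        if (a.h == CONSTANT && b.h == CONSTANT && a.value == b.value)
            return a;
        return (LatVal){ BOTTOM, 0 };
    }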