Computer Science 313 – Advanced Programming Topics

Optimization Speed
- Global optimizations improve whole methods
  - Some go further and look at entire programs
  - Can provide big improvements if the code is written well
  - But they take a lot of time and may not pay off in Java
- Optimizing basic blocks keeps the analysis simple
  - Lets the compiler assume all instructions are executed
  - Local optimizations were created first, so they are much simpler to develop
  - They are quicker and often improve performance decently

Value Numbering
- Value numbering is the biggest local optimization
  - The approach was created by Cocke & Schwartz in 1970
  - Eliminates common subexpressions found in the code
  - Finds and "folds" constants so the code is simplified
- Global CSE is similar, but it uses the entire method
  - Based upon value numbering, but adds deadness information
  - SSA form is not required, but it can improve the results
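To make those two effects concrete, here is a small before/after sketch (my own example, not from the slides) of what eliminating a common subexpression and folding a constant do to a straight-line block; the names and numbers are invented for illustration.

// A hand-written illustration of common subexpression elimination
// and constant folding within a single basic block.
class CseFoldingDemo {
    static int before(int x, int y) {
        int a = x * y + 3 + 4;   // 3 + 4 not yet folded
        int b = x * y * 2;       // x * y computed a second time
        return a + b;
    }

    static int after(int x, int y) {
        int t1 = x * y;          // common subexpression computed once
        int a = t1 + 7;          // 3 + 4 folded to the constant 7
        int b = t1 * 2;          // reuses t1
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(before(6, 7) == after(6, 7));   // prints true
    }
}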

Before Value Numbering...
- Complex lines are rewritten as simple instructions
  - Lines are split up by adding temporary variables
  - Simple operations are needed by the processor anyway

j = Math.pow((k + 1) * 5, i.foo);

becomes

t1 = k + 1;
t2 = t1 * 5;
t3 = address of i;
t4 = t3 + offset of field foo;
t5 = *t4;
j = Math.pow(t2, t5);

Before Value Numbering...
- Once done, every expression should be written in one of these forms:

result ← leftOperand OP rightOperand
result ← value
methodCall(param1, param2, param3, param4)

Key Ideas Used
- Use maps associating each expression with a number
  - If two expressions map to the same number, they are equal
  - Available expressions are reused, not recalculated
  - The number itself is meaningless; it is just very easy to compare
- Create a tuple representing each expression
  - A tag specifies whether it is a constant, a value, or a calculation
  - Needs a map from expression to object to work (see the sketch below)
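As a rough picture of the bookkeeping these bullets describe, here is a hedged Java sketch of the expression tuple and the two maps. The class and member names (ValueNumber, Kind, exprToNumber, varToNumber, freshNumber) are my own invention for illustration, not names used in the course.

import java.util.HashMap;
import java.util.Map;

// Hypothetical tuple: one "value number" per distinct run-time value.
class ValueNumber {
    enum Kind { CONSTANT, VALUE, CALCULATION }   // the tag on each tuple

    final int number;        // meaningless id; equal numbers mean equal values
    final Kind kind;
    final Integer constant;  // filled in only when kind == CONSTANT

    ValueNumber(int number, Kind kind, Integer constant) {
        this.number = number;
        this.kind = kind;
        this.constant = constant;
    }

    boolean isConstant() { return kind == Kind.CONSTANT; }
}

// Hypothetical maps used while scanning a basic block.
class ValueNumberTables {
    // Expression key (e.g. "4", or "2 * 3" over value numbers) -> its value number.
    final Map<String, ValueNumber> exprToNumber = new HashMap<>();
    // Variable name -> the value number it currently holds.
    final Map<String, ValueNumber> varToNumber = new HashMap<>();
    private int nextNumber = 1;

    ValueNumber freshNumber(ValueNumber.Kind kind, Integer constant) {
        return new ValueNumber(nextNumber++, kind, constant);
    }
}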

Value Numbering Outline
If the instruction is a simple assignment (result ← value) then
    Retrieve the number for value (if it was not there, add it)
    Update the maps so result is mapped to getNumber(value)
Else if the instruction is an assignment (result ← leftOperand OP rightOperand) then
    lNum ← number for leftOperand (if it was not there, add it)
    If lNum.isConstant() then leftOperand ← lNum.value
    rNum ← number for rightOperand (if it was not there, add it)
    If rNum.isConstant() then rightOperand ← rNum.value
    If lNum.isConstant() && rNum.isConstant() then
        Update the maps so result is mapped to evaluate(lNum, OP, rNum)
        Rewrite the instruction as "result ← " + evaluate(lNum, OP, rNum)
    Else if mappedToNumber(lNum, OP, rNum) then
        Update the maps so result is mapped to getNumber(lNum, OP, rNum)
        Rewrite the instruction as "result ← " + getNumber(lNum, OP, rNum).value
    Else
        Update the maps so result is mapped to newNumber(lNum, OP, rNum)
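The outline maps fairly directly onto code. Below is a hedged Java sketch of the binary-assignment case only, reusing the hypothetical ValueNumber and ValueNumberTables classes sketched earlier; the Instr representation and the helper names are likewise invented for illustration and are not the course's implementation.

// Sketch of one step of local value numbering for "result <- left OP right".
class Instr {
    String result, left, op, right;   // e.g. result = "t2", left = "5", op = "*", right = "a"
    Instr(String result, String left, String op, String right) {
        this.result = result; this.left = left; this.op = op; this.right = right;
    }
}

class LocalValueNumbering {
    final ValueNumberTables tables = new ValueNumberTables();

    // Return the value number of an operand (constant literal or variable),
    // creating a fresh number the first time the operand is seen.
    ValueNumber numberFor(String operand) {
        boolean isConst = operand.chars().allMatch(Character::isDigit);
        if (isConst) {
            return tables.exprToNumber.computeIfAbsent(operand,
                key -> tables.freshNumber(ValueNumber.Kind.CONSTANT, Integer.valueOf(key)));
        }
        return tables.varToNumber.computeIfAbsent(operand,
            key -> tables.freshNumber(ValueNumber.Kind.VALUE, null));
    }

    void visitBinary(Instr instr) {
        ValueNumber lNum = numberFor(instr.left);
        if (lNum.isConstant()) instr.left = lNum.constant.toString();    // constant propagation
        ValueNumber rNum = numberFor(instr.right);
        if (rNum.isConstant()) instr.right = rNum.constant.toString();

        if (lNum.isConstant() && rNum.isConstant()) {
            // Constant folding: evaluate now and rewrite as a simple assignment.
            int value = evaluate(lNum.constant, instr.op, rNum.constant);
            ValueNumber folded = tables.exprToNumber.computeIfAbsent(Integer.toString(value),
                key -> tables.freshNumber(ValueNumber.Kind.CONSTANT, value));
            tables.varToNumber.put(instr.result, folded);
            instr.left = Integer.toString(value);
            instr.op = null;
            instr.right = null;
        } else {
            // Common subexpression elimination: reuse an existing number if possible.
            String key = lNum.number + " " + instr.op + " " + rNum.number;
            ValueNumber existing = tables.exprToNumber.get(key);
            if (existing != null) {
                // A fuller version would also rewrite the instruction to copy
                // from a variable already known to hold this value number.
                tables.varToNumber.put(instr.result, existing);
            } else {
                ValueNumber fresh = tables.freshNumber(ValueNumber.Kind.CALCULATION, null);
                tables.exprToNumber.put(key, fresh);
                tables.varToNumber.put(instr.result, fresh);
            }
        }
    }

    static int evaluate(int left, String op, int right) {
        switch (op) {
            case "+": return left + right;
            case "-": return left - right;
            case "*": return left * right;
            default:  return left / right;
        }
    }
}

Under these assumptions, processing t2 ← 5 * a right after a simple-assignment handler has mapped a to the constant 4 would fold the multiplication to 20, matching the rewritten example later in these notes.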

Result of This Pass
- Automatically expands & evaluates constants
  - Propagates them through the block whenever possible
- Ensures the block computes each value only once
  - Changes the other computations into assignments
  - Simplifies the code to use only the original assignment
  - Dead assignments are eliminated in a second pass
- The programmer's code is not a major limitation
  - Lines are split into simple instructions before starting
  - Makes it possible to replace a portion of a line
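The slides do not show that clean-up pass, so here is a hedged sketch of one way it could look: a backward walk over the block that drops assignments to compiler temporaries that are never read later. The array-of-strings instruction encoding and the "t" naming convention for temporaries are simplifying assumptions of this sketch.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical second pass: remove dead assignments to temporaries.
// Each instruction is {result, operand1, operand2}; unused operands are null.
// Assignments to non-temporaries are kept, since they may be live after the block.
class DeadAssignmentPass {
    static List<String[]> run(List<String[]> block) {
        Set<String> readLater = new HashSet<>();
        List<String[]> kept = new ArrayList<>();
        for (int i = block.size() - 1; i >= 0; i--) {
            String[] instr = block.get(i);
            boolean dead = instr[0].startsWith("t") && !readLater.contains(instr[0]);
            if (!dead) {
                kept.add(0, instr);                        // keep the instruction
                if (instr[1] != null) readLater.add(instr[1]);
                if (instr[2] != null) readLater.add(instr[2]);
            }
        }
        return kept;
    }
}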

Example
Original:
a ← 4
k ← (i * j) + 5
m ← 5 * a * k
n ← i
b ← n * j + i * a

Simplified:
a ← 4
t1 ← i * j
k ← t1 + 5
t2 ← 5 * a
m ← t2 * k
n ← i
t3 ← n * j
t4 ← i * a
b ← t3 + t4

Example
Numbered (nothing has been processed yet):
a ← 4
t1 ← i * j
k ← t1 + 5
t2 ← 5 * a
m ← t2 * k
n ← i
t3 ← n * j
t4 ← i * a
b ← t3 + t4

Maps (still empty):
#   Type   Value

Example
Numbered (value numbers shown in parentheses):
a(1) ← 4(1)
t1(4) ← i(2) * j(3)
k(6) ← t1(4) + 5(5)
t2(7) ← 5(5) * a(1)
m(8) ← t2(7) * k(6)
n(2) ← i(2)
t3(4) ← n(2) * j(3)
t4(9) ← i(2) * a(1)
b(10) ← t3(4) + t4(9)

Rewritten:
a ← 4
t1 ← i * j
k ← t1 + 5
t2 ← 20
m ← 20 * k
n ← i
t3 ← t1
t4 ← i * 4
b ← t1 + t4
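Reading the value numbers off the slide above, the maps that drive this rewrite would presumably end up with entries along the following lines (my reconstruction; the transcript shows only the column headings):

#    Type          Value
1    constant      4
2    value         i
3    value         j
4    calculation   #2 * #3   (i * j)
5    constant      5
6    calculation   #4 + #5   (t1 + 5)
7    constant      20        (folded from 5 * 4)
8    calculation   #7 * #6   (20 * k)
9    calculation   #2 * #1   (i * a)
10   calculation   #4 + #9   (t1 + t4)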

For Next Class
- Lab due Friday before noon
  - Please, please, please do not wait until the last minute…
  - The design is hard, but the code is very simple
  - Give yourself time to think and ask questions
- Read up on our last design pattern
  - Frequently used for distributed coding
  - Multi-tier processing uses it as well
  - Becoming increasingly common to see this pattern