Supercomputing and Science
An Introduction to High Performance Computing
Part IV: Dependency Analysis and Stupid Compiler Tricks
Henry Neeman, Director
OU Supercomputing Center for Education & Research

Outline
- Dependency Analysis
  - What is Dependency Analysis?
  - Control Dependencies
  - Data Dependencies
- Stupid Compiler Tricks
  - Tricks the Compiler Plays
  - Tricks You Play With the Compiler
- Profiling

Dependency Analysis

What Is Dependency Analysis?
Dependency analysis is the determination of how different parts of a program interact, and how various parts require other parts in order to operate correctly.
A control dependency governs how different routines or sets of instructions affect each other.
A data dependency governs how different pieces of data affect each other.
Much of this discussion is from reference [1].

Control Dependencies
Every program has a well-defined flow of control. This flow can be affected by several kinds of operations:
- Loops
- Branches (if, select case/switch)
- Function/subroutine calls
- I/O (typically implemented as calls)
Dependencies affect parallelization!

Branch Dependency Example
    y = 7
    IF (x /= 0) THEN
      y = 1.0 / x
    END IF !! (x /= 0)
The value of y depends on what the condition (x /= 0) evaluates to:
- If the condition evaluates to .TRUE., then y is set to 1.0/x.
- Otherwise, y remains 7.

Loop Dependency Example
    DO index = 2, length
      a(index) = a(index-1) + b(index)
    END DO !! index = 2, length
Here, each iteration of the loop depends on the previous iteration: iteration index=3 depends on iteration index=2, iteration index=4 depends on index=3, iteration index=5 depends on index=4, and so on. This is sometimes called a loop carried dependency.

Why Do We Care?
Loops are the favorite control structures of High Performance Computing, because compilers know how to optimize them using instruction-level parallelism: superscalar and pipelining give excellent speedup.
Loop-carried dependencies affect whether a loop can be parallelized, and how much.
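To make the contrast concrete, here is a minimal sketch in C (hypothetical function and array names, not from the slides). The first loop's iterations are independent, so the compiler is free to pipeline or vectorize them; the second carries a value from each iteration to the next, which largely serializes it.

    /* Independent iterations: nothing in iteration i needs any
       other iteration, so the compiler can overlap them freely. */
    void independent(float *a, const float *b, const float *c, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] = b[i] + c[i];
    }

    /* Loop-carried dependency: iteration i needs the value that
       iteration i-1 just produced, so the adds must run in order. */
    void carried(float *a, const float *b, int n)
    {
        for (int i = 1; i < n; i++)
            a[i] = a[i-1] + b[i];
    }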

Loop or Branch Dependency?
Is this a loop carried dependency or an IF dependency?
    DO index = 1, length
      IF (x(index) /= 0) THEN
        y(index) = 1.0 / x(index)
      END IF !! (x(index) /= 0)
    END DO !! index = 1, length

Call Dependency Example
    x = 5
    y = myfunction(7)
    z = 22
The flow of the program is interrupted by the call to myfunction, which takes the execution to somewhere else in the program.

I/O Dependency
    x = a + b
    PRINT *, x
    y = c + d
Typically, I/O is implemented by implied subroutine calls, so we can think of this as a call dependency.

Reductions
    sum = 0
    DO index = 1, length
      sum = sum + array(index)
    END DO !! index = 1, length
Other kinds of reductions: product, .AND., .OR., minimum, maximum, number of occurrences, etc.
Reductions are so common that hardware and compilers are optimized to handle them.
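An aside that is not on the slides: the loop-carried dependency on sum can be relaxed by keeping several partial sums, which is essentially what optimizing compilers do when permitted. A sketch in C (hypothetical names; note that reassociating floating-point additions can change rounding, so compilers usually need an explicit flag before they will do this):

    /* Reduction with four independent partial sums: the four
       accumulators never wait on each other, so their additions
       can overlap in the pipeline. */
    double sum_reduction(const double *array, int length)
    {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        int index = 0;
        for (; index + 3 < length; index += 4) {
            s0 += array[index];
            s1 += array[index + 1];
            s2 += array[index + 2];
            s3 += array[index + 3];
        }
        for (; index < length; index++)   /* leftover elements */
            s0 += array[index];
        return (s0 + s1) + (s2 + s3);
    }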

Data Dependencies
    a = x + y + COS(z)
    b = a * c
The value of b depends on the value of a, so these two statements must be executed in order.

Output Dependencies
    x = a / b
    y = x + 2
    x = d - e
Notice that x is assigned two different values, but only one of them is retained after these statements. In this context, the final value of x is the “output.” Again, we are forced to execute in order.

Why Does Order Matter?
Dependencies can affect whether we can execute a particular part of the program in parallel. If we cannot execute that part of the program in parallel, then it’ll be SLOW.

Loop Dependency Example
    if ((dst == src1) && (dst == src2)) {
        for (index = 1; index < length; index++) {
            dst[index] = dst[index-1] + dst[index];
        } /* for index */
    } /* if ((dst == src1) && (dst == src2)) */
    else if (dst == src1) {
        for (index = 1; index < length; index++) {
            dst[index] = dst[index-1] + src2[index];
        } /* for index */
    } /* if (dst == src1) */
    …

Loop Dep Example (cont’d)
    …
    else {
        for (index = 1; index < length; index++) {
            dst[index] = src1[index-1] + src2[index];
        } /* for index */
    } /* if (dst == src2)...else */
The various versions of the loop either do or do not have loop-carried dependencies, depending on which arrays overlap.

Loop Dep Performance
(Performance chart not reproduced in this transcript.)

Stupid Compiler Tricks

Stupid Compiler Tricks
- Tricks Compilers Play
  - Scalar Optimizations
  - Loop Optimizations
  - Inlining
- Tricks You Can Play with Compilers

Compiler Design
The people who design compilers have a lot of experience working with the languages commonly used in High Performance Computing:
- Fortran: 40ish years
- C: 30ish years
- C++: 15ish years, plus C experience
So, they’ve come up with clever ways to make programs run faster.

Tricks Compilers Play

Scalar Optimizations
- Copy Propagation
- Constant Folding
- Dead Code Removal
- Strength Reduction
- Common Subexpression Elimination
- Variable Renaming
Not every compiler does all of these, so it sometimes can be worth doing these by hand.
Much of this discussion is from [2].

Copy Propagation
Before (has a data dependency):
    x = y
    z = 1 + x
After compilation (no data dependency):
    x = y
    z = 1 + y

Constant Folding
Before:
    add = 100
    aug = 200
    sum = add + aug
After:
    sum = 300
Notice that sum is actually the sum of two constants, so the compiler can precalculate it, eliminating the addition that otherwise would be performed at runtime.

Dead Code Removal
Before:
    var = 5
    PRINT *, var
    STOP
    PRINT *, var * 2
After:
    var = 5
    PRINT *, var
    STOP
Since the last statement never executes, the compiler can eliminate it.

Strength Reduction
Before:
    x = y ** 2.0
    a = c / 2.0
After:
    x = y * y
    a = c * 0.5
Raising one value to the power of another, or dividing, is more expensive than multiplying. If the compiler can tell that the power is a small integer, or that the denominator is a constant, it’ll use multiplication instead.

Common Subexpressions
Before:
    d = c * (a + b)
    e = (a + b) * 2.0
After:
    aplusb = a + b
    d = c * aplusb
    e = aplusb * 2.0
The subexpression (a+b) occurs in both assignment statements, so there’s no point in calculating it twice.

Variable Renaming
Before:
    x = y * z
    q = r + x * 2
    x = a + b
After:
    x0 = y * z
    q = r + x0 * 2
    x = a + b
The original code has an output dependency, while the new code doesn’t, but the final value of x is still correct.

Loop Optimizations
- Hoisting and Sinking
- Induction Variable Simplification
- Iteration Peeling
- Loop Interchange
- Unrolling
Not every compiler does all of these, so it sometimes can be worth doing these by hand.
Much of this discussion is from [3].

Hoisting and Sinking
Before:
    DO i = 1, n
      a(i) = b(i) + c * d   !! c * d is loop invariant: hoist it
      e = g(n)              !! so is e = g(n): sink it
    END DO !! i = 1, n
After:
    temp = c * d
    DO i = 1, n
      a(i) = b(i) + temp
    END DO !! i = 1, n
    e = g(n)
Code that doesn’t change inside the loop is called loop invariant.

Induction Variable Simplification
Before:
    DO i = 1, n
      k = i*4 + m
      …
    END DO
After:
    k = m
    DO i = 1, n
      k = k + 4
      …
    END DO
One operation can be cheaper than two. On the other hand, this strategy can create a new dependency.

Iteration Peeling
Before:
    DO i = 1, n
      IF ((i == 1) .OR. (i == n)) THEN
        x(i) = y(i)
      ELSE
        x(i) = y(i + 1) + y(i - 1)
      END IF
    END DO
After (the first and last iterations are peeled out, so the loop runs from 2 to n-1):
    x(1) = y(1)
    DO i = 2, n - 1
      x(i) = y(i + 1) + y(i - 1)
    END DO
    x(n) = y(n)

Loop Interchange
Before:
    DO i = 1, ni
      DO j = 1, nj
        a(i,j) = b(i,j)
      END DO !! j
    END DO !! i
After:
    DO j = 1, nj
      DO i = 1, ni
        a(i,j) = b(i,j)
      END DO !! i
    END DO !! j
In Fortran, array elements a(i,j) and a(i+1,j) are near each other in memory, while a(i,j+1) may be far away, so it makes sense to make the i loop be the inner loop.
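A caveat that is not on the slide: the rule above is for Fortran, whose arrays are column-major. C arrays are row-major, meaning a[i][j] and a[i][j+1] are the adjacent elements, so in C the roles reverse and the j loop belongs innermost. A sketch with hypothetical sizes and names:

    #define NI 1024
    #define NJ 1024

    /* Row-major C: the rightmost index varies fastest in memory,
       so j should be the inner loop for stride-1 access. */
    void copy2d(double a[NI][NJ], double b[NI][NJ])
    {
        for (int i = 0; i < NI; i++)
            for (int j = 0; j < NJ; j++)
                a[i][j] = b[i][j];
    }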

Unrolling
Before:
    DO i = 1, n
      a(i) = a(i) + b(i)
    END DO !! i
After:
    DO i = 1, n, 4
      a(i)   = a(i)   + b(i)
      a(i+1) = a(i+1) + b(i+1)
      a(i+2) = a(i+2) + b(i+2)
      a(i+3) = a(i+3) + b(i+3)
    END DO !! i
As shown, the unrolled loop assumes n is a multiple of 4; compilers also generate a cleanup loop for the leftover iterations. You generally shouldn’t unroll by hand.

Why Do Compilers Unroll?
We saw last time that a loop with a lot of operations gets better performance (up to some point), especially if there are lots of arithmetic operations but few main memory loads and stores.
Unrolling creates multiple operations that typically load from the same, or adjacent, cache lines. So, an unrolled loop has more operations without increasing the memory accesses much.

Inlining
Before:
    DO i = 1, n
      a(i) = func(i)
    END DO
    …
    REAL FUNCTION func (x)
      …
      func = x * 3
    END FUNCTION func
After:
    DO i = 1, n
      a(i) = i * 3
    END DO
When a function or subroutine is inlined, its contents are transferred directly into the calling routine, eliminating the overhead of making the call.

Tricks You Can Play with Compilers

The Joy of Compiler Options
Every compiler has a different set of options that you can set. Among these are options that control single processor optimization: superscalar, pipelining, vectorization, scalar optimizations, loop optimizations, inlining and so on.

Example Compile Lines
- SGI Origin2000:      f90 -Ofast -ipa
- Sun UltraSPARC:      f90 -fast
- Cray J90:            f90 -O 3,aggress,pattern,recurrence
- Portland Group f90:  pgf90 -O2 -Mdalign -Mvect=assoc
- NAG f95:             f95 -O4 -Ounsafe -ieee=nonstd

What Does the Compiler Do?
Example: NAG f95 compiler
    f95 -O source.f90
Possible levels are -O0, -O1, -O2, -O3, -O4:
- -O0: No optimisation.
- -O1: Minimal quick optimisation.
- -O2: Normal optimisation.
- -O3: Further optimisation.
- -O4: Maximal optimisation. [4]
The man page is pretty cryptic.
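A quick way to see what a given level buys you, as a hedged sketch (assuming a Unix shell with the standard time command; the output names are hypothetical):

    f95 -O0 source.f90 -o program_O0
    f95 -O4 source.f90 -o program_O4
    time ./program_O0
    time ./program_O4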

Optimization Performance
(Performance chart not reproduced in this transcript.)

More Optimized Performance
(Performance chart not reproduced in this transcript.)

Profiling

Profiling
Profiling means collecting data about how a program executes. The two major kinds of profiling are:
- Subroutine profiling
- Hardware timing

Subroutine Profiling
Subroutine profiling means finding out how much time is spent in each routine. Typically, a program spends 90% of its runtime in 10% of the code. Subroutine profiling tells you what parts of the program to spend time optimizing and what parts you can ignore.

Profiling Example
On an SGI Origin2000:
    f90 -Ofast -g3 …
The -g3 option tells the compiler to set the executable up to collect profiling information.
    ssrun -fpcsampx executable
This command generates a file named executable.m#### (the number is the process number of the run).

Profiling Example (cont’d)
When the run has completed, ssrun has generated the .m#### file. Then:
    prof executable.m####
produces a list of all of the routines and how much time was spent in each.

Profiling Result
    cum.%   function
    29.1%   complib_sgemm_
    53.7%   SGEMTV_AXPY
    72.9%   SGEMV_DOT
    80.3%   COMPLIB_SGEMM_HOISTC
    85.3%   _sd2uge
    88.2%   funds
    ...
(The seconds, per-routine percentage, and sample-count columns did not survive the transcript.)
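As an aside, the same workflow on a generic Unix system looks like this (a sketch assuming the GNU toolchain, whose -pg flag and gprof tool play the roles of -g3, ssrun, and prof):

    cc -O2 -pg program.c -o program    # instrument for profiling
    ./program                          # the run writes gmon.out
    gprof program gmon.out             # per-routine time report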

Hardware Timing
In addition to learning about which routines dominate in your program, you might also want to know how the hardware behaves; e.g., you might want to know how often you get a cache miss. Many supercomputer CPUs have special hardware that measures such events, called event counters.

Hardware Timing Example
On an SGI Origin2000:
    perfex -x -a executable
This command produces a list of hardware counts.

Hardware Timing Results
The events counted include:
- Cycles
- Decoded instructions
- Decoded loads
- Decoded stores
- Graduated floating point instructions
- Primary data cache misses
- Secondary data cache misses
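On present-day Linux systems, the analogous counters are read with perf (an aside, assuming perf is installed):

    perf stat ./executable       # cycle, instruction, and branch counts
    perf stat -d ./executable    # adds cache statistics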

Next Time
Part V: Shared Memory Multiprocessing

References
[1] Kevin Dowd and Charles Severance, High Performance Computing, 2nd ed. O’Reilly, 1998.
[2] Ibid.
[3] Ibid.
[4] NAG f95 man page.