Code Optimization II: Machine Dependent Optimizations


Code Optimization II: Machine Dependent Optimizations
Topics
  Machine-Dependent Optimizations
    Pointer code
    Unrolling
    Enabling instruction-level parallelism
  Understanding Processor Operation
    Translation of instructions into operations
    Out-of-order execution
    Superscalar execution
  Branches and Branch Prediction

Review of “combine” optimizations

Previous Best Combining Code

  void combine4(vec_ptr v, int *dest)
  {
      int i;
      int length = vec_length(v);
      int *data = get_vec_start(v);
      int sum = 0;
      for (i = 0; i < length; i++)
          sum += data[i];
      *dest = sum;
  }

Static optimizations so far
  Code motion on vec_length, removing it from the loop
  Direct data access rather than calling get_vec_element
  Use a local variable for better register usage
CPE at -O2: 31 → 2 (15x faster!)
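For context, these examples rely on a simple vector abstraction. Here is a minimal sketch of what it might look like; the struct layout below is an assumption in the style of CS:APP, not code taken from these slides:

  /* Hypothetical vector ADT behind vec_ptr, vec_length, get_vec_start */
  typedef struct {
      int len;      /* number of elements */
      int *data;    /* pointer to the element array */
  } vec_rec, *vec_ptr;

  int  vec_length(vec_ptr v)    { return v->len; }
  int *get_vec_start(vec_ptr v) { return v->data; }

Because combine4 calls vec_length and get_vec_start once, outside the loop, their cost no longer scales with the vector length.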

Generic Form of combine4

  void combine4(vec_ptr v, data_t *dest)
  {
      int i;
      int length = vec_length(v);
      data_t *data = get_vec_start(v);
      data_t t = IDENT;
      for (i = 0; i < length; i++)
          t = t OPER data[i];
      *dest = t;
  }

Data Types
  Use different declarations for data_t: int, float, double
Operations
  Use different definitions of OPER and IDENT:
    + / 0 (sum)
    * / 1 (product)
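One way to make the generic form compile is to fix data_t, OPER, and IDENT at build time. This is an illustrative sketch of that setup, not code from the lecture:

  /* Instantiate combine4 for the integer product (illustrative). */
  typedef int data_t;
  #define IDENT 1     /* identity element: 0 for sum, 1 for product */
  #define OPER  *     /* combining operator: + for sum, * for product */

Recompiling with different choices of data_t, OPER, and IDENT yields the sum and product variants whose CPEs are compared below.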

Machine Independent Results
Optimizations
  Reduce function calls and memory references within the loop
Performance Anomaly
  Computing the FP product of all elements was exceptionally slow
  Very large speedup when accumulating in a temporary
  Caused by a quirk of IA32 floating point: memory uses a 64-bit format, while registers use 80 bits
  The benchmark data caused overflow of 64 bits, but not of 80
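A small demonstration of this precision gap (my own example, assuming an x87-style platform where long double is the 80-bit extended format; on compilers where long double is just double, both lines print inf):

  #include <stdio.h>

  int main(void) {
      double d = 1.0;
      long double ld = 1.0L;   /* 80-bit extended precision on x87 */
      for (int i = 0; i < 200; i++) {
          d  *= 1e10;          /* exceeds double's max (~1.8e308): overflows to inf */
          ld *= 1e10L;         /* well within extended range (~1.2e4932) */
      }
      printf("64-bit accumulator: %g\n", d);    /* inf */
      printf("80-bit accumulator: %Lg\n", ld);  /* 1e+2000 */
      return 0;
  }

This mirrors the benchmark anomaly: a running product kept in a register (80 bits) avoids the overflow that occurs whenever it is spilled to memory (64 bits).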

Pointers vs. Arrays

Pointer Code

  void combine4p(vec_ptr v, int *dest)
  {
      int length = vec_length(v);
      int *data = get_vec_start(v);
      int *dend = data + length;
      int sum = 0;
      while (data < dend) {
          sum += *data;
          data++;
      }
      *dest = sum;
  }

Optimization
  Use pointers rather than array references
  CPE: 3.00 vs. 2.00 for the array version (compiled -O2)
  Oops! We're not making progress here!
Warning: some compilers do a better job of optimizing array code

Pointer vs. Array Code Inner Loops

Array code (2 cycles per iteration):
  .L24:                            # Loop:
      addl (%eax,%edx,4),%ecx      # sum += data[i]
      incl %edx                    # i++
      cmpl %esi,%edx               # i:length
      jl .L24                      # if < goto Loop

Pointer code (3 cycles per iteration):
  .L30:                            # Loop:
      addl (%eax),%ecx             # sum += *data
      addl $4,%eax                 # data++
      cmpl %edx,%eax               # data:dend
      jb .L30                      # if < goto Loop

Performance
  Array code = 2 cycles, pointer code = 3 cycles
  For a deeper understanding we need in-depth knowledge of the processor

CPU Design

Classic CPU Design
Fetch-Decode-Execute cycle
  Fetch the instruction from memory
  Decode the instruction
  Execute the decoded instruction
Pipelining
  Assembly-line model: the processor is split into functional units
  Functional units can work in parallel
  Overlap fetch-decode-execute cycles
Superscalar
  Execute multiple operations per cycle
Out-of-order execution
  Re-order instructions for efficiency
  Dependencies must be observed
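To make the pipeline overlap concrete, here is a toy simulation (mine, not from the lecture) that prints which instruction occupies each stage of a 3-stage fetch/decode/execute pipeline on every cycle:

  #include <stdio.h>

  #define STAGES 3   /* Fetch, Decode, Execute */
  #define INSTRS 5

  int main(void) {
      const char *stage_name[STAGES] = {"Fetch", "Decode", "Execute"};
      /* Instruction i enters Fetch at cycle i and Execute at cycle i+2,
         so one instruction completes per cycle once the pipeline fills. */
      for (int cycle = 0; cycle < INSTRS + STAGES - 1; cycle++) {
          printf("cycle %d:", cycle);
          for (int s = STAGES - 1; s >= 0; s--) {
              int instr = cycle - s;      /* which instruction is in stage s */
              if (instr >= 0 && instr < INSTRS)
                  printf("  %s=I%d", stage_name[s], instr);
          }
          printf("\n");
      }
      return 0;
  }

After the first two fill cycles, all three stages are busy every cycle, which is why pipelining raises throughput without changing any single instruction's latency.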

Modern CPU Design
[Block diagram: an Instruction Control Unit (fetch control, instruction cache, instruction decode, retirement unit, register file) issues operations to the Execution Unit's functional units (integer/branch, general integer, FP add, FP mult/div, load, store); the load/store units exchange addresses and data with the data cache, and operation results plus branch-prediction outcomes flow back to instruction control.]

CPU Capabilities of Pentium III
Multiple instructions can execute in parallel:
  1 load
  1 store
  1 integer/branch
  1 general integer (including multiply/divide)
  1 FP addition
  1 FP multiplication or division
Some instructions take > 1 cycle, but can be pipelined:

  Instruction                 Latency   Cycles/Issue
  Load / Store                   3           1
  Integer Multiply               4           1
  Integer Divide                36          36
  Double/Single FP Multiply      5           2
  Double/Single FP Add           3           1
  Double/Single FP Divide       38          38

For example, independent FP multiplies can issue every 2 cycles, but a chain of dependent FP multiplies pays the full 5-cycle latency per operation.

Instruction Control Unit
Fetches instruction bytes from memory
  Based on the current PC + targets for predicted branches
  Hardware dynamically guesses whether branches are taken/not taken and (possibly) the branch target
Decodes instructions into operations
  Primitive steps required to perform the instruction
  A typical instruction requires 1-3 operations
Converts register references into tags
  An abstract identifier linking the destination of one operation with the sources of later operations

Translation Example
Version of the combine4 inner loop:

  .L24:                            # Loop:
      imull (%eax,%edx,4),%ecx     # t *= data[i]
      incl %edx                    # i++
      cmpl %esi,%edx               # i:length
      jl .L24                      # if < goto Loop

Translation of the first iteration:

  load (%eax,%edx.0,4)  →  t.1
  imull t.1, %ecx.0     →  %ecx.1
  incl %edx.0           →  %edx.1
  cmpl %esi, %edx.1     →  cc.1
  jl-taken cc.1

Note the tags. Registers %eax and %esi are not tagged. Why? (Their values do not change within the loop.) Let's examine the imull instruction in detail...

Translation Example - imull

  imull (%eax,%edx,4),%ecx   →   load (%eax,%edx.0,4)  →  t.1
                                 imull t.1, %ecx.0     →  %ecx.1

Split into two operations
  load reads from memory to generate the temporary result t.1
  The multiply operation then works only on registers
Operands
  Register %eax does not change in the loop; its value is retrieved from the register file during decoding
  Register %ecx changes on every iteration; the different versions are uniquely identified as %ecx.0, %ecx.1, %ecx.2, ... (register renaming)
  Values are passed directly from the producer (load) to the consumer (general integer functional unit)
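To make register renaming concrete, here is a toy software analogy (mine, not a description of the actual hardware): a table maps each architectural register to its most recent tag; source operands read the table, and each destination write allocates a fresh tag.

  #include <stdio.h>

  #define NREGS 8                 /* architectural registers */

  static int latest[NREGS];       /* most recent tag version per register */

  /* Source operand: read the current tag for register r. */
  static int read_tag(int r) { return latest[r]; }

  /* Destination operand: allocate the next tag version for register r. */
  static int write_tag(int r) { return ++latest[r]; }

  int main(void) {
      enum { ECX = 1, EDX = 2 };
      for (int iter = 1; iter <= 2; iter++) {
          int csrc = read_tag(ECX);
          int cdst = write_tag(ECX);   /* imull reads %ecx.csrc, writes %ecx.cdst */
          int dsrc = read_tag(EDX);
          int ddst = write_tag(EDX);   /* incl reads %edx.dsrc, writes %edx.ddst */
          printf("iter %d: imull %%ecx.%d -> %%ecx.%d, incl %%edx.%d -> %%edx.%d\n",
                 iter, csrc, cdst, dsrc, ddst);
      }
      return 0;
  }

Iteration 1 prints %ecx.0 -> %ecx.1 and %edx.0 -> %edx.1, matching the tagged operations above.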

Modern CPU Design
[The Modern CPU Design block diagram (instruction control unit + execution unit) is shown again.]

Translation Example - incl

  incl %edx   →   incl %edx.0  →  %edx.1

Register %edx changes on each iteration; rename it as %edx.0, %edx.1, %edx.2, ...

Translation Example - cmpl

  cmpl %esi,%edx   →   cmpl %esi, %edx.1  →  cc.1

Condition codes are treated similarly to registers
  Assign a tag to define the connection between producer and consumer

Translation Example - jl

  jl .L24   →   jl-taken cc.1

The instruction control unit determines the destination of the jump
  It predicts whether the jump will be taken, and its target
  It starts fetching instructions at the predicted destination
The execution unit simply checks whether or not the prediction was OK
  If not, it signals instruction control, which "invalidates" any operations generated from the misfetched instructions and begins fetching and decoding at the correct target
More in-depth discussion later in the lecture

Modern CPU Design
[The Modern CPU Design block diagram is shown again.]

Visualizing CPU Operations

Visualizing Operations (p. 403)

  load (%eax,%edx.0,4)  →  t.1
  imull t.1, %ecx.0     →  %ecx.1
  incl %edx.0           →  %edx.1
  cmpl %esi, %edx.1     →  cc.1
  jl-taken cc.1

[Dataflow diagram: time runs downward; the load result t.1 feeds imull, incl's %edx.1 feeds cmpl, and cmpl's cc.1 feeds jl.]
Operations
  Vertical position denotes the time at which an operation executes
  An operation cannot begin until its operands are available
  Height denotes latency (what would issue time look like?)
Operands
  Arcs are shown only for operands that are passed within the execution unit
Latencies: load = 3, imull = 4, incl/cmpl/jl = 1

Visualizing Operations (p. 403), integer add

  load (%eax,%edx.0,4)  →  t.1
  iaddl t.1, %ecx.0     →  %ecx.1
  incl %edx.0           →  %edx.1
  cmpl %esi, %edx.1     →  cc.1
  jl-taken cc.1

Operations
  Same as before, except that the add has a latency of 1

3 Iterations of Combining Product (p. 404)
Assume unlimited resources
  An operation can start as soon as its operands are available
  Operations for multiple iterations overlap in time
Performance
  The limiting factor becomes the latency of the integer multiplier: each imull must wait for the previous iteration's product
  Gives a CPE of 4.0 (one 4-cycle multiply per element)

Sum With Unlimited Resources (p. 405)
4 integer operations per cycle
Performance
  Can begin a new iteration on each clock cycle
  CPE = 1.0
  Requires 4 integer functional units
  Superscalar (more than 1 operation/cycle)

Sum With Resource Constraints
Two integer functional units
  Both are utilized on each clock cycle
  Delays occur (the incl delays the 4th load)
  CPE = 2.0

Loop Unrolling

Loop Unrolling (p. 409)

  void combine5(vec_ptr v, int *dest)
  {
      int length = vec_length(v);
      int limit = length-2;
      int *data = get_vec_start(v);
      int sum = 0;
      int i;
      /* Combine 3 elements at a time */
      for (i = 0; i < limit; i+=3) {
          sum += data[i] + data[i+2] + data[i+1];
      }
      /* Finish any remaining elements */
      for (; i < length; i++) {
          sum += data[i];
      }
      *dest = sum;
  }

Optimization
  Combine multiple iterations into a single loop body
  Amortizes loop overhead across multiple iterations
  Finish any extras at the end

Visualizing Unrolled Loop (p. 410)

  load (%eax,%edx.0,4)   →  t.1a
  iaddl t.1a, %ecx.0c    →  %ecx.1a
  load 4(%eax,%edx.0,4)  →  t.1b
  iaddl t.1b, %ecx.1a    →  %ecx.1b
  load 8(%eax,%edx.0,4)  →  t.1c
  iaddl t.1c, %ecx.1b    →  %ecx.1c
  iaddl $3,%edx.0        →  %edx.1
  cmpl %esi, %edx.1      →  cc.1
  jl-taken cc.1

The loads can pipeline, since they have no dependencies on each other
Only one set of loop-control operations per three elements

Executing with Loop Unrolling (p. 411)
Performance
  Completes one iteration (computing 3 values) every 3 cycles
  Predicted CPE: 1.00
  Measured CPE: 1.33, i.e., one iteration every 4 cycles
  The discrepancy is due to unknown scheduling factors

Effect of Unrolling

  Unrolling Degree    1     2     3     4     8     16
  Integer Sum       2.00  1.50  1.33  1.25  1.06    -
  Integer Product   4.00 at all degrees (latency = 4)
  FP Sum            3.00 at all degrees (latency = 3)
  FP Product        5.00 at all degrees (latency = 5)

Only helps the integer sum in our examples
  The other cases are constrained by functional-unit latencies
  Integer add has a latency of 1, so only loop overhead was limiting it
The effect is nonlinear with the degree of unrolling
  Many subtle effects determine the exact scheduling of operations
Loop unrolling is used in qsort

Parallel Computation

Serial Computation
Computation
  ((((((((((((1 * x0) * x1) * x2) * x3) * x4) * x5) * x6) * x7) * x8) * x9) * x10) * x11)
Performance
  N elements, D cycles/operation
  N*D cycles

Dual Product Computation
Computation
  ((((((1 * x0) * x2) * x4) * x6) * x8) * x10)
  * ((((((1 * x1) * x3) * x5) * x7) * x9) * x11)
Performance
  N elements, D cycles/operation
  (N/2+1)*D cycles
  ~2X performance improvement

Parallel Loop Unrolling

  void combine6(vec_ptr v, int *dest)
  {
      int length = vec_length(v);
      int limit = length-1;
      int *data = get_vec_start(v);
      int x0 = 1;
      int x1 = 1;
      int i;
      /* Combine 2 elements at a time */
      for (i = 0; i < limit; i+=2) {
          x0 *= data[i];
          x1 *= data[i+1];
      }
      /* Finish any remaining elements */
      for (; i < length; i++) {
          x0 *= data[i];
      }
      *dest = x0 * x1;
  }

Optimization
  Accumulate in two different products, which lets the multiplies overlap (latency 4, issue 1)
  Combine the two products at the end
Performance
  CPE: 4.0 → 2.0 (2X performance)

Parallel Loop Unrolling

  void combine6aa(vec_ptr v, int *dest)
  {
      int length = vec_length(v);
      int limit = length-1;
      int *data = get_vec_start(v);
      int x = 1;
      int i;
      /* Combine 2 elements at a time */
      for (i = 0; i < limit; i+=2) {
          x *= (data[i] * data[i+1]);
      }
      /* Finish any remaining elements */
      for (; i < length; i++) {
          x *= data[i];
      }
      *dest = x;
  }

Optimization
  Multiply pairs of elements together, and only then update the running product ("tree height reduction")
Performance
  CPE = 2.5

Understanding Parallelism

  /* Combine 2 elements at a time */
  for (i = 0; i < limit; i+=2) {
      x = (x * data[i]) * data[i+1];
  }

CPE = 4.00: all multiplies are performed in sequence

  /* Combine 2 elements at a time */
  for (i = 0; i < limit; i+=2) {
      x = x * (data[i] * data[i+1]);
  }

CPE = 2.50: the multiplies overlap

Requirements for Parallel Computation
Mathematical
  The combining operation must be associative & commutative
    OK for integer multiplication
    Not strictly true for floating point, but OK for most applications
Hardware
  Pipelined functional units
  The ability to dynamically extract parallelism from the code
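To see why floating point only approximately satisfies the mathematical requirement, here is a small check (my example, not from the slides): reassociating an expression can change the rounded result.

  #include <stdio.h>

  int main(void) {
      double a = 1e20, b = -1e20, c = 1.0;
      /* (a + b) + c computes 0 + 1 exactly, but in a + (b + c) the
         value c is lost: 1 is far below one ulp of 1e20, so b + c
         rounds back to -1e20 and the whole sum becomes 0. */
      printf("(a + b) + c = %g\n", (a + b) + c);   /* prints 1 */
      printf("a + (b + c) = %g\n", a + (b + c));   /* prints 0 */
      return 0;
  }

The parallel accumulators reassociate the combining expression in exactly this way, which is why the transformation is safe for integers but only approximately correct for floating point.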

Visualizing Parallel Loop
The two multiplies within the loop no longer have a data dependency, which allows them to pipeline

  load (%eax,%edx.0,4)   →  t.1a
  imull t.1a, %ecx.0     →  %ecx.1
  load 4(%eax,%edx.0,4)  →  t.1b
  imull t.1b, %ebx.0     →  %ebx.1
  iaddl $2,%edx.0        →  %edx.1
  cmpl %esi, %edx.1      →  cc.1
  jl-taken cc.1

Executing with Parallel Loop
Predicted performance with two different products
  Keeps the 4-cycle multiplier busy performing two simultaneous multiplications
  Gives a CPE of 2.0

Limitations of Parallel Execution
Need lots of registers
  To hold the sums/products
  Only 6 usable integer registers (also needed for pointers and loop conditions) and 8 FP registers
  When there are not enough registers, temporaries must be spilled onto the stack
    This wipes out any performance gains
  Renaming does not help: code cannot reference more operands than the instruction set allows
    A major drawback of the IA32 instruction set

Optimization Results for Combining

Branches and Branch Prediction

What About Branches?
Challenge
  The Instruction Control Unit must work well ahead of the Execution Unit to generate enough operations to keep the EU busy
  When it encounters a conditional branch, it cannot reliably determine where to continue fetching

  80489f3:  movl $0x1,%ecx             # executing
  80489f8:  xorl %edx,%edx
  80489fa:  cmpl %esi,%edx
  80489fc:  jnl 8048a25                # fetching & decoding
  80489fe:  movl %esi,%esi
  8048a00:  imull (%eax,%edx,4),%ecx

Branch Outcomes
There are two possibilities
  Branch taken: transfer control to the branch target
  Branch not-taken: continue with the next instruction in sequence
The outcome cannot be resolved until determined by the branch/integer unit

  80489f3:  movl $0x1,%ecx
  80489f8:  xorl %edx,%edx
  80489fa:  cmpl %esi,%edx
  80489fc:  jnl 8048a25

  Branch not-taken:
  80489fe:  movl %esi,%esi
  8048a00:  imull (%eax,%edx,4),%ecx

  Branch taken:
  8048a25:  cmpl %edi,%edx
  8048a27:  jl 8048a20
  8048a29:  movl 0xc(%ebp),%eax
  8048a2c:  leal 0xffffffe8(%ebp),%esp
  8048a2f:  movl %ecx,(%eax)

Branch Prediction
Idea
  Guess which way the branch will go
  Begin executing instructions at the predicted position (speculative execution)
  But don't actually modify register or memory data

  80489f3:  movl $0x1,%ecx
  80489f8:  xorl %edx,%edx
  80489fa:  cmpl %esi,%edx
  80489fc:  jnl 8048a25              # predict taken
  . . .
  8048a25:  cmpl %edi,%edx           # execute speculatively from here
  8048a27:  jl 8048a20
  8048a29:  movl 0xc(%ebp),%eax
  8048a2c:  leal 0xffffffe8(%ebp),%esp
  8048a2f:  movl %ecx,(%eax)

Modern CPU Design
[The Modern CPU Design block diagram is shown again; note the "Prediction OK?" path from the execution unit back to instruction control.]

Branch Prediction Through Loop

  80488b1:  movl (%ecx,%edx,4),%eax
  80488b4:  addl %eax,(%edi)
  80488b6:  incl %edx
  80488b7:  cmpl %esi,%edx
  80488b9:  jl 80488b1

Assume vector length = 100
  i = 98: predict taken (OK)
  i = 99: predict taken (oops: the loop should exit after this iteration)
  i = 100: loop body executed speculatively anyway
  i = 101: loop body fetched

Branch Misprediction Invalidation
Assume vector length = 100
  i = 98: predict taken (OK)
  i = 99: predict taken (oops)
  i = 100: the speculatively executed loop body is invalidated
  i = 101: the instructions fetched so far (movl, addl, incl) are invalidated as well

Branch Misprediction Recovery
Assume vector length = 100
  i = 99: the jl at 80488b9 falls through; start over here:

  80488bb:  leal 0xffffffe8(%ebp),%esp
  80488be:  popl %ebx
  80488bf:  popl %esi
  80488c0:  popl %edi

Performance cost
  A branch penalty on the Pentium III wastes ~14 clock cycles
  That's a lot of time on a high-performance processor

Avoiding Branches
On a modern processor, branches are very expensive unless they can be predicted reliably
  When possible, it is best to avoid branches altogether
Example: compute the maximum of two values

  int max(int x, int y)
  {
      return (x < y) ? y : x;
  }

  movl 12(%ebp),%edx    # Get y
  movl 8(%ebp),%eax     # rval = x
  cmpl %edx,%eax        # rval:y
  jge L11               # skip when >=
  movl %edx,%eax        # rval = y
  L11:

Performance
  14 cycles when the prediction is correct
  29 cycles when it is incorrect
  Average = 22 cycles

Avoiding Branches with Bit Tricks
Use masking rather than conditionals

  int bmax(int x, int y)
  {
      int mask = -(x > y);
      return (mask & x) | (~mask & y);
  }

The compiler still uses a conditional:

  xorl %edx,%edx        # mask = 0
  movl 8(%ebp),%eax
  movl 12(%ebp),%ecx
  cmpl %ecx,%eax
  jle L13               # skip if x <= y
  movl $-1,%edx         # mask = -1
  L13:

Performance
  16 cycles when predicted correctly, 32 cycles on a mispredict
  Worse than the original 14-29 cycles

Avoiding Branches with Bit Tricks
Force the compiler to generate the desired code
  The volatile declaration forces the value to be written to memory, so the compiler must generate code to compute and store t (in effect, a register value gets treated as memory)
  Instructions setg/movzbl replace cmpl/jle
  Not very elegant! A hack to get control over the compiler

  int bvmax(int x, int y)
  {
      volatile int t = (x > y);
      int mask = -t;
      return (mask & x) | (~mask & y);
  }

  movl 8(%ebp),%ecx     # Get x
  movl 12(%ebp),%edx    # Get y
  cmpl %edx,%ecx        # x:y
  setg %al              # (x > y)
  movzbl %al,%eax       # Zero extend
  movl %eax,-4(%ebp)    # Save as t
  movl -4(%ebp),%eax    # Retrieve t

Performance
  22 clock cycles on all data
  Better than the branching version's 29-cycle worst case, though worse than its 14-cycle best case

Conditional Move
Added with the P6 microarchitecture (Pentium Pro and later)

  cmovXXl %edx,%eax     # if condition XX holds, copy %edx to %eax

  movl 8(%ebp),%edx     # Get x
  movl 12(%ebp),%eax    # Get y
  cmpl %edx,%eax        # y:x
  cmovll %edx,%eax      # if y < x, result = x

The instruction cmovll moves when the condition is "less than" (l) and operates on a long (l)
It doesn't involve any branching; it is handled as an ordinary operation within the execution unit
Performance: 14 cycles on all data
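At the source level, the usual way to benefit (a sketch; whether a cmov is actually emitted depends on the compiler, flags, and target, e.g. gcc -O2 for a P6-class CPU) is to express the selection as a plain conditional expression with side-effect-free arms:

  /* Candidate for a conditional move: both arms are simple values,
     so the compiler can compute the comparison and cmov the result. */
  int cmax(int x, int y)
  {
      return (x < y) ? y : x;
  }

Note this is the same source as max above; the branch-free code comes entirely from the compiler's instruction selection, not from rewriting the C.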

Summary
Pointer code
  Look carefully at the generated code to see whether it actually helps
Loop unrolling
  Some compilers do this automatically (-O3), though generally not as cleverly as can be achieved by hand
Exposing instruction-level parallelism
  Very machine dependent
  Best if performed by a compiler, but GCC on IA32/Linux is not very good at it
Warning
  The benefits depend heavily on the particular machine
  Do this only for performance-critical parts of the code

Role of Programmer
Don't: smash code into oblivion
  It becomes difficult to read, maintain, & assure correctness
Do:
  Select the best algorithm
  Write code that's readable & maintainable; use functions and recursion, even though these factors can slow down code
  Eliminate optimization blockers: memory aliasing, loop-invariant code (hoist it), accumulate in local variables
  Let the compiler do its job
Focus on inner loops
  They execute repeatedly, and yield the most performance gain