ENGS 116 Lecture 10, Slide 1: ILP: Software Approaches
Vincent H. Berk, October 12th
Reading for today: 3.7 – 3.9, 4.1
Reading for Friday: 4.2 – 4.6
Homework #2: due Friday the 14th: 2.8, A.2, A.13, 3.6a&b, 3.10, 4.5, 4.8 (4.13 optional)

ENGS 116 Lecture 10, Slide 2: Basic Loop Unrolling

for (i = 1000; i > 0; i = i - 1)
    x[i] = x[i] + s;

Loop: LD   F0, 0(R1)    ; F0 = array element
      ADDD F4, F0, F2   ; add scalar in F2
      SD   0(R1), F4    ; store result
      SUBI R1, R1, #8   ; decrement pointer 8 bytes (DW)
      BNEZ R1, Loop     ; branch if R1 != zero
      NOP               ; delayed branch slot

ENGS 116 Lecture 10, Slide 3: FP Loop Hazards

Where are the stalls?

Loop: LD   F0, 0(R1)    ; F0 = vector element
      ADDD F4, F0, F2   ; add scalar in F2
      SD   0(R1), F4    ; store result
      SUBI R1, R1, #8   ; decrement pointer 8 bytes (DW)
      BNEZ R1, Loop     ; branch if R1 != zero
      NOP               ; delayed branch slot

ENGS 116 Lecture 10, Slide 4: FP Loop Showing Stalls

Rewrite the code to minimize stalls?

ENGS 116 Lecture 10, Slide 5: Revised FP Loop Minimizing Stalls

Can we unroll the loop to make it faster?

ENGS 116 Lecture 10, Slide 6: Loop Unrolling

A short loop body limits parallelism and induces significant overhead: the ratio of branch instructions to useful instructions is high. Replicate the loop body several times and adjust the loop termination code (a strip-mining sketch for general trip counts follows this slide):

for (i = 0; i < 100; i = i + 4) {
    x[i]     = x[i]     + y[i];
    x[i + 1] = x[i + 1] + y[i + 1];
    x[i + 2] = x[i + 2] + y[i + 2];
    x[i + 3] = x[i + 3] + y[i + 3];
}

– Improves scheduling, since instructions from different iterations can be scheduled together
– Done very early in the compilation process
– All dependences have to be found beforehand
– Different registers are needed for each unrolled iteration
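
The unrolled loop above assumes the trip count (100) divides evenly by the unroll factor. A minimal C sketch of the general case (strip mining with a cleanup loop); the function name and signature are illustrative, not from the lecture:

void add_arrays(double *x, double *y, int n) {
    int i = 0;
    /* Main loop, unrolled by 4: runs while at least 4 iterations remain. */
    for (; i + 3 < n; i += 4) {
        x[i]     = x[i]     + y[i];
        x[i + 1] = x[i + 1] + y[i + 1];
        x[i + 2] = x[i + 2] + y[i + 2];
        x[i + 3] = x[i + 3] + y[i + 3];
    }
    /* Cleanup loop: handles the 0-3 iterations left over. */
    for (; i < n; i++)
        x[i] = x[i] + y[i];
}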

ENGS 116 Lecture 10, Slide 7: Where are the control dependences?

1  Loop: LD   F0, 0(R1)
2        ADDD F4, F0, F2
3        SD   0(R1), F4
4        SUBI R1, R1, #8
5        BEQZ R1, exit
6        LD   F0, 0(R1)
7        ADDD F4, F0, F2
8        SD   0(R1), F4
9        SUBI R1, R1, #8
10       BEQZ R1, exit
11       LD   F0, 0(R1)
12       ADDD F4, F0, F2
13       SD   0(R1), F4
14       SUBI R1, R1, #8
15       BEQZ R1, exit
...

ENGS 116 Lecture 10, Slide 8: Data Dependences

1  Loop: LD   F0, 0(R1)
2        ADDD F4, F0, F2
3        SD   0(R1), F4     ; drop SUBI & BNEZ
4        LD   F0, –8(R1)
5        ADDD F4, F0, F2
6        SD   –8(R1), F4    ; drop SUBI & BNEZ
7        LD   F0, –16(R1)
8        ADDD F4, F0, F2
9        SD   –16(R1), F4   ; drop SUBI & BNEZ
10       LD   F0, –24(R1)
11       ADDD F4, F0, F2
12       SD   –24(R1), F4
13       SUBI R1, R1, #32   ; alter to 4*8
14       BNEZ R1, LOOP
15       NOP

ENGS 116 Lecture 10, Slide 9: Name Dependences

1  Loop: LD   F0, 0(R1)
2        ADDD F4, F0, F2
3        SD   0(R1), F4     ; drop SUBI & BNEZ
4        LD   F6, –8(R1)
5        ADDD F8, F6, F2
6        SD   –8(R1), F8    ; drop SUBI & BNEZ
7        LD   F10, –16(R1)
8        ADDD F12, F10, F2
9        SD   –16(R1), F12  ; drop SUBI & BNEZ
10       LD   F14, –24(R1)
11       ADDD F16, F14, F2
12       SD   –24(R1), F16
13       SUBI R1, R1, #32   ; alter to 4*8
14       BNEZ R1, LOOP
15       NOP

The name dependences of the previous slide are removed here by register renaming.

ENGS 116 Lecture 10, Slide 10: Unroll Loop Four Times

Rewrite loop to minimize stalls?
4 × 6 + (1 + 2) + 1 = 28 clock cycles to initiate, or 7 per iteration
Assumes the iteration count is a multiple of 4

ENGS 116 Lecture 10, Slide 11: Unrolled Loop That Minimizes Stalls

What assumptions were made when we moved code?
– OK to move the store past SUBI even though SUBI changes the register
– OK to move loads before stores: do we get the right data?
– When is it safe for the compiler to make such changes?
14 + 1 = 15 clock cycles (14 instructions plus 1 remaining stall), or 3.75 per iteration
Can we eliminate the remaining stall?

ENGS 116 Lecture 10, Slide 12: Compiler Loop Unrolling

Most important: code correctness.
Unrolling produces larger code that might interfere with the cache:
– Code sequence no longer fits in the L1 cache
– Cache-to-memory bandwidth might not be wide enough
The compiler must understand the hardware:
– Enough registers must be available, OR
– The compiler must rely on hardware register renaming
The compiler must understand the code:
– Determine that loop iterations are independent
– Eliminate branch instructions while preserving correctness
– Determine that the LD and SD are independent across iterations
– Reschedule instructions and adjust the offsets

ENGS 116 Lecture 10, Slide 13: Superscalar Example

Superscalar:
– Our system can issue one floating-point and one other (non-floating-point) instruction per cycle.
– Instructions are dynamically scheduled from the window.
– Unroll the loop 5 times and reschedule to minimize cycles per iteration. (WHY?)
While the integer/FP split is simple for the hardware, we get a CPI of 0.5 only for programs with:
– Exactly 50% FP operations
– No hazards
The more instructions issued at the same time, the greater the difficulty of decode and issue:
– Even a 2-way superscalar must examine 2 opcodes and 6 register specifiers, and decide whether 1 or 2 instructions can issue.

ENGS 116 Lecture 10, Slide 14: Loop Unrolling in Superscalar

      Integer instruction     FP instruction        Clock cycle
Loop: LD   F0, 0(R1)                                     1
      LD   F6, –8(R1)                                    2
      LD   F10, –16(R1)       ADDD F4, F0, F2            3
      LD   F14, –24(R1)       ADDD F8, F6, F2            4
      LD   F18, –32(R1)       ADDD F12, F10, F2          5
      SD   0(R1), F4          ADDD F16, F14, F2          6
      SD   –8(R1), F8         ADDD F20, F18, F2          7
      SD   –16(R1), F12                                  8
      SUBI R1, R1, #40                                   9
      SD   16(R1), F16                                  10
      BNEZ R1, Loop                                     11
      SD   8(R1), F20                                   12

Unrolled 5 times to avoid delays (+1 due to SS)
12 clocks to initiate, or 2.4 clocks per iteration

ENGS 116 Lecture 10, Slide 15: VLIW Example

VLIW:
– 5 instructions in one very long instruction word: 2 FP, 2 memory, 1 branch/integer
– The compiler avoids hazards
– Not all slots are always full
VLIW trades instruction space for simple decoding:
– The long instruction word has room for many operations
– By definition, all the operations the compiler puts in the long instruction word are independent ⇒ they execute in parallel
– E.g., 2 integer operations, 2 FP ops, 2 memory refs, 1 branch ⇒ 16 to 24 bits per field ⇒ 7 × 16 = 112 bits to 7 × 24 = 168 bits wide
– Needs a compiling technique that schedules across several branches
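
As a rough illustration of the slide's 5-slot format, a VLIW word can be pictured as a fixed record of operation slots. The C struct below is purely hypothetical, not a real ISA encoding:

#include <stdint.h>

/* Hypothetical 5-slot VLIW word matching the slide's format:
   2 memory ops, 2 FP ops, 1 integer/branch op.  All slots issue
   in the same clock cycle; empty slots hold NOPs. */
typedef struct {
    uint32_t mem1;        /* memory reference slot 1 */
    uint32_t mem2;        /* memory reference slot 2 */
    uint32_t fp1;         /* FP operation slot 1 */
    uint32_t fp2;         /* FP operation slot 2 */
    uint32_t int_branch;  /* integer / branch slot */
} vliw_word;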

ENGS 116 Lecture 10, Slide 16: Loop Unrolling in VLIW

Memory ref 1       Memory ref 2       FP op 1            FP op 2            Int. op / branch   Clock
LD F0, 0(R1)       LD F6, –8(R1)                                                                 1
LD F10, –16(R1)    LD F14, –24(R1)                                                               2
LD F18, –32(R1)    LD F22, –40(R1)    ADDD F4, F0, F2    ADDD F8, F6, F2                         3
LD F26, –48(R1)                       ADDD F12, F10, F2  ADDD F16, F14, F2                       4
                                      ADDD F20, F18, F2  ADDD F24, F22, F2                       5
SD 0(R1), F4       SD –8(R1), F8      ADDD F28, F26, F2                                          6
SD –16(R1), F12    SD –24(R1), F16                                                               7
SD –32(R1), F20    SD –40(R1), F24                                          SUBI R1, R1, #48     8
SD 0(R1), F28                                                               BNEZ R1, LOOP        9

Unrolled 7 times to avoid delays
9 clocks to initiate, or 1.3 clocks per iteration
Average: 2.5 ops per clock, 50% efficiency
Note: need more registers in VLIW (15 vs. 6 in SS)

ENGS 116 Lecture 10, Slide 17: Limits to Multi-Issue Machines

Inherent limitations of instruction-level parallelism:
– 1 branch in 5 instructions: how to keep a 5-way VLIW busy?
– Latencies of units: many operations must be scheduled
– Easy: more instruction bandwidth
– Easy: duplicate functional units to get parallel execution
– Hard: increase ports to the register file (bandwidth); the VLIW example needs 7 reads and 3 writes for integer registers & 5 reads and 3 writes for FP registers
– Harder: increase ports to memory (bandwidth)
– Pipelines in lockstep: one pipeline stall stalls all the others to avoid hazards

ENGS 116 Lecture 10, Slide 18: Limits to Multi-Issue Machines

Limitations specific to either superscalar or VLIW implementations:
– Decode and issue in superscalar: how wide is practical?
– VLIW code size: unrolled loops + wasted fields in the VLIW word (IA-64 compresses dependent instructions, but code is still larger)
– VLIW lockstep ⇒ 1 hazard stalls all instructions (IA-64 not lockstep? dynamic pipeline?)
– VLIW & binary compatibility: IA-64 promises binary compatibility

ENGS 116 Lecture 10, Slide 19: Dependences

Two instructions are parallel if they can execute simultaneously in a pipeline without causing any stalls (assuming no structural hazards) and can be reordered (where code semantics allow). Two instructions that are dependent are not parallel and cannot be reordered.
Types of dependences (illustrated in the sketch below):
– Data dependences
– Name dependences
– Control dependences
Dependences are properties of programs; hazards are properties of the pipeline organization. A dependence indicates the potential for a hazard.
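
A small illustrative C fragment, not from the slides, showing all three kinds of dependence:

#include <stdio.h>

int main(void) {
    double b = 2.0, c = 1.0, a, d, e;

    /* Data (true) dependence: S2 reads the value S1 writes (RAW). */
    a = b + c;        /* S1 */
    d = a * 2.0;      /* S2 */

    /* Name dependences: S3 and S4 reuse the name 'a', but no data
       flows from S3 to S4. */
    e = a + 1.0;      /* S3 reads a */
    a = b - c;        /* S4 writes a: antidependence (WAR) with S3,
                         output dependence (WAW) with S1 */

    /* Control dependence: S5 executes only when the branch condition
       holds, so it cannot simply be hoisted above the if. */
    if (d > 0.0)
        e = e - 1.0;  /* S5 */

    printf("%f %f %f\n", a, d, e);
    return 0;
}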

ENGS 116 Lecture 10, Slide 20: Compiler Perspectives on Code Movement

Code movement is hard for memory accesses:
– Does 100(R4) = 20(R6)?
– From different loop iterations, does 20(R6) = 20(R6)?
Our example required the compiler to know that if R1 doesn't change, then:
0(R1) ≠ –8(R1) ≠ –16(R1) ≠ –24(R1)
There were no dependences between some loads and stores, so they could be moved past each other.
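
A C-level sketch of the two situations; the function names and offsets are illustrative:

/* Same base pointer, different constant offsets: the compiler can
   prove the addresses differ (the 0(R1) vs. -8(R1) case) and may
   reorder the two accesses. */
double same_base(double *a) {
    a[0] = 1.0;
    return a[1];      /* provably a different address */
}

/* Different base pointers (the 100(R4) vs. 20(R6) case): without
   alias information the compiler must assume the store and load
   may touch the same word, so it cannot reorder them. */
double diff_base(double *p, double *q) {
    p[12] = 1.0;
    return q[2];      /* may alias p[12] for some caller */
}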

ENGS 116 Lecture 10, Slide 21: Detecting Loop-Level Dependences

for (i = 1; i <= 100; i = i + 1) {
    A[i] = A[i] + B[i];      /* S1 */
    B[i+1] = C[i] + D[i];    /* S2 */
}

Loop-carried dependence: S1 uses the value B[i] computed by S2 in the previous iteration.
There is no dependence from S1 to S2, so the dependence is not circular and the loop can be rewritten:

A[1] = A[1] + B[1];
for (i = 1; i <= 99; i = i + 1) {
    B[i+1] = C[i] + D[i];
    A[i+1] = A[i+1] + B[i+1];
}
B[101] = C[100] + D[100];

ENGS 116 Lecture 10, Slide 22: Dependence Distance

for (i = 6; i <= 100; i = i + 1)
    Y[i] = Y[i-5] + Y[i];

Loop-carried dependence in the form of a recurrence on Y, with a dependence distance of 5. A larger dependence distance allows more ILP, since any group of iterations closer together than the distance is independent (see the sketch below).
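
A C sketch of how the distance can be exploited: unroll by the distance, so the five statements in the body are mutually independent. The cleanup loop is included for generality; it is an assumption for clarity, not from the slides:

int i;
/* Groups of 5 consecutive iterations are independent: the body
   writes Y[i..i+4] but reads only Y[i-5..i-1], which were written
   by earlier groups. */
for (i = 6; i + 4 <= 100; i += 5) {
    Y[i]   = Y[i-5] + Y[i];
    Y[i+1] = Y[i-4] + Y[i+1];
    Y[i+2] = Y[i-3] + Y[i+2];
    Y[i+3] = Y[i-2] + Y[i+3];
    Y[i+4] = Y[i-1] + Y[i+4];
}
/* Cleanup for any leftover iterations. */
for (; i <= 100; i++)
    Y[i] = Y[i-5] + Y[i];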

ENGS 116 Lecture 10, Slide 23: Greatest Common Divisor Test

Affine array indices: all array indices depend DIRECTLY (linearly) on the loop variable i.
Assume the code has these properties:
– the for loop runs from n to m with index i
– the loop has an access pattern: X[a*i + b] = X[c*i + d] …
– take two values of i, call them j and k, both between n and m
– a store indexed by j and a later load indexed by k collide when: a*j + b = c*k + d
A loop-carried dependence can exist only if GCD(c, a) divides (d – b).

Example:
for (i = 1; i <= 100; i = i + 1)
    X[2*i + 3] = X[2*i] * 5.0;

Here a = 2, b = 3, c = 2, d = 0; GCD(a, c) = 2 and d – b = –3. There is no loop-carried dependence, since 2 does not divide –3.
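
A minimal sketch of the test in C; the function names are illustrative:

#include <stdio.h>

/* Euclid's algorithm for the greatest common divisor. */
static int gcd(int a, int b) {
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}

/* GCD test for the pattern X[a*i + b] = X[c*i + d]:
   a loop-carried dependence is possible only if gcd(a, c)
   divides d - b.  Returns 1 if a dependence is possible,
   0 if it is ruled out.  Assumes a and c are not both zero. */
static int gcd_test(int a, int b, int c, int d) {
    int g = gcd(a < 0 ? -a : a, c < 0 ? -c : c);
    return (d - b) % g == 0;
}

int main(void) {
    /* X[2*i + 3] = X[2*i] * 5.0  =>  a = 2, b = 3, c = 2, d = 0 */
    printf("dependence possible: %d\n", gcd_test(2, 3, 2, 0)); /* prints 0 */
    return 0;
}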

ENGS 116 Lecture 10, Slide 24: Problem Cases

– References through pointers instead of array indices (partly eliminated by strict type checking)
– Sparse arrays indexed through other arrays (similar to pointers)
– A dependence exists for some values of the indices, but those values are never reached
– The loop-carried dependence has a distance far greater than what loop unrolling would cover
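
A C sketch of the pointer problem, with C99's restrict qualifier as one way the programmer can rule out aliasing; the function names are illustrative:

/* If x and y may alias, the compiler cannot move loads of y past
   stores to x: some caller might pass overlapping arrays. */
void scale(double *x, double *y, int n) {
    for (int i = 0; i < n; i++)
        x[i] = y[i] + 2.0;
}

/* 'restrict' (C99) promises the compiler that x and y do not
   overlap, re-enabling unrolling and load/store reordering. */
void scale_r(double *restrict x, double *restrict y, int n) {
    for (int i = 0; i < n; i++)
        x[i] = y[i] + 2.0;
}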

ENGS 116 Lecture 10, Slide 25: Software Pipelining

Observation: if the iterations of a loop are independent, then we can get more ILP by taking instructions from different iterations.
Software pipelining reorganizes a loop so that each new iteration is made from instructions chosen from different iterations of the original loop (see the C sketch below and the example on the next slide).
[Figure: each software-pipelined iteration draws its instructions from several consecutive iterations of the original loop]
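
A minimal C-level sketch of the transformation, assuming n >= 3 and the same x[i] = x[i] + s loop as before; the function and variable names are illustrative. The steady-state body stores element i, adds for element i-1, and loads element i-2, mirroring the example on the next slide:

void sw_pipeline(double *x, double s, int n) {
    int i;
    double t_load, t_add;

    /* Prologue: fill the pipeline. */
    t_load = x[n-1];
    t_add  = t_load + s;
    t_load = x[n-2];

    /* Steady state: one store, one add, one load per iteration,
       each belonging to a different original iteration. */
    for (i = n - 1; i >= 2; i--) {
        x[i]   = t_add;        /* store for iteration i   */
        t_add  = t_load + s;   /* add for iteration i-1   */
        t_load = x[i-2];       /* load for iteration i-2  */
    }

    /* Epilogue: drain the pipeline. */
    x[1] = t_add;
    x[0] = t_load + s;
}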

ENGS 116 Lecture 10, Slide 26: SW Pipelining Example

Before: unrolled 3 times
1  LD   F0, 0(R1)
2  ADDD F4, F0, F2
3  SD   0(R1), F4
4  LD   F6, –8(R1)
5  ADDD F8, F6, F2
6  SD   –8(R1), F8
7  LD   F10, –16(R1)
8  ADDD F12, F10, F2
9  SD   –16(R1), F12
10 SUBI R1, R1, #24
11 BNEZ R1, LOOP

After: software pipelined
   LD   F0, 0(R1)      ; prologue
   ADDD F4, F0, F2
   LD   F0, –8(R1)
1  SD   0(R1), F4      ; stores M[i]
2  ADDD F4, F0, F2     ; adds to M[i-1]
3  LD   F0, –16(R1)    ; loads M[i-2]
4  SUBI R1, R1, #8
5  BNEZ R1, LOOP
   SD   0(R1), F4      ; epilogue
   ADDD F4, F0, F2
   SD   –8(R1), F4

[Figure: pipeline diagram of the steady-state body; SD reads F4 before ADDD writes F4, and ADDD reads F0 before LD writes F0, so the loop body runs without stalls]

ENGS 116 Lecture 10, Slide 27: SW Pipelining Example

Symbolic loop unrolling:
– Smaller code space
– Overhead paid only once, vs. on each iteration with loop unrolling (100 iterations = 25 loops with 4 unrolled iterations each)
[Figure: number of overlapped operations over time for (a) software pipelining and (b) loop unrolling]