CS152 Computer Architecture and Engineering
Lecture 13: Static Pipeline Scheduling / Compiler Optimizations
March 15, 2004
John Kubiatowicz (http.cs.berkeley.edu/~kubitron)

Recall: Data Hazard Solution: Forwarding
° "Forward" the result from one stage to another
° The "or" instruction is OK if we define register-file read/write timing properly
(Pipeline diagram: add r1,r2,r3; sub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11 overlapped in time, with the add's result forwarded into the later instructions' ALU inputs.)

Recall: Resolve RAW by "Forwarding" (or Bypassing)
° Detect the nearest valid write of an operand register and forward it into the operand latches, bypassing the remainder of the pipe
° Increase the muxes to add paths from the pipeline registers
° Data forwarding = data bypassing
(Datapath diagram: forward muxes feed the ALU operand latches from the later pipeline registers.)

FYI: MIPS R3000 Clocking Discipline
° 2-phase non-overlapping clocks
° A pipeline stage is two (level-sensitive) latches
(Timing diagram: phi1/phi2 waveforms; the phi1 + phi2 latch pair behaves like an edge-triggered register.)

MIPS R3000 Instruction Pipeline
° Stages: Inst Fetch | Decode / Reg. Read | ALU / E.A. | Memory | Write Reg
° Resource usage per stage: TLB + I-Cache; RF; ALU operation / E.A.; TLB + D-Cache; WB
° Register file: write in phase 1, read in phase 2 => eliminates the bypass from WB

Thus, Only 2 Levels of Forwarding
° With the MIPS R3000 pipeline, there is no need to forward from the WB stage
(Pipeline diagram: add r1,r2,r3; sub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11 overlapped in time, with forwarding only from the two pipeline registers after EX and MEM.)

Recall: Examples of Stalls/Bubbles
° Exceptions:
  - Flush everything above the excepting instruction
  - Prevent instructions following the exception from committing state
  - Freeze fetch until the exception is resolved
° Stalls: introduce brief stalls into the pipeline
  - The decode stage recognizes that the current instruction cannot proceed
  - Freeze the fetch stage
  - Introduce a "bubble" into the EX stage (instead of forwarding the stalled instruction)
  - Can stall until the condition is resolved
  - Examples:
    - mfhi, mflo: need to wait for the multiply/divide unit to finish
    - "Break" instruction for Lab 5: stall until the release line is received
    - The load delay slot is handled this way as well
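The load-delay case above can be made concrete. Below is a minimal C sketch of the decode-stage stall check; the struct and signal names are illustrative, not the actual Lab 4/5 datapath signals.

/* Sketch: if the instruction in EX is a load whose destination matches a
 * source of the instruction in decode, freeze fetch and bubble EX. */
typedef struct {
    int is_load;   /* 1 if the instruction is a load */
    int rd;        /* destination register number    */
    int rs, rt;    /* source register numbers        */
} PipeReg;

static int need_load_stall(const PipeReg *id, const PipeReg *ex)
{
    return ex->is_load &&
           ex->rd != 0 &&                      /* writes to r0 never matter */
           (ex->rd == id->rs || ex->rd == id->rt);
}
/* When need_load_stall() is true: hold PC and IF/ID for one cycle and turn
 * the instruction entering EX into a nop (the "bubble"). */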

Recall: Freeze Above & Bubble Below
° Flush accomplished by setting an "invalid" bit in the pipeline registers
(Datapath diagram: stages above the stall point are frozen; a bubble is injected into the stage below.)

Recall: Achieving Precise Exceptions
° Use the pipeline to sort this out!
  - Pass exception status along with the instruction
  - Keep track of the PC for every instruction in the pipeline
  - Don't act on an exception until it reaches the WB stage
° Handle interrupts through a "faulting noop" in the IF stage
° When the instruction reaches the end of the MEM stage:
  - Save PC -> EPC, interrupt vector address -> PC
  - Turn all instructions in earlier stages into noops!
(Diagram: overlapped IFetch/Dcd/Exec/Mem/WB instruction streams, with possible faults marked per stage: Inst TLB fault, Bad Inst, Data TLB fault, Overflow.)

CS152 / Kubiatowicz Lec /15/04©UCB Spring 2004 Recall: What about memory operations? AB op Rd Ra Rb Rd to reg file R Rd ºIf instructions are initiated in order and operations always occur in the same stage, there can be no hazards between memory operations! ºWhat about data dependence on loads? R1 <- R4 + R5 R2 <- Mem[ R2 + I ] R3 <- R2 + R1  “Delayed Loads” ºCan recognize this in decode stage and introduce bubble while stalling fetch stage (hint for lab 4!) ºTricky situation: R1 <- Mem[ R2 + I ] Mem[R3+34] <- R1 Handle with bypass in memory stage! D Mem T

MIPS R3000 Multicycle Operations (Ex: Multiply, Divide, Cache Miss)
° Use the control word of the local stage to step through the multicycle operation
° Stall all stages above the multicycle operation in the pipeline
° Drain (bubble) the stages below it
° Alternatively, launch multiply/divide to an autonomous unit and only stall the pipe on an attempt to get the result before it is ready
  - This means stalling mflo/mfhi in the decode stage if a multiply/divide is still executing
  - Extra credit in Lab 5 does this

Case Study: MIPS R4000 (200 MHz)
° 8-stage pipeline:
  - IF: first half of instruction fetch; PC selection happens here, as well as initiation of the instruction cache access
  - IS: second half of the instruction cache access
  - RF: instruction decode and register fetch, hazard checking, and instruction cache hit detection
  - EX: execution, which includes effective address calculation, ALU operation, and branch target computation and condition evaluation
  - DF: data fetch, first half of the data cache access
  - DS: second half of the data cache access
  - TC: tag check, to determine whether the data cache access hit
  - WB: write back for loads and register-register operations
° 8 stages: what is the impact on load delay? Branch delay? Why?

Case Study: MIPS R4000
° TWO-cycle load latency
° THREE-cycle branch latency (conditions evaluated during the EX phase)
  - Delay slot plus two stalls
  - Branch-likely cancels the delay slot if not taken
(Pipeline diagrams: overlapped IF IS RF EX DF DS TC WB streams illustrating the load-latency and branch-latency cases.)

MIPS R4000 Floating Point
° FP Adder, FP Multiplier, FP Divider
° Last step of the FP Multiplier/Divider uses the FP Adder HW
° 8 kinds of stages in the FP units:
    Stage  Functional unit   Description
    A      FP adder          Mantissa ADD stage
    D      FP divider        Divide pipeline stage
    E      FP multiplier     Exception test stage
    M      FP multiplier     First stage of multiplier
    N      FP multiplier     Second stage of multiplier
    R      FP adder          Rounding stage
    S      FP adder          Operand shift stage
    U                        Unpack FP numbers

MIPS FP Pipe Stages
    FP Instr         Pipe stages (in order)
    Add, Subtract    U, S+A, A+R, R+S
    Multiply         U, E+M, M, M, M, N, N+A, R
    Divide           U, A, R, D (x28) ..., D+A, D+R, D+R, D+A, D+R, A, R
    Square root      U, E, (A+R) (x108) ..., A, R
    Negate           U, S
    Absolute value   U, S
    FP compare       U, A, R
° Stages: A = mantissa ADD, D = divide pipeline stage, E = exception test, M = first stage of multiplier, N = second stage of multiplier, R = rounding, S = operand shift, U = unpack FP numbers

Recall: Compute CPI?
° Start with base CPI
° Add stalls
° Suppose:
  - CPI_base = 1
  - freq_branch = 20%, freq_load = 30%
  - Branches always cause a 1-cycle stall
  - Loads cause a 100-cycle stall 1% of the time
° Then: CPI = 1 + (1 x 0.20) + (100 x 0.30 x 0.01) = 1.5
° Multicycle? Could treat as: CPI_stall = (CYCLES - CPI_base) x freq_inst
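The same arithmetic as a runnable check; the numbers below are exactly the ones assumed on this slide.

#include <stdio.h>

int main(void)
{
    double cpi_base        = 1.0;
    double freq_branch     = 0.20, branch_stall = 1.0;   /* every branch: 1-cycle stall */
    double freq_load       = 0.30, load_stall   = 100.0; /* 100-cycle stall ...         */
    double load_stall_rate = 0.01;                       /* ... but only on 1% of loads */

    double cpi = cpi_base
               + branch_stall * freq_branch
               + load_stall * freq_load * load_stall_rate;

    printf("CPI = %.2f\n", cpi);   /* prints CPI = 1.50 */
    return 0;
}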

R4000 Performance
° Not the ideal CPI of 1:
  - FP structural stalls: not enough FP hardware (parallelism)
  - FP result stalls: RAW data hazard (latency)
  - Branch stalls (2 cycles + unfilled slots)
  - Load stalls (1 or 2 clock cycles)
(Chart: CPI broken into base, load stalls, branch stalls, FP result stalls, and FP structural stalls for eqntott, espresso, gcc, li, doduc, nasa7, ora, spice2g6, su2cor, tomcatv.)

Administrivia
° Midterm almost graded
  - This test was too long! (Sorry)
  - More on Wednesday
° Should start working on Lab 4!
  - Remember: a working processor is necessary for full credit...
  - Due tonight: Problem 0 (lab evaluation of Lab 4)
  - Due tomorrow night: division of labor for Lab 5
° Get started early!!!
° Wednesday night: your design document due
  - Discuss them in section on Thursday
  - Final due Thursday at midnight
° More info on advanced topics:
  - Computer Architecture: A Quantitative Approach by John Hennessy and David Patterson
  - Should be on reserve at the Engineering library

Administrivia: Pentium-4 Architecture!
° Microprocessor Report: August
° Pipeline stages! Drive => wire delay!
° Trace cache: caching paths through the code for quick decoding
° Renaming: similar to the Tomasulo architecture
° Branch and DATA prediction!
(Die photos: Pentium (original 586), Pentium-II (and III) (original 686).)

Can We Somehow Make CPI Closer to 1?
° Let's assume full pipelining:
  - If we have a 4-cycle instruction, then we need 3 instructions between a producing instruction and its use:
      multf  $F0,$F2,$F4
      delay-1
      delay-2
      delay-3
      addf   $F6,$F10,$F0
(Diagram: multf occupies Fetch, Decode, Ex1-Ex4, WB; the earliest forwarding point for 4-cycle instructions is marked after Ex4, versus after Ex1 for 1-cycle instructions.)

FP Loop: Where Are the Hazards?
    Loop: LD    F0,0(R1)   ;F0 = vector element
          ADDD  F4,F0,F2   ;add scalar from F2
          SD    0(R1),F4   ;store result
          SUBI  R1,R1,8    ;decrement pointer 8B (DW)
          BNEZ  R1,Loop    ;branch if R1 != zero
          NOP              ;delayed branch slot

    Instruction producing result   Instruction using result   Latency in clock cycles
    FP ALU op                      Another FP ALU op           3
    FP ALU op                      Store double                2
    Load double                    FP ALU op                   1
    Load double                    Store double                0
    Integer op                     Integer op                  0

° Where are the stalls?
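For reference, the assembly above implements roughly the following C loop, with the scalar held in F2 and R1 walking backward through an array of doubles. This is a sketch: the array length n and loop direction are assumptions, since the slide shows only one trip.

/* C equivalent of the FP loop above (sketch). */
void add_scalar(double *x, long n, double s)
{
    for (long i = n - 1; i >= 0; i--)
        x[i] = x[i] + s;   /* LD / ADDD / SD; SUBI + BNEZ form the loop control */
}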

FP Loop Showing Stalls
    1  Loop: LD    F0,0(R1)   ;F0 = vector element
    2        stall
    3        ADDD  F4,F0,F2   ;add scalar in F2
    4        stall
    5        stall
    6        SD    0(R1),F4   ;store result
    7        SUBI  R1,R1,8    ;decrement pointer 8B (DW)
    8        BNEZ  R1,Loop    ;branch if R1 != zero
    9        stall            ;delayed branch slot
° 9 clocks: rewrite the code to minimize stalls?
(Latency table as before: FP ALU op -> another FP ALU op = 3, FP ALU op -> store double = 2, load double -> FP ALU op = 1.)

Revised FP Loop Minimizing Stalls
    1  Loop: LD    F0,0(R1)
    2        stall
    3        ADDD  F4,F0,F2
    4        SUBI  R1,R1,8
    5        BNEZ  R1,Loop    ;delayed branch
    6        SD    8(R1),F4   ;altered when moved past SUBI
° Swap BNEZ and SD by changing the address of SD
° 6 clocks: unroll the loop 4 times to make it faster?

Unroll Loop Four Times (straightforward way)
    1  Loop: LD    F0,0(R1)
    2        ADDD  F4,F0,F2
    3        SD    0(R1),F4      ;drop SUBI & BNEZ
    4        LD    F6,-8(R1)
    5        ADDD  F8,F6,F2
    6        SD    -8(R1),F8     ;drop SUBI & BNEZ
    7        LD    F10,-16(R1)
    8        ADDD  F12,F10,F2
    9        SD    -16(R1),F12   ;drop SUBI & BNEZ
   10        LD    F14,-24(R1)
   11        ADDD  F16,F14,F2
   12        SD    -24(R1),F16
   13        SUBI  R1,R1,#32     ;alter to 4*8
   14        BNEZ  R1,LOOP
   15        NOP
° Each LD -> ADDD pair incurs a 1-cycle stall and each ADDD -> SD pair a 2-cycle stall
° 15 + 4 x (1+2) = 27 clock cycles, or 6.8 per iteration (assumes R1 is a multiple of 4)
° CPI = 27/15 = 1.8
° Rewrite the loop to minimize stalls?
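At the source level, the straightforward 4x unrolling corresponds to something like the sketch below; like the slide, it assumes the trip count is a multiple of 4. One SUBI/BNEZ pair now covers four element updates, but each LD -> ADDD -> SD chain still stalls until the code is rescheduled.

/* 4x-unrolled version of the loop (sketch, assumes n % 4 == 0). */
void add_scalar_unrolled4(double *x, long n, double s)
{
    for (long i = n - 1; i >= 3; i -= 4) {
        x[i]     += s;
        x[i - 1] += s;
        x[i - 2] += s;
        x[i - 3] += s;
    }
}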

Unrolled Loop That Minimizes Stalls
    1  Loop: LD    F0,0(R1)
    2        LD    F6,-8(R1)
    3        LD    F10,-16(R1)
    4        LD    F14,-24(R1)
    5        ADDD  F4,F0,F2
    6        ADDD  F8,F6,F2
    7        ADDD  F12,F10,F2
    8        ADDD  F16,F14,F2
    9        SD    0(R1),F4
   10        SD    -8(R1),F8
   11        SD    -16(R1),F12
   12        SUBI  R1,R1,#32
   13        BNEZ  R1,LOOP
   14        SD    8(R1),F16    ;8 - 32 = -24
° 14 clock cycles, or 3.5 per iteration; CPI = 14/14 = 1
° What assumptions were made when the code was moved?
  - OK to move the store past SUBI even though SUBI changes the register
  - OK to move loads before stores: do we still get the right data?
° When is it safe for the compiler to make such changes?

Getting CPI < 1: Issuing Multiple Instructions/Cycle
° Two main variations: Superscalar and VLIW
° Superscalar: varying number of instructions/cycle (1 to 6)
  - Parallelism and dependencies determined/resolved by HW
  - IBM PowerPC 604, Sun UltraSparc, DEC Alpha 21164, HP 7100
° Very Long Instruction Words (VLIW): fixed number of instructions (16)
  - Parallelism determined by the compiler
  - Pipeline is exposed; the compiler must schedule delays to get the right result
° Explicitly Parallel Instruction Computing (EPIC) / Intel:
  - 128-bit packets containing 3 instructions (can execute sequentially)
  - Can link 128-bit packets together to allow more parallelism
  - Compiler determines parallelism; HW checks dependencies and forwards/stalls

Getting CPI < 1: Issuing Multiple Instructions/Cycle
° Superscalar DLX: 2 instructions, 1 FP & 1 anything else
  - Fetch 64 bits/clock cycle; Int on the left, FP on the right
  - Can only issue the 2nd instruction if the 1st instruction issues
  - More ports for the FP registers, to do an FP load & FP op in a pair
    Type               Pipe stages
    Int. instruction   IF  ID  EX  MEM WB
    FP instruction     IF  ID  EX  MEM WB
    Int. instruction       IF  ID  EX  MEM WB
    FP instruction         IF  ID  EX  MEM WB
    Int. instruction           IF  ID  EX  MEM WB
    FP instruction             IF  ID  EX  MEM WB
° The 1-cycle load delay expands to 3 instructions in SS: the instruction in the right half can't use it, nor can the instructions in the next slot

Loop Unrolling in Superscalar
    Integer instruction       FP instruction        Clock cycle
    Loop: LD  F0,0(R1)                                1
          LD  F6,-8(R1)                               2
          LD  F10,-16(R1)     ADDD F4,F0,F2           3
          LD  F14,-24(R1)     ADDD F8,F6,F2           4
          LD  F18,-32(R1)     ADDD F12,F10,F2         5
          SD  0(R1),F4        ADDD F16,F14,F2         6
          SD  -8(R1),F8       ADDD F20,F18,F2         7
          SD  -16(R1),F12                             8
          SD  -24(R1),F16                             9
          SUBI R1,R1,#40                             10
          BNEZ R1,LOOP                               11
          SD  -32(R1),F20                            12
° Unrolled 5 times to avoid delays (+1 due to SS)
° 12 clocks, or 2.4 clocks per iteration

Limits of Superscalar
° While the Integer/FP split is simple for the HW, we get a CPI of 0.5 only for programs with:
  - Exactly 50% FP operations
  - No hazards
° If more instructions issue at the same time, decode and issue become harder
  - Even 2-scalar => examine 2 opcodes, 6 register specifiers, and decide whether 1 or 2 instructions can issue
° VLIW: trade off instruction space for simple decoding
  - The long instruction word has room for many operations
  - By definition, all the operations the compiler puts in the long instruction word can execute in parallel
  - E.g., 2 integer operations, 2 FP ops, 2 memory refs, 1 branch
    - 16 to 24 bits per field => 7*16 or 112 bits to 7*24 or 168 bits wide
  - Need a compiling technique that schedules across several branches

Loop Unrolling in VLIW
    Memory ref 1      Memory ref 2      FP operation 1    FP op. 2          Int. op/branch    Clock
    LD F0,0(R1)       LD F6,-8(R1)                                                             1
    LD F10,-16(R1)    LD F14,-24(R1)                                                           2
    LD F18,-32(R1)    LD F22,-40(R1)    ADDD F4,F0,F2     ADDD F8,F6,F2                        3
    LD F26,-48(R1)                      ADDD F12,F10,F2   ADDD F16,F14,F2                      4
                                        ADDD F20,F18,F2   ADDD F24,F22,F2                      5
    SD 0(R1),F4       SD -8(R1),F8      ADDD F28,F26,F2                                        6
    SD -16(R1),F12    SD -24(R1),F16                                                           7
    SD -32(R1),F20    SD -40(R1),F24                                        SUBI R1,R1,#48     8
    SD -0(R1),F28                                                           BNEZ R1,LOOP       9
° Unrolled 7 times to avoid delays
° 7 results in 9 clocks, or 1.3 clocks per iteration
° Need more registers in VLIW (EPIC => 128 int + 128 FP)

Software Pipelining
° Observation: if iterations of a loop are independent, then we can get more ILP by taking instructions from different iterations
° Software pipelining: reorganizes loops so that each iteration is made from instructions chosen from different iterations of the original loop (~ Tomasulo in SW)

Software Pipelining Example
° Before: unrolled 3 times
    1  LD    F0,0(R1)
    2  ADDD  F4,F0,F2
    3  SD    0(R1),F4
    4  LD    F6,-8(R1)
    5  ADDD  F8,F6,F2
    6  SD    -8(R1),F8
    7  LD    F10,-16(R1)
    8  ADDD  F12,F10,F2
    9  SD    -16(R1),F12
   10  SUBI  R1,R1,#24
   11  BNEZ  R1,LOOP
° After: software pipelined
    1  SD    0(R1),F4    ;Stores M[i]
    2  ADDD  F4,F0,F2    ;Adds to M[i-1]
    3  LD    F0,-16(R1)  ;Loads M[i-2]
    4  SUBI  R1,R1,#8
    5  BNEZ  R1,LOOP
° Symbolic loop unrolling:
  - Maximize the result-use distance
  - Less code space than unrolling
  - Fill & drain the pipe only once per loop, vs. once per unrolled iteration in loop unrolling
(Diagram: "SW pipeline" vs. "loop unrolled", showing overlapped operations over time.)
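A rough C rendering of the software-pipelined kernel above, as a sketch: the prologue is simplified and the epilogue is omitted, just as on the slide, and n >= 3 is assumed. Each trip stores the element finished earlier, adds the scalar to the element loaded on the previous trip, and loads the element needed two trips later.

void add_scalar_swp(double *x, long n, double s)
{
    double loaded = x[n - 1];          /* prologue: first load and add     */
    double summed = loaded + s;
    loaded = x[n - 2];

    for (long i = n - 1; i >= 2; i--) {
        x[i]   = summed;               /* SD   0(R1),F4   : store M[i]     */
        summed = loaded + s;           /* ADDD F4,F0,F2   : add to M[i-1]  */
        loaded = x[i - 2];             /* LD   F0,-16(R1) : load M[i-2]    */
    }
    /* epilogue: store the last two results (omitted) */
}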

Software Pipelining with Loop Unrolling in VLIW
    Memory ref 1       Memory ref 2     FP operation 1     FP op. 2   Int. op/branch    Clock
    LD F0,-48(R1)      ST 0(R1),F4      ADDD F4,F0,F2                                    1
    LD F6,-56(R1)      ST -8(R1),F8     ADDD F8,F6,F2                 SUBI R1,R1,#24     2
    LD F10,-40(R1)     ST 8(R1),F12     ADDD F12,F10,F2               BNEZ R1,LOOP       3
° Software pipelined across 9 iterations of the original loop. In each iteration of the above loop, we:
  - Store to m, m-8, m-16 (iterations I-3, I-2, I-1)
  - Compute for m-24, m-32, m-40 (iterations I, I+1, I+2)
  - Load from m-48, m-56, m-64 (iterations I+3, I+4, I+5)
° 9 results in 9 cycles, or 1 clock per iteration
° Average: 3.3 ops per clock, 66% efficiency
° Note: need fewer registers for software pipelining (only using 7 registers here, versus 15 before)

Can We Use HW to Get CPI Closer to 1?
° Why in HW at run time?
  - Works when we can't know the real dependence at compile time
  - Compiler is simpler
  - Code for one machine runs well on another
° Key idea: allow instructions behind a stall to proceed:
    DIVD  F0,F2,F4
    ADDD  F10,F0,F8
    SUBD  F12,F8,F14
° Out-of-order execution => out-of-order completion
° Disadvantages?
  - Complexity
  - Precise interrupts harder! (Talk about this next time)

Problems?
° How do we prevent WAR and WAW hazards?
° How do we deal with variable latency? Forwarding for RAW hazards is harder.
(Diagram: code example with the RAW and WAR dependences marked.)

The Big Picture: Where Are We Now?
° The Five Classic Components of a Computer: Processor (Control + Datapath), Memory, Input, Output
° Today's topics:
  - Recap of last lecture
  - Hardware loop unrolling with the Tomasulo algorithm
  - Administrivia
  - Speculation, branch prediction
  - Reorder buffers

Scoreboard: A Bookkeeping Technique
° Out-of-order execution divides the ID stage:
  1. Issue: decode instructions, check for structural hazards
  2. Read operands: wait until no data hazards, then read operands
° Scoreboards date to the CDC 6600 in 1963
° Instructions execute whenever they are not dependent on previous instructions and there are no hazards
° CDC 6600: in-order issue, out-of-order execution, out-of-order commit (or completion)
  - No forwarding!
  - Imprecise interrupt/exception model for now

Scoreboard Architecture (CDC 6600)
(Block diagram: the registers feed a set of functional units (FP Mult, FP Divide, FP Add, Integer) and memory, all under control of the SCOREBOARD.)

Scoreboard Implications
° Out-of-order completion => WAR, WAW hazards?
° Solutions for WAR:
  - Stall writeback until the registers have been read
  - Read registers only during the Read Operands stage
° Solution for WAW: detect the hazard and stall issue of the new instruction until the other instruction completes
° No register renaming!
° Need multiple instructions in the execution phase => multiple execution units or pipelined execution units
° The scoreboard keeps track of dependencies between instructions that have already issued
° The scoreboard replaces ID, EX, WB with 4 stages

Four Stages of Scoreboard Control
° Issue: decode instructions & check for structural hazards (ID1)
  - Instructions issued in program order (for hazard checking)
  - Don't issue if there is a structural hazard
  - Don't issue if the instruction is output-dependent on any previously issued but uncompleted instruction (no WAW hazards)
° Read operands: wait until no data hazards, then read operands (ID2)
  - All real dependencies (RAW hazards) are resolved in this stage, since we wait for instructions to write back data
  - No forwarding of data in this model!

Four Stages of Scoreboard Control
° Execution: operate on operands (EX)
  - The functional unit begins execution upon receiving its operands. When the result is ready, it notifies the scoreboard that it has completed execution.
° Write result: finish execution (WB)
  - Stall until there are no WAR hazards with previous instructions. Example:
      DIVD  F0,F2,F4
      ADDD  F10,F0,F8
      SUBD  F8,F8,F14
    The CDC 6600 scoreboard would stall SUBD until ADDD reads its operands

Three Parts of the Scoreboard
° Instruction status: which of the 4 steps the instruction is in
° Functional unit status: indicates the state of the functional unit (FU); 9 fields for each functional unit:
  - Busy: indicates whether the unit is busy or not
  - Op: operation to perform in the unit (e.g., + or -)
  - Fi: destination register
  - Fj, Fk: source-register numbers
  - Qj, Qk: functional units producing source registers Fj, Fk
  - Rj, Rk: flags indicating when Fj, Fk are ready
° Register result status: indicates which functional unit will write each register, if one exists; blank when no pending instruction will write that register
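The functional-unit status entry translates almost directly into a record. Below is a minimal C sketch of the bookkeeping and of the issue/read-operand checks paraphrased from the four-stage description; it is illustrative only, not the CDC 6600's actual implementation.

#include <stdbool.h>

/* One functional-unit status entry, mirroring the nine fields listed above. */
typedef struct {
    bool busy;       /* Busy  : unit currently in use?                   */
    int  op;         /* Op    : operation to perform (e.g., add, mul)    */
    int  Fi;         /* Fi    : destination register                     */
    int  Fj, Fk;     /* Fj,Fk : source-register numbers                  */
    int  Qj, Qk;     /* Qj,Qk : FUs producing Fj, Fk (-1 = none pending) */
    bool Rj, Rk;     /* Rj,Rk : Fj, Fk ready and not yet read?           */
} FUStatus;

/* Register result status: which FU will write each register (-1 = none). */
int result_fu[32];

/* Issue (ID1): need a free unit and no WAW on the destination register. */
bool can_issue(const FUStatus *fu, int dest_reg)
{
    return !fu->busy && result_fu[dest_reg] == -1;
}

/* Read operands (ID2): both sources ready means all RAW hazards resolved. */
bool can_read_operands(const FUStatus *fu)
{
    return fu->Rj && fu->Rk;
}

The write-result stage's WAR check would, in the same spirit, scan the other entries' Fj/Fk and Rj/Rk flags and hold the write of Fi until no earlier instruction still needs to read the old value.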

CDC 6600 Scoreboard
° Speedup 1.7 from the compiler; 2.5 by hand, BUT slow memory (no cache) limits the benefit
° Limitations of the 6600 scoreboard:
  - No forwarding hardware
  - Limited to instructions in a basic block (small window)
  - Small number of functional units (structural hazards), especially integer/load-store units
  - Does not issue on structural hazards
  - Waits for WAR hazards
  - Prevents WAW hazards

Summary #1/2: Compiler Techniques for Parallelism
° Loop unrolling => multiple iterations of the loop in software:
  - Amortizes loop overhead over several iterations
  - Gives more opportunity for scheduling around stalls
° Software pipelining => take one instruction from each of several iterations of the loop:
  - Software overlapping of loop iterations
  - Today will show hardware overlapping of loop iterations
° Very Long Instruction Word machines (VLIW) => multiple operations coded in a single, long instruction:
  - Requires a sophisticated compiler to decide which operations can be done in parallel
  - Trace scheduling => find the common path and schedule code as if branches didn't exist (+ add "fixup code")
° All of these require additional registers

Summary #2/2
° HW exploiting ILP:
  - Works when we can't know dependences at compile time
  - Code for one machine runs well on another
° Key idea of the scoreboard: allow instructions behind a stall to proceed (Decode => issue instruction & read operands)
  - Enables out-of-order execution => out-of-order completion
  - The ID stage checked for both structural and data dependencies
  - The original version didn't handle forwarding
  - No automatic register renaming