CPE 631 Lecture 10: Instruction Level Parallelism and Its Dynamic Exploitation
Aleksandar Milenković, milenka@ece.uah.edu
Electrical and Computer Engineering, University of Alabama in Huntsville

Outline
Instruction Level Parallelism (ILP)
Recap: Data Dependencies
Extended MIPS Pipeline and Hazards
Dynamic scheduling with a scoreboard
Here is the outline of today's lecture. First, we will discuss instruction-level parallelism in general. Then, we will discuss techniques that reduce the impact of data and control hazards, such as compiler techniques that increase the amount of parallelism and dynamic scheduling with scoreboarding. Last, we will see how multiple-issue processors can decrease CPI.

ILP: Concepts and Challenges
ILP (Instruction Level Parallelism) – overlap execution of unrelated instructions
Techniques that increase the amount of parallelism exploited among instructions
  reduce the impact of data and control hazards
  increase the processor's ability to exploit parallelism
Pipeline CPI = Ideal pipeline CPI + Structural stalls + RAW stalls + WAR stalls + WAW stalls + Control stalls
Reducing each of the terms on the right-hand side minimizes CPI and thus increases instruction throughput.
The potential to overlap the execution of unrelated instructions is called Instruction Level Parallelism (ILP). Here we are particularly interested in techniques that increase the amount of parallelism by reducing the impact of data and control hazards and by increasing the processor's ability to exploit parallelism. The CPI of a pipelined machine is the sum of the base CPI and all contributions from stalls. By reducing each of the terms on the right-hand side we can minimize CPI and thus increase instruction throughput.
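As a quick worked example (the stall contributions below are illustrative numbers, not figures from the lecture): with an ideal pipeline CPI of 1.0 and average per-instruction stall contributions of 0.10 structural, 0.30 RAW, 0.05 WAR, 0.05 WAW, and 0.20 control, the pipeline CPI is 1.0 + 0.10 + 0.30 + 0.05 + 0.05 + 0.20 = 1.70, so eliminating the RAW stalls alone would bring CPI down to 1.40.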

Two approaches to exploit parallelism
Dynamic techniques largely depend on hardware to locate the parallelism
Static techniques rely on software

Techniques to exploit parallelism
Technique (section in the textbook) – Reduces
Forwarding and bypassing (A.2) – Data hazard (DH) stalls
Delayed branches (A.2) – Control hazard (CH) stalls
Basic dynamic scheduling (A.8) – DH stalls (RAW)
Dynamic scheduling with register renaming (3.2) – WAR and WAW stalls
Dynamic branch prediction (3.4) – CH stalls
Issuing multiple instructions per cycle (3.6) – Ideal CPI
Speculation (3.7) – Data and control stalls
Dynamic memory disambiguation (3.2, 3.7) – RAW stalls with memory
Loop unrolling (4.1) – CH stalls
Basic compiler pipeline scheduling (A.2, 4.1) – DH stalls
Compiler dependence analysis (4.4) – Ideal CPI, DH stalls
Software pipelining and trace scheduling (4.3) – Ideal CPI and DH stalls
Compiler speculation (4.4) – Ideal CPI, and data/control hazard stalls

Dynamically Scheduled Pipelines

Overcoming Data Hazards with Dynamic Scheduling
Why in HW at run time?
  Works when we can't know the real dependence at compile time
  Simpler compiler
  Code for one machine runs well on another
Example — key idea: allow instructions behind a stall to proceed
  DIV.D F0,F2,F4
  ADD.D F10,F0,F8
  SUB.D F12,F8,F12
SUB.D cannot execute because the dependence of ADD.D on DIV.D causes the pipeline to stall; yet SUB.D is not data dependent on anything!
Data dependences, or true dependences, are the real problem, and so far we have discussed how to cope with them. In our DLX pipeline the hardware checks for data dependences during the ID stage. If there is a data dependence that can be resolved by forwarding, the current instruction is issued. However, if there is a data dependence that cannot be hidden, the hazard detection hardware stalls the pipeline (from the current instruction on, while instructions that entered the pipeline earlier continue to execute) until the hazard is resolved. We also examined compiler techniques for scheduling instructions so as to separate dependent instructions and minimize the number of actual hazards and resultant stalls. Here we give a very short overview of dynamic scheduling, which offers several advantages.

Overcoming Data Hazards with Dynamic Scheduling (cont'd)
Enables out-of-order execution => out-of-order completion
Out-of-order execution divides the ID stage:
  1. Issue — decode instructions, check for structural hazards
  2. Read operands — wait until no data hazards, then read operands
Scoreboarding – a technique for allowing instructions to execute out of order when there are sufficient resources and no data dependences (CDC 6600, 1963)
In the original MIPS pipeline, both structural and data hazards were checked during ID. When an instruction could execute properly, it was issued from ID. To allow us to begin executing the SUB.D in the example, we must separate the issue process into two parts.

Scoreboarding Implications
Out-of-order completion => WAR, WAW hazards?
Solutions for WAR:
  Queue both the operation and copies of its operands
  Read registers only during the Read Operands stage
For WAW, must detect the hazard: stall until the other instruction completes
Need to have multiple instructions in the execution phase => multiple execution units or pipelined execution units
The scoreboard keeps track of dependences and the state of operations
The scoreboard replaces ID, EX, WB with 4 stages
WAR example (SUB.D writes F8, which ADD.D still has to read):
  DIV.D F0,F2,F4
  ADD.D F10,F0,F8
  SUB.D F8,F8,F12
WAW example (SUB.D and ADD.D both write F10):
  DIV.D F0,F2,F4
  ADD.D F10,F0,F8
  SUB.D F10,F8,F12

Four Stages of Scoreboard Control
ID1: Issue — decode instructions & check for structural hazards
ID2: Read operands — wait until no data hazards, then read operands
EX: Execute — operate on operands; when the result is ready, the functional unit notifies the scoreboard that it has completed execution
WB: Write result — finish execution; the scoreboard checks for WAR hazards. If none, it writes the result. If there is a WAR hazard, it stalls the instruction.
Scoreboarding stalls the SUB.D in its write-result stage until ADD.D reads its operands:
  DIV.D F0,F2,F4
  ADD.D F10,F0,F8
  SUB.D F8,F8,F12

Four Stages of Scoreboard Control
1. Issue — decode instructions & check for structural hazards (ID1)
If a functional unit for the instruction is free and no other active instruction has the same destination register (no WAW hazard), the scoreboard issues the instruction to the functional unit and updates its internal data structure. If a structural or WAW hazard exists, the instruction issue stalls, and no further instructions will issue until these hazards are cleared.
2. Read operands — wait until no data hazards, then read operands (ID2)
A source operand is available if no earlier issued active instruction is going to write it, or if the register containing the operand is being written by a currently active functional unit. When the source operands are available, the scoreboard tells the functional unit to proceed to read the operands from the registers and begin execution. The scoreboard resolves RAW hazards dynamically in this step, and instructions may be sent into execution out of order.

Four Stages of Scoreboard Control
3. Execution — operate on operands (EX)
The functional unit begins execution upon receiving operands. When the result is ready, it notifies the scoreboard that it has completed execution.
4. Write result — finish execution (WB)
Once the scoreboard is aware that the functional unit has completed execution, it checks for WAR hazards. If none, it writes the result. If there is a WAR hazard, it stalls the instruction.
Example: the CDC 6600 scoreboard would stall SUB.D until ADD.D reads its operands:
  DIV.D F0,F2,F4
  ADD.D F10,F0,F8
  SUB.D F8,F8,F14

Three Parts of the Scoreboard
1. Instruction status — which of the 4 steps the instruction is in (capacity = window size)
2. Functional unit status — indicates the state of the functional unit (FU); 9 fields for each functional unit:
  Busy — indicates whether the unit is busy or not
  Op — operation to perform in the unit (e.g., + or –)
  Fi — destination register
  Fj, Fk — source-register numbers
  Qj, Qk — functional units producing source registers Fj, Fk
  Rj, Rk — flags indicating when Fj, Fk are ready
3. Register result status — indicates which functional unit will write each register, if one exists; blank when no pending instruction will write that register
Notes: 1. the four stages of instruction execution. 2. FU status: the normal things to keep track of (RAW & structural hazards via Busy); Fi comes from the instruction format of the machine (Fi is the destination); the add unit can add or subtract; Rj, Rk give the status of the source registers (Yes means ready); a No in Rj or Rk means the operand is still waiting for an FU to write its result, and Qj, Qk say which FU it is waiting for. 3. Register result status (WAW & WAR): which FU is going to write into each register. The scoreboard on the 6600 matched the number of FUs. FU latencies: Add 2, Mult 10, Div 40 clocks.
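To make the bookkeeping concrete, here is a minimal C++ sketch of the three scoreboard data structures listed above; it is an illustration only, and the type and field names (FuncUnitStatus, ScoreboardState, kNumFUs, and so on) are assumptions, not taken from the lecture or any real implementation.

    #include <array>
    #include <string>

    // One entry of the functional unit status table (the nine fields above).
    struct FuncUnitStatus {
        bool        busy = false;           // Busy: is the unit in use?
        std::string op;                     // Op: operation to perform (e.g., "+", "-")
        int         fi = -1;                // Fi: destination register number
        int         fj = -1, fk = -1;       // Fj, Fk: source register numbers
        int         qj = -1, qk = -1;       // Qj, Qk: FU producing Fj/Fk (-1 = none)
        bool        rj = false, rk = false; // Rj, Rk: are the source operands ready?
    };

    // Instruction status: which of the four steps each in-flight instruction is in.
    enum class InstrStage { Issue, ReadOperands, Execute, WriteResult, Done };

    struct ScoreboardState {
        static constexpr int kNumFUs  = 5;    // e.g., integer, 2x mult, add, divide
        static constexpr int kNumRegs = 32;   // FP register file size

        std::array<FuncUnitStatus, kNumFUs> fu;        // functional unit status
        std::array<int, kNumRegs>           regResult; // register result status: FU that
                                                       // will write each register, -1 if none
        ScoreboardState() { regResult.fill(-1); }
    };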

MIPS with a Scoreboard
(Block diagram: the register file and the FP functional units — FP multipliers, FP dividers, and adders Add1–Add3 — are each connected to the scoreboard through control/status lines.)

Detailed Scoreboard Pipeline Control
Instruction status | Wait until | Bookkeeping
Issue | Not Busy(FU) and not Result(D) | Busy(FU)←Yes; Op(FU)←op; Fi(FU)←D; Fj(FU)←S1; Fk(FU)←S2; Qj←Result(S1); Qk←Result(S2); Rj←not Qj; Rk←not Qk; Result(D)←FU
Read operands | Rj and Rk | Rj←No; Rk←No
Execution complete | Functional unit done | (none)
Write result | ∀f ((Fj(f)≠Fi(FU) or Rj(f)=No) and (Fk(f)≠Fi(FU) or Rk(f)=No)) | ∀f (if Qj(f)=FU then Rj(f)←Yes); ∀f (if Qk(f)=FU then Rk(f)←Yes); Result(Fi(FU))←0; Busy(FU)←No
Notes: 1. Issue — only if there is no structural hazard AND no WAW hazard (no functional unit is going to write this destination register); one issue per clock cycle. 2. Read operands — (RAW) if no instruction is going to write a source register of this instruction (alternatively, no write signal this clock cycle), begin execution; there are many read ports, so many instructions can read at once. 3. Execution complete — multiple instructions can complete in one clock cycle. 4. Write result — (WAR) only if no instruction is waiting to read the destination register; assume multiple write ports; wait for the clock cycle to write and then read the results; assume issue & write can overlap. Showing all the clock cycles needs 20 or so. Latency: the minimum is 4, one cycle through each of the 4 stages.
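A minimal sketch of the issue and write-result conditions from this table, expressed against the ScoreboardState structure sketched earlier; the helper names (canIssue, canWriteResult) are illustrative, not part of any real scoreboard description.

    // Issue: wait until the functional unit is not busy (no structural hazard)
    // and no active instruction will write the same destination register (no WAW).
    bool canIssue(const ScoreboardState& sb, int fuIndex, int destReg) {
        return !sb.fu[fuIndex].busy && sb.regResult[destReg] == -1;
    }

    // Write result: wait until no other active FU still has to read the old
    // value of our destination register (no WAR). An FU f is still waiting on
    // the register if its Fj or Fk names it and the corresponding ready flag
    // is still Yes (i.e., that operand has not been read yet).
    bool canWriteResult(const ScoreboardState& sb, int fuIndex) {
        const int dest = sb.fu[fuIndex].fi;
        for (int f = 0; f < ScoreboardState::kNumFUs; ++f) {
            if (f == fuIndex || !sb.fu[f].busy) continue;
            if ((sb.fu[f].fj == dest && sb.fu[f].rj) ||
                (sb.fu[f].fk == dest && sb.fu[f].rk))
                return false;
        }
        return true;
    }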

Scoreboard Example 02/01/2019 UAH-CPE631

Things to Remember
Pipeline CPI = Ideal pipeline CPI + Structural stalls + RAW stalls + WAR stalls + WAW stalls + Control stalls
Data dependencies
Dynamic scheduling to minimize stalls
Dynamic scheduling with a scoreboard

Scoreboard Example 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 1 Issue 1st L.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 2 Issue 2nd L.D? Structural hazard! No further instructions will issue! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 3 Issue MUL.D? 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 4 Check for WAR hazards! If none, write result! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 5 Issue 2nd L.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 6 Issue MUL.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 7 Issue SUB.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 8 Issue DIV.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 9 Read operands for MUL.D and SUB.D! Assume we can feed Mult1 and Add units in the same clock cycle. Issue ADD.D? Structural Hazard (unit is busy)! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 11 Last cycle of SUB.D execution. 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 12 Check WAR on F8. Write F8. 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 13 Issue ADD.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 14 Read operands for ADD.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 15 Read operands for ADD.D! 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 16 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 17 Why can't F6 be written yet? 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 19 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 20 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 21 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 22 Write F6? 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 61 Write F6? 02/01/2019 UAH-CPE631

Scoreboard Example: Cycle 62 Write F6? 02/01/2019 UAH-CPE631

Scoreboard Results
For the CDC 6600:
  70% improvement for FORTRAN
  150% improvement for hand-coded assembly language
  Cost was similar to one of the functional units — surprisingly low; the bulk of the cost was in the extra buses
Still, this was in ancient times:
  no caches & no main semiconductor memory
  no software pipelining
  compilers?
So why is it coming back? Performance via ILP.

Scoreboard Limitations
Amount of parallelism among instructions: can we find independent instructions to execute?
Number of scoreboard entries: how far ahead the pipeline can look for independent instructions (we assume the window does not extend beyond a branch)
Number and types of functional units: avoid structural hazards
Presence of antidependences and output dependences: WAR and WAW stalls become more important

Tomasulo's Algorithm
Used in the IBM 360/91 FPU (before caches)
Goal: high FP performance without special compilers
Conditions:
  Small number of floating-point registers (4 in the 360) prevented interesting compiler scheduling of operations
  Long memory accesses and long FP delays
This led Tomasulo to try to figure out how to get more effective registers — renaming in hardware!
Why study a 1966 computer? The descendants of this design have flourished: Alpha 21264, HP 8000, MIPS 10000, Pentium III, PowerPC 604, …

Tomasulo's Algorithm (cont'd)
Control & buffers distributed with the functional units (FUs)
  FU buffers are called "reservation stations" => they buffer the operands of instructions waiting to issue
Registers in instructions are replaced by values or by pointers to reservation stations (RS) => register renaming
  Avoids WAR and WAW hazards
  More reservation stations than registers, so the hardware can do optimizations compilers can't
Results go to the FUs from the RSs, not through the registers, over a Common Data Bus that broadcasts results to all FUs
Loads and stores are treated as FUs with RSs as well
Integer instructions can go past branches, allowing FP ops beyond the basic block in the FP queue

Tomasulo-based FPU for MIPS
(Block diagram: instructions arrive from the instruction unit into the FP op queue; six load buffers (Load1–Load6) and three store buffers (Store1–Store3) connect to memory; reservation stations Add1–Add3 feed the FP adders and Mult1–Mult2 feed the FP multipliers; the Common Data Bus (CDB) broadcasts results back to the FP registers, reservation stations, and store buffers. RAW memory conflicts are resolved using the addresses in the memory buffers; the integer unit executes in parallel.)

Reservation Station Components
Op: operation to perform in the unit (e.g., + or –)
Vj, Vk: values of the source operands
  Store buffers have a V field: the result to be stored
Qj, Qk: reservation stations producing the source registers (the value to be written)
  Note: Qj/Qk = 0 => the source operand is already available in Vj/Vk
  Store buffers only have Qi, for the RS producing the result
Busy: indicates the reservation station or FU is busy
Register result status: indicates which functional unit will write each register, if one exists; blank when no pending instruction will write that register
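A minimal C++ sketch of a reservation station entry and the per-register result status, following the fields listed above; the names (ReservationStation, RegisterStatus, qi) are illustrative choices, and 0 is used as the "no producer" tag as on the slide.

    #include <string>

    // One reservation station entry.
    struct ReservationStation {
        bool        busy = false;       // Busy: is this RS (and its FU) in use?
        std::string op;                 // Op: operation to perform (e.g., "+", "*")
        double      vj = 0.0, vk = 0.0; // Vj, Vk: values of the source operands
        int         qj = 0, qk = 0;     // Qj, Qk: tags of the RSs producing the source
                                        // values; 0 = value already available in Vj/Vk
    };

    // Register result status: for each FP register, the tag of the reservation
    // station that will write it (0 if no pending instruction will write it).
    struct RegisterStatus {
        int qi = 0;
    };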

Three Stages of Tomasulo Algorithm
1. Issue — get the instruction from the FP op queue
  If a reservation station is free (no structural hazard), the control issues the instruction & sends the operands (renames registers)
2. Execute — operate on operands (EX)
  When both operands are ready, execute; if not ready, watch the Common Data Bus for the result
3. Write result — finish execution (WB)
  Write it on the Common Data Bus to all awaiting units; mark the reservation station available
Normal data bus: data + destination ("go to" bus)
Common data bus: data + source ("come from" bus)
  64 bits of data + 4 bits of functional-unit source address
  A unit captures the value if the broadcast source matches the functional unit it expects (the one producing the result); the bus does the broadcast
Example speeds: 2 clocks for FP +, –; 10 for ×; 40 clocks for ÷
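To illustrate stages 1 and 3, here is a simplified sketch of issue-time renaming and the CDB broadcast, using the ReservationStation and RegisterStatus structures sketched above; the function names, array sizes, and tag convention are assumptions made for the example, not the 360/91 design.

    #include <array>

    using RSArray  = std::array<ReservationStation, 8>;  // all reservation stations
    using RegArray = std::array<RegisterStatus, 32>;     // register result status

    // Issue (rename): for one source register, either copy the value from the
    // register file or record the tag of the RS that will produce it.
    void renameSource(const RegArray& regStat, const std::array<double, 32>& regFile,
                      int srcReg, double& v, int& q) {
        if (regStat[srcReg].qi != 0) { q = regStat[srcReg].qi; }     // wait on producer
        else                         { v = regFile[srcReg]; q = 0; } // value ready now
    }

    // Write result: broadcast {tag, value} on the CDB; every waiting RS compares
    // the tag against its Qj/Qk and captures the value on a match.
    void cdbBroadcast(RSArray& rs, RegArray& regStat, int tag, double value) {
        for (auto& r : rs) {
            if (!r.busy) continue;
            if (r.qj == tag) { r.vj = value; r.qj = 0; }
            if (r.qk == tag) { r.vk = value; r.qk = 0; }
        }
        for (auto& g : regStat)         // the register file also snoops the CDB
            if (g.qi == tag) g.qi = 0;  // (the value would be written to the register here)
    }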

Tomasulo Example
(Setup for the example table: the instruction stream, 3 load buffers, 3 FP adder reservation stations, 2 FP multiplier reservation stations, a count-down timer for each FU, and a clock cycle counter.)

Tomasulo Example Cycle 1 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 2 Note: Can have multiple loads outstanding 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 3 Note: registers names are removed (“renamed”) in Reservation Stations; MULT issued Load1 completing; what is waiting for Load1? 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 4 Load2 completing; what is waiting for Load2? 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 5 Timer starts down for Add1, Mult1 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 6 Issue ADDD here despite name dependency on F6? 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 7 Add1 (SUBD) completing; what is waiting for it? 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 8 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 9 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 10 Add2 (ADDD) completing; what is waiting for it? 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 11 Write result of ADDD here? All quick instructions complete in this cycle! 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 12 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 13 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 14 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 15 Mult1 (MULTD) completing; what is waiting for it? 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 16 Just waiting for Mult2 (DIVD) to complete 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 55 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 56 Mult2 (DIVD) is completing; what is waiting for it? 02/01/2019 UAH-CPE631

Tomasulo Example Cycle 57 Once again: In-order issue, out-of-order execution and out-of-order completion. 02/01/2019 UAH-CPE631

Tomasulo Drawbacks
Complexity: delays of the 360/91, MIPS 10000, Alpha 21264, IBM PPC 620 in CA:AQA 2/e, but not in silicon!
Many associative stores (CDB) at high speed
Performance limited by the Common Data Bus
  Each CDB must go to multiple functional units => high capacitance, high wiring density
  Number of functional units that can complete per cycle is limited to one!
  Multiple CDBs => more FU logic for parallel associative stores
Non-precise interrupts! We will address this later.

Tomasulo Loop Example
  Loop: LD    F0, 0(R1)
        MULTD F4, F0, F2
        SD    F4, 0(R1)
        SUBI  R1, R1, #8
        BNEZ  R1, Loop
This time assume Multiply takes 4 clocks
Assume the 1st load takes 8 clocks (L1 cache miss) and the 2nd load takes 1 clock (hit)
To be clear, we will show the clocks for SUBI and BNEZ
Reality: integer instructions run ahead of FP instructions
Show 2 iterations

Loop Example
(Setup for the loop example table: an iteration count column and store buffers are added; R1 is the value of the register used for the address and iteration control.)

Loop Example Cycle 1 02/01/2019 UAH-CPE631

Loop Example Cycle 2 02/01/2019 UAH-CPE631

Loop Example Cycle 3 Implicit renaming sets up data flow graph 02/01/2019 UAH-CPE631

Loop Example Cycle 4 02/01/2019 UAH-CPE631

Loop Example Cycle 5 02/01/2019 UAH-CPE631

Loop Example Cycle 6 02/01/2019 UAH-CPE631

Loop Example Cycle 7 02/01/2019 UAH-CPE631

Loop Example Cycle 8 02/01/2019 UAH-CPE631

Loop Example Cycle 9 02/01/2019 UAH-CPE631

Loop Example Cycle 10 02/01/2019 UAH-CPE631

Loop Example Cycle 11 02/01/2019 UAH-CPE631

Loop Example Cycle 12 02/01/2019 UAH-CPE631

Loop Example Cycle 13 02/01/2019 UAH-CPE631

Loop Example Cycle 14 02/01/2019 UAH-CPE631

Loop Example Cycle 15 02/01/2019 UAH-CPE631

Loop Example Cycle 16 02/01/2019 UAH-CPE631

Loop Example Cycle 17 02/01/2019 UAH-CPE631

Loop Example Cycle 18 02/01/2019 UAH-CPE631

Loop Example Cycle 19 02/01/2019 UAH-CPE631

Loop Example Cycle 20 Once again: In-order issue, out-of-order execution and out-of-order completion. 02/01/2019 UAH-CPE631

Why can Tomasulo overlap iterations of loops?
Register renaming
  Multiple iterations use different physical destinations for registers (dynamic loop unrolling)
Reservation stations
  Permit instruction issue to advance past integer control-flow operations
  Also buffer old values of registers — totally avoiding the WAR stall that we saw in the scoreboard
Other perspective: Tomasulo builds the data-flow dependency graph on the fly

Tomasulo's scheme offers 2 major advantages
(1) The distribution of the hazard detection logic
  Distributed reservation stations and the CDB
  If multiple instructions are waiting on a single result, and each already has its other operand, then the instructions can be released simultaneously by the broadcast on the CDB
  If a centralized register file were used, the units would have to read their results from the registers when the register buses are available
(2) The elimination of stalls for WAW and WAR hazards

What about Precise Interrupts?
Tomasulo had: in-order issue, out-of-order execution, and out-of-order completion
We need to "fix" the out-of-order completion aspect so that we can find a precise breakpoint in the instruction stream

Relationship between precise interrupts and speculation
Speculation is a form of guessing
Important for branch prediction: we need to "take our best shot" at predicting the branch direction
If we speculate and are wrong, we need to back up and restart execution at the point where we predicted incorrectly: this is exactly the same as precise exceptions!
Technique for both precise interrupts/exceptions and speculation: in-order completion or commit

HW support for precise interrupts
Need a HW buffer for the results of uncommitted instructions: the reorder buffer
  3 fields: instruction, destination, value
  Use the reorder buffer number instead of the reservation station number when execution completes
  Supplies operands between execution complete & commit
  (The reorder buffer can be an operand source => more registers, like the RSs)
  Instructions commit in order
  Once an instruction commits, its result is put into the register
  As a result, it is easy to undo speculated instructions on mispredicted branches or exceptions
(Block diagram: the FP op queue and FP registers feed the reservation stations and FP adders; the reorder buffer sits between the functional units and the register file.)
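A minimal sketch of a reorder buffer entry holding the three fields named above (instruction, destination, value), plus a ready flag so the commit stage knows when the result has arrived; the structure and field names are illustrative assumptions, not a specific published design.

    #include <string>

    // One reorder buffer (ROB) entry for speculative Tomasulo.
    struct ROBEntry {
        std::string instr;         // the instruction (kept for bookkeeping)
        int         dest  = -1;    // destination register (-1: none, e.g., a store)
        double      value = 0.0;   // the result, once execution completes
        bool        ready = false; // has the result been written to this entry yet?
    };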

Four Steps of Speculative Tomasulo Algorithm
1. Issue — get the instruction from the FP op queue
  If a reservation station and a reorder buffer slot are free, issue the instruction & send the operands & the reorder buffer number for the destination (this stage is sometimes called "dispatch")
2. Execution — operate on operands (EX)
  When both operands are ready, execute; if not ready, watch the CDB for the result; when both are in the reservation station, execute; checks RAW (sometimes called "issue")
3. Write result — finish execution (WB)
  Write on the Common Data Bus to all awaiting FUs & the reorder buffer; mark the reservation station available
4. Commit — update the register with the reorder-buffer result
  When the instruction is at the head of the reorder buffer & the result is present, update the register with the result (or store to memory) and remove the instruction from the reorder buffer. A mispredicted branch flushes the reorder buffer (this stage is sometimes called "graduation").
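Commit retires instructions strictly from the head of the reorder buffer, which is what restores the appearance of in-order completion. Below is a minimal sketch under the same assumptions as the ROBEntry structure above; the circular-buffer layout and names (ReorderBuffer, commitOne, flush) are illustrative.

    #include <array>

    struct ReorderBuffer {
        static constexpr int kSize = 16;
        std::array<ROBEntry, kSize> entries;
        int head = 0, count = 0;            // circular buffer of in-flight instructions

        // Commit at most one instruction per cycle: only the head may retire,
        // and only once its result has arrived.
        bool commitOne(std::array<double, 32>& regFile) {
            if (count == 0 || !entries[head].ready) return false;
            if (entries[head].dest >= 0)
                regFile[entries[head].dest] = entries[head].value; // update register
            entries[head] = ROBEntry{};                            // free the slot
            head = (head + 1) % kSize;
            --count;
            return true;
        }

        // On a mispredicted branch or an exception, the speculated instructions
        // are simply discarded (here the whole buffer is flushed for brevity).
        void flush() {
            entries.fill(ROBEntry{});
            head = 0;
            count = 0;
        }
    };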

What are the hardware complexities with the reorder buffer (ROB)?
How do you find the latest version of a register?
  (As specified by the Smith paper) you need an associative comparison network
  Could use a future file, or just use the register result status to track which specific reorder buffer entry will receive the value
Need as many ports on the ROB as on the register file
(Block diagram: the reorder table — with fields Dest Reg, Result, Exceptions?, Valid, Program Counter — and a comparison network connecting the FP op queue, reservation stations, FP adder, and FP registers.)
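One way to picture the register-result-status approach mentioned above, sketched under the same assumptions as the ROB code earlier: each architectural register remembers which ROB entry (if any) will produce its newest value, so a reader checks that entry before falling back to the committed register file. The names (SpecRegStatus, readRegister, robTag) are illustrative.

    // robTag = index of the ROB entry that will write this register,
    // or -1 if the register file already holds the newest (committed) value.
    struct SpecRegStatus { int robTag = -1; };

    // Return the newest value of register r if it is available; otherwise report
    // the ROB tag the reader must wait for on the CDB.
    bool readRegister(const ReorderBuffer& rob,
                      const std::array<SpecRegStatus, 32>& stat,
                      const std::array<double, 32>& regFile,
                      int r, double& value, int& waitTag) {
        const int tag = stat[r].robTag;
        if (tag < 0)                { value = regFile[r];             return true; } // committed value
        if (rob.entries[tag].ready) { value = rob.entries[tag].value; return true; } // forwarded from ROB
        waitTag = tag;                                                               // still in flight
        return false;
    }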

Summary
Reservation stations: implicit register renaming to a larger set of registers + buffering of source operands
  Prevents registers from becoming the bottleneck
  Avoids the WAR and WAW hazards of the scoreboard
  Allows loop unrolling in HW
Not limited to basic blocks (the integer unit gets ahead, beyond branches)
Today, helps with cache misses as well
  Don't stall for an L1 data cache miss (insufficient ILP for an L2 miss?)
Lasting contributions
  Dynamic scheduling
  Register renaming
  Load/store disambiguation
The 360/91 descendants are the Pentium III, PowerPC 604, MIPS R10000, HP-PA 8000, and Alpha 21264