15-740/18-740 Computer Architecture Lecture 10: Out-of-Order Execution Prof. Onur Mutlu Carnegie Mellon University Fall 2011, 10/3/2011

Review: Solutions to Enable Precise Exceptions
- Reorder buffer
- History buffer
- Future register file
- Checkpointing

Reading:
- Smith and Pleszkun, “Implementing Precise Interrupts in Pipelined Processors,” ISCA 1985 and IEEE Trans. on Computers 1988.
- Hwu and Patt, “Checkpoint Repair for Out-of-order Execution Machines,” ISCA 1987.

More on Last Lecture
- Handling of branch mispredictions
- Handling of stores and loads

Today
- Out-of-order execution

Pipelining Issues: Branch Mispredictions
- A branch misprediction resembles an “exception,” except that it is not visible to software.
- What about branch misprediction recovery? It is similar to exception handling, except that recovery can be initiated before the branch is the oldest instruction. All three state recovery methods can be used.
- Difference between exceptions and branch mispredictions? Branch mispredictions are much more common, so recovery needs to be fast.
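
To make the “recovery before the branch is oldest” point concrete, here is a minimal Python sketch of checkpoint-based recovery (not from the lecture; the Pipeline/checkpoint structure and all names are hypothetical): the rename table is snapshotted at each branch and restored as soon as the misprediction is detected, and all younger instructions are flushed.

    from copy import deepcopy

    class Pipeline:
        """Toy model: instructions carry an age; branches snapshot the RAT."""
        def __init__(self):
            self.rat = {f"R{i}": None for i in range(10)}  # None = value is in the arch reg file
            self.in_flight = []          # (age, instr) in program order
            self.checkpoints = {}        # branch age -> RAT snapshot

        def fetch_branch(self, age):
            # Checkpoint the speculative rename state when the branch is renamed.
            self.checkpoints[age] = deepcopy(self.rat)
            self.in_flight.append((age, f"BR@{age}"))

        def fetch_other(self, age, instr):
            self.in_flight.append((age, instr))

        def mispredict(self, branch_age):
            # Recovery starts as soon as the misprediction is known,
            # even if older instructions have not yet retired.
            self.rat = self.checkpoints.pop(branch_age)          # restore rename state
            self.in_flight = [(a, i) for a, i in self.in_flight
                              if a <= branch_age]                # flush younger instructions
            print(f"recovered at branch {branch_age}; {len(self.in_flight)} instrs remain")

    p = Pipeline()
    p.fetch_other(1, "IMUL R3 <- R1, R2")
    p.fetch_branch(2)
    p.fetch_other(3, "ADD R7 <- R3, R9")   # wrong-path instruction
    p.mispredict(2)                        # flushes the ADD, keeps the IMUL and the branch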

Pipelining Issues: Stores
- Handling out-of-order completion of memory operations: UNDOing a memory write is more difficult than UNDOing a register write. Why?
- One idea: keep store address/data in the reorder buffer. How does a load instruction find its data?
- Store/write buffer: similar to the reorder buffer, but used only for store instructions. It is a program-order list of un-committed store operations.
  - When a store is decoded: allocate a store buffer entry.
  - When the store address and data become available: record them in the store buffer entry.
  - When the store is the oldest instruction in the pipeline: update the memory address (i.e., the cache) with the store data.
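
A minimal Python sketch of this store/write buffer behavior (hypothetical structure and names, not the lecture’s exact design): entries are allocated in program order at decode, filled when address and data arrive, drained to the cache only when the store is oldest, and searched by loads for forwarding.

    class StoreBuffer:
        def __init__(self):
            self.entries = []                        # program-order list of uncommitted stores

        def allocate(self):                          # at decode
            e = {"addr": None, "data": None}
            self.entries.append(e)
            return e

        def fill(self, entry, addr, data):           # when address/data become available
            entry["addr"], entry["data"] = addr, data

        def commit_oldest(self, cache):              # when the store is the oldest instruction
            e = self.entries.pop(0)
            cache[e["addr"]] = e["data"]             # only now does memory actually change

        def load(self, addr, cache):
            # The youngest matching uncommitted store forwards its data to the load.
            for e in reversed(self.entries):
                if e["addr"] == addr and e["data"] is not None:
                    return e["data"]
            return cache.get(addr, 0)

    cache = {}
    sb = StoreBuffer()
    st = sb.allocate()
    sb.fill(st, addr=0x100, data=42)
    assert sb.load(0x100, cache) == 42               # load forwarded from the store buffer
    sb.commit_oldest(cache)                          # memory updated at retirement
    assert cache[0x100] == 42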

Summary: Precise Exceptions in Pipelining
When the oldest instruction ready-to-be-retired is detected to have caused an exception, the control logic:
- Recovers architectural state (register file, IP, and memory)
- Flushes all younger instructions in the pipeline
- Saves IP and registers (as specified by the ISA)
- Redirects the fetch engine to the exception handling routine
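
A minimal Python sketch of this retirement-time check (hypothetical reorder-buffer-style structure and names): only the oldest completed instruction is examined; because only retirement writes architectural state, flushing the ROB and redirecting fetch is enough to recover precisely.

    class ROBEntry:
        def __init__(self, pc, dest=None, value=None):
            self.pc, self.dest, self.value = pc, dest, value
            self.done, self.exception = False, False

    def retire(rob, arch_regs, handler_pc):
        """Retire from the oldest end; on an exception, recover precisely."""
        while rob and rob[0].done:
            oldest = rob[0]
            if oldest.exception:
                rob.clear()                      # flush this and all younger instructions
                saved_pc = oldest.pc             # save IP/registers as the ISA specifies
                return ("redirect", handler_pc, saved_pc, dict(arch_regs))
            if oldest.dest is not None:
                arch_regs[oldest.dest] = oldest.value   # commit to architectural state
            rob.pop(0)
        return ("continue", None, None, None)

    arch = {"R1": 5}
    rob = [ROBEntry(0x40, "R1", 7), ROBEntry(0x44)]
    rob[0].done = True
    rob[1].done, rob[1].exception = True, True
    print(retire(rob, arch, handler_pc=0x1000))  # first instr commits, second triggers a redirect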

Putting It Together: In-Order Pipeline with Future File
- Decode (D): Access the future file, allocate entries in the reorder buffer and store buffer, check if the instruction can execute; if so, dispatch the instruction.
- Execute (E): Instructions can complete out-of-order; store-load dependencies are determined.
- Completion (R): Write the result to the reorder/store buffer.
- Retirement/Commit (W): Write the result to the architectural register file or memory.
- In-order dispatch/execution, out-of-order completion, in-order retirement.
[Pipeline diagram omitted: F and D stages feed integer add, integer multiply, FP multiply, and load/store execution units, followed by the R and W stages.]
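
A minimal Python sketch of the future file idea (hypothetical names, simplified to registers only): out-of-order completion updates the future file that decode reads, in-order retirement updates the architectural file, and an exception recovers by copying the architectural file back over the future file.

    class FutureFilePipeline:
        def __init__(self, nregs=10):
            self.arch   = {f"R{i}": 0 for i in range(nregs)}   # updated in order at retirement
            self.future = dict(self.arch)                      # speculative, read at decode

        def decode_read(self, reg):
            return self.future[reg]              # D: youngest value, possibly speculative

        def complete(self, reg, value):
            self.future[reg] = value             # R: out-of-order completion updates future file

        def retire(self, reg, value):
            self.arch[reg] = value               # W: in-order retirement updates arch state

        def recover_from_exception(self):
            self.future = dict(self.arch)        # discard all speculative values

    p = FutureFilePipeline()
    p.complete("R3", 99)          # completes out of order
    print(p.decode_read("R3"))    # 99: younger instructions see the new value early
    p.recover_from_exception()    # exception before the producer retires
    print(p.decode_read("R3"))    # 0: back to the architectural value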

Out-of-Order Execution (Dynamic Instruction Scheduling)

Review: In-order Pipeline
[Pipeline diagram omitted: F and D stages feed integer add, integer multiply, and FP multiply units before W; a cache miss creates a long-latency operation.]
- Problem: A true data dependency stalls dispatch of younger instructions into functional (execution) units.
- Dispatch: the act of sending an instruction to a functional unit.
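
A minimal Python sketch of the dispatch stall (hypothetical names): with in-order dispatch, only the head of the instruction queue is considered, so one instruction with a not-yet-ready source blocks every younger instruction, even independent ones.

    def in_order_dispatch(queue, ready_regs):
        """Dispatch from the head only; stop at the first instruction whose
        sources are not all ready (a true data dependence stalls dispatch)."""
        dispatched = []
        while queue:
            dest, srcs = queue[0]
            if not all(s in ready_regs for s in srcs):
                break                      # head stalls; younger instructions wait behind it
            dispatched.append(queue.pop(0))
            ready_regs.add(dest)           # toy assumption: result ready right away
        return dispatched

    # LD R3 <- [R1] missed in the cache, so R3 is not ready yet.
    ready = {"R1", "R2", "R6", "R7", "R8", "R9"}
    q = [("R3", ["R3", "R1"]),             # ADD  R3 <- R3, R1  (needs R3: stalls)
         ("R1", ["R6", "R7"]),             # ADD  R1 <- R6, R7  (independent, but stuck)
         ("R3", ["R6", "R8"])]             # IMUL R3 <- R6, R8  (independent, but stuck)
    print(in_order_dispatch(q, ready))     # [] -- nothing dispatches until R3 returns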

Can We Do Better?
What do the following two pieces of code have in common (with respect to execution in the previous design)?

    (a) IMUL R3 ← R1, R2      (b) LD   R3 ← R1 (0)
        ADD  R3 ← R3, R1          ADD  R3 ← R3, R1
        ADD  R1 ← R6, R7          ADD  R1 ← R6, R7
        IMUL R3 ← R6, R8          IMUL R3 ← R6, R8
        ADD  R7 ← R3, R9          ADD  R7 ← R3, R9

- Answer: The first ADD stalls the whole pipeline! The ADD cannot dispatch because its source registers are unavailable, so later independent instructions cannot get executed.
- How are the above code portions different? Answer: The load latency is variable (unknown until runtime). What does this affect? Think compiler vs. microarchitecture.

Preventing Dispatch Stalls
- Multiple ways of doing it. You have already seen THREE:
  1. Fine-grained multithreading
  2. Value prediction
  3. Compile-time instruction scheduling/reordering
- What are the disadvantages of the above three?
- Any other way to prevent dispatch stalls? Actually, you have briefly seen the basic idea before. Dataflow: fetch and “fire” an instruction when its inputs are ready.
- Problem: in-order dispatch (issue, execution)
- Solution: out-of-order dispatch (issue, execution)

Terminology
- Issue vs. dispatch
- Scheduling
- Execution, completion, retirement/commit
- Graduation
- Out-of-order execution versus superscalar processing

Out-of-order Execution (Dynamic Scheduling)
- Idea: Move the dependent instructions out of the way of independent ones.
- Rest areas for dependent instructions: reservation stations.
- Monitor the source “values” of each instruction in the resting area. When all source “values” of an instruction are available, “fire” (i.e., dispatch) the instruction. Instructions dispatched in dataflow (not control-flow) order.
- Benefit: Latency tolerance: allows independent instructions to execute and complete in the presence of a long-latency operation.
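
A minimal Python sketch of the “rest area” idea (hypothetical entry layout and names): each reservation station entry tracks whether its source values are available, and any entry whose sources are all ready fires, regardless of program order.

    class RSEntry:
        def __init__(self, op, srcs):
            self.op = op
            self.srcs = dict(srcs)        # source name -> value, or None if still being produced

        def ready(self):
            return all(v is not None for v in self.srcs.values())

    def select_and_fire(stations):
        """Dispatch every ready entry (dataflow order, not program order)."""
        fired = [e for e in stations if e.ready()]
        for e in fired:
            stations.remove(e)
        return [e.op for e in fired]

    rs = [RSEntry("ADD R3 <- R3, R1", {"R3": None, "R1": 5}),   # waits for the long-latency R3
          RSEntry("ADD R1 <- R6, R7", {"R6": 1, "R7": 2}),      # independent: fires now
          RSEntry("IMUL R3 <- R6, R8", {"R6": 1, "R8": 3})]     # independent: fires now
    print(select_and_fire(rs))   # ['ADD R1 <- R6, R7', 'IMUL R3 <- R6, R8']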

In-order vs. Out-of-order Dispatch
For the code sequence

    IMUL R3 ← R1, R2
    ADD  R3 ← R3, R1
    ADD  R1 ← R6, R7
    IMUL R3 ← R6, R8
    ADD  R7 ← R3, R9

[Pipeline timing diagrams omitted: with in-order dispatch, the dependent ADDs force STALL/WAIT cycles that also hold back the independent instructions; with out-of-order dispatch (Tomasulo + precise exceptions), only the dependent instructions wait. In-order dispatch takes 16 cycles vs. 12 cycles for Tomasulo + precise exceptions.]

Enabling OoO Execution
1. Need to link the consumer of a value to the producer. Register renaming: associate a “tag” with each data value.
2. Need to buffer instructions until they are ready. Insert instruction into reservation stations after renaming.
3. Instructions need to keep track of readiness of source values. Broadcast the “tag” when the value is produced. Instructions compare their “source tags” to the broadcast tag → if match, source value becomes ready.
4. When all source values of an instruction are ready, dispatch the instruction to a functional unit (FU).
- What if more instructions become ready than available FUs?
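
A minimal Python sketch of steps 1 through 4 (hypothetical names): each source holds either a value or a tag; when a producer finishes, its tag and value are broadcast, matching sources capture the value, and an instruction whose sources are all ready can be dispatched. If more instructions become ready than there are FUs, a scheduler would have to select a subset each cycle.

    class Source:
        def __init__(self, tag=None, value=None):
            self.tag, self.value = tag, value      # exactly one of these is meaningful

        def snoop(self, bcast_tag, bcast_value):
            if self.tag is not None and self.tag == bcast_tag:
                self.tag, self.value = None, bcast_value   # tag match: source becomes ready

        def ready(self):
            return self.tag is None

    class RS:
        def __init__(self, op, src1, src2):
            self.op, self.src1, self.src2 = op, src1, src2

        def snoop(self, tag, value):
            self.src1.snoop(tag, value)
            self.src2.snoop(tag, value)

        def ready(self):
            return self.src1.ready() and self.src2.ready()

    # The ADD waits on tag S0 (the producing IMUL's reservation station) for one source.
    add = RS("ADD", Source(tag="S0"), Source(value=5))
    print(add.ready())                   # False: still waiting on the producer
    add.snoop("S0", 12)                  # producer broadcasts tag S0 with its result
    print(add.ready(), add.src1.value)   # True 12: ready to dispatch to an FU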

Tomasulo’s Algorithm
- OoO with register renaming, invented by Robert Tomasulo. Used in the IBM 360/91 Floating Point Units.
- Read: Tomasulo, “An Efficient Algorithm for Exploiting Multiple Arithmetic Units,” IBM Journal of R&D, Jan. 1967.
- Variants of it used in most high-performance processors: most notably Pentium Pro, Pentium M, Intel Core(2), AMD K series, Alpha 21264, MIPS R10000, IBM POWER5.
- What is the major difference today? Precise exceptions: the IBM 360/91 did NOT have this.
  - Patt, Hwu, Shebanow, “HPS, a new microarchitecture: rationale and introduction,” MICRO 1985.
  - Patt et al., “Critical issues regarding HPS, a high performance microarchitecture,” MICRO 1985.

Two Humps in a Modern Pipeline
- Hump 1: Reservation stations (scheduling window)
- Hump 2: Reordering (reorder buffer, aka instruction window or active window)
[Pipeline diagram omitted: F and D in order, then the scheduler (reservation stations) feeding integer add, integer multiply, FP multiply, and load/store units out of order, then reordering (reorder buffer) and W in order; results are carried on a TAG and VALUE broadcast bus.]

General Organization of an OOO Processor
[Figure omitted: from Smith and Sohi, “The Microarchitecture of Superscalar Processors,” Proc. IEEE, Dec. 1995.]

Tomasulo’s Machine: IBM 360/91
[Block diagram omitted: the FP registers are fed from the instruction unit; load buffers receive data from memory and store buffers send data to memory; an operation bus feeds the reservation stations in front of the two FP functional units; results return on the common data bus.]

Register Renaming
- Output and anti dependencies are not true dependencies. WHY? The same register refers to values that have nothing to do with each other. They exist because there are not enough register IDs (i.e., names) in the ISA.
- The register ID is renamed to the reservation station entry that will hold the register’s value.
  - Register ID → RS entry ID
  - Architectural register ID → Physical register ID
  - After renaming, the RS entry ID is used to refer to the register.
- This eliminates anti- and output-dependencies. It approximates the performance effect of a large number of registers even though the ISA has a small number.
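
A minimal Python sketch of renaming (hypothetical names): every write to an architectural register gets a fresh tag (here an RS entry ID S0, S1, ...), and sources are translated through the rename table, so anti- and output dependences disappear while true dependences are preserved.

    import itertools

    def rename(program):
        """program: list of (dest, [srcs]) using architectural names.
        Returns the same program with each destination renamed to a fresh tag."""
        fresh = (f"S{i}" for i in itertools.count())
        rat = {}                                       # arch reg -> tag of its latest writer
        renamed = []
        for dest, srcs in program:
            new_srcs = [rat.get(s, s) for s in srcs]   # true dependences follow the table
            tag = next(fresh)
            rat[dest] = tag                            # a fresh name per write kills WAW/WAR
            renamed.append((tag, new_srcs))
        return renamed

    code = [("R3", ["R1", "R2"]),    # IMUL R3 <- R1, R2
            ("R3", ["R3", "R1"]),    # ADD  R3 <- R3, R1   (true dep on the first IMUL)
            ("R1", ["R6", "R7"]),    # ADD  R1 <- R6, R7   (anti dep on R1 vanishes)
            ("R3", ["R6", "R8"]),    # IMUL R3 <- R6, R8   (output dep on R3 vanishes)
            ("R7", ["R3", "R9"])]    # ADD  R7 <- R3, R9   (true dep on the second IMUL)
    for dest, srcs in rename(code):
        print(dest, "<-", srcs)
    # S0 <- ['R1', 'R2'] ; S1 <- ['S0', 'R1'] ; S2 <- ['R6', 'R7'] ;
    # S3 <- ['R6', 'R8'] ; S4 <- ['S3', 'R9']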

Tomasulo’s Algorithm: Renaming
Register rename table (register alias table). Initially, every architectural register is valid (its value is in the register file) and has no tag, since there is no in-flight producer:

    reg   tag   value   valid?
    R0     -     V0       1
    R1     -     V1       1
    R2     -     V2       1
    R3     -     V3       1
    R4     -     V4       1
    R5     -     V5       1
    R6     -     V6       1
    R7     -     V7       1
    R8     -     V8       1
    R9     -     V9       1
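
One RAT entry from the table above could be represented as follows; this is a minimal sketch with hypothetical field names.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RATEntry:
        tag: Optional[str] = None   # reservation station ID of the latest in-flight writer
        value: int = 0              # architectural value, meaningful only when valid
        valid: bool = True          # True: value holds the data; False: wait for 'tag'

    rat = {f"R{i}": RATEntry(value=i * 10) for i in range(10)}
    rat["R3"] = RATEntry(tag="S0", valid=False)   # R3 is now renamed to reservation station S0
    print(rat["R3"])                              # RATEntry(tag='S0', value=0, valid=False)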

Tomasulo’s Algorithm
- If a reservation station is available before renaming: the instruction + renamed operands (source value/tag) are inserted into the reservation station. Only rename if a reservation station is available. Else stall.
- While in a reservation station, each instruction:
  - Watches the common data bus (CDB) for the tags of its sources.
  - When a tag is seen, grabs the value for that source and keeps it in the reservation station.
  - When both operands are available, the instruction is ready to be dispatched.
- Dispatch the instruction to a functional unit when the instruction is ready.
- After the instruction finishes in the functional unit:
  - Arbitrate for the CDB.
  - Put the tagged value onto the CDB (tag broadcast).
  - The register file is connected to the CDB. A register contains a tag indicating the latest writer to the register. If the tag in the register file matches the broadcast tag, write the broadcast value into the register (and set the valid bit).
  - Reclaim the rename tag: no valid copy of the tag remains in the system!
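
The whole flow above, condensed into a minimal single-issue Python sketch (hypothetical structure; FU latencies, store handling, stalling when no RS is free, and tag reclamation are all ignored): issue renames into a reservation station, waiting entries snoop the CDB, a ready entry fires, and its tag broadcast updates both waiting entries and the register alias table.

    import itertools

    class Tomasulo:
        """Toy single-issue core: one shared FU, one CDB broadcast per cycle() call."""
        OPS = {"ADD": lambda a, b: a + b, "IMUL": lambda a, b: a * b}

        def __init__(self, regs):
            self.rat = {r: {"tag": None, "value": v} for r, v in regs.items()}
            self.rs = []                                  # reservation stations
            self.newtag = (f"S{i}" for i in itertools.count())

        def _read(self, reg):
            e = self.rat[reg]                             # value if valid, else producer's tag
            return ("val", e["value"]) if e["tag"] is None else ("tag", e["tag"])

        def issue(self, op, dest, src1, src2):
            tag = next(self.newtag)                       # rename: dest -> this RS entry
            self.rs.append({"tag": tag, "op": op,
                            "src1": self._read(src1), "src2": self._read(src2)})
            self.rat[dest]["tag"] = tag                   # record the latest writer of dest
            return tag

        def cycle(self):
            # Dispatch the first ready entry to the FU and broadcast its result on the CDB.
            for e in self.rs:
                if e["src1"][0] == "val" and e["src2"][0] == "val":
                    result = self.OPS[e["op"]](e["src1"][1], e["src2"][1])
                    self.rs.remove(e)
                    self._broadcast(e["tag"], result)
                    return e["tag"], result
            return None                                   # nothing ready this cycle

        def _broadcast(self, tag, value):
            for e in self.rs:                             # waiting instructions grab the value
                for s in ("src1", "src2"):
                    if e[s] == ("tag", tag):
                        e[s] = ("val", value)
            for r in self.rat.values():                   # the register file snoops the CDB too
                if r["tag"] == tag:
                    r["tag"], r["value"] = None, value

    core = Tomasulo({f"R{i}": i for i in range(10)})
    core.issue("IMUL", "R3", "R1", "R2")   # tagged S0
    core.issue("ADD",  "R3", "R3", "R1")   # tagged S1, waits on S0
    core.issue("ADD",  "R1", "R6", "R7")   # tagged S2, independent
    print(core.cycle())                    # ('S0', 2)  -- the IMUL fires and broadcasts S0
    print(core.cycle())                    # ('S1', 3)  -- the dependent ADD woke up on the CDB
    print(core.cycle())                    # ('S2', 13) -- the independent ADD
    print(core.rat["R3"], core.rat["R1"])  # R3 ends up 3 (from S1), R1 ends up 13 (from S2)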

The Remaining Slides Are Not Covered and Are for Your Benefit for the Next Lecture

Register Renaming and OoO Execution
- Architectural registers are dynamically renamed and mapped to reservation station entries (tag, value, valid?).
[Animated worked example omitted: the five-instruction sequence (IMUL R3 ← R1, R2; ADD R3 ← R3, R1; ADD R1 ← R6, R7; IMUL R3 ← R6, R8; ADD R7 ← R3, R9) is renamed to reservation station tags S0 through S4. The slides step through the register alias table and reservation station contents as each tag (S0, S1, S2, S3) is broadcast, dependent sources pick up the broadcast values, and entries move through “Completed --- Wait for Retirement” to “Retired --- Entry Deallocated”.]

Summary of OOO Execution Concepts
- Renaming eliminates false dependencies.
- Tag broadcast enables value communication between instructions → dataflow.
- An out-of-order engine dynamically builds the dataflow graph of a piece of the program. Which piece? Limited to the instruction window.
- Can we do it for the whole program? Why would we like to? How can we have a large instruction window efficiently?
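
A minimal Python sketch of “builds the dataflow graph of a piece of the program” (hypothetical names): for the instructions currently in a limited-size window, an edge is drawn from each value’s latest in-window producer to its consumers; producers outside the window are simply not seen.

    def dataflow_graph(window):
        """window: list of (dest, [srcs]) in program order (a limited-size window).
        Returns edges (producer_index, consumer_index) for true dependences only."""
        last_writer = {}                          # register -> index of its latest producer
        edges = []
        for i, (dest, srcs) in enumerate(window):
            for s in srcs:
                if s in last_writer:              # producer is inside the window
                    edges.append((last_writer[s], i))
            last_writer[dest] = i
        return edges

    window = [("R3", ["R1", "R2"]),   # 0: IMUL R3 <- R1, R2
              ("R3", ["R3", "R1"]),   # 1: ADD  R3 <- R3, R1
              ("R1", ["R6", "R7"]),   # 2: ADD  R1 <- R6, R7
              ("R3", ["R6", "R8"]),   # 3: IMUL R3 <- R6, R8
              ("R7", ["R3", "R9"])]   # 4: ADD  R7 <- R3, R9
    print(dataflow_graph(window))     # [(0, 1), (3, 4)]: the only true dependences in this window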