1 A Superscalar Pipeline [Adapted from Computer Organization and Design, Patterson & Hennessy, © 2005, and Instruction Issue Logic, IEEE TC, 39:3, Sohi, 1990]

2 Extracting More Performance
To achieve high performance, we need both machine parallelism and instruction-level parallelism (ILP), via
– Superpipelining
– Static multiple-issue (VLIW)
– Dynamic multiple-issue (superscalar)
A processor's instruction issue and completion policies impact the available ILP
– In-order issue with in-order completion
– In-order issue with out-of-order completion: creates output dependencies (write before write)
– Out-of-order issue with out-of-order completion: also creates antidependencies (write before read)
– Out-of-order issue with out-of-order completion and in-order commit
Register renaming can solve these storage dependencies

3-4 Speedup Measurements
The speedup of the SS processor is
  speedup = s = (# scalar cycles) / (# superscalar cycles)
which assumes the scalar and superscalar machines have the same IC & CR.
To compute average speedup performance we can use
– Arithmetic mean: AM = (1/n) * Σ(i=1..n) s_i
– Harmonic mean: HM = n / Σ(i=1..n) (1/s_i) – assigns a larger weighting to the programs with the smallest speedup
– EX: two programs with the same scalar cycles, with an SS speedup of 2 for program 1 and 25 for program 2:
  AM = ½ * (2 + 25) = 13.5
  HM = 2 / (1/2 + 1/25) = 2 / 0.54 = 3.7
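
A quick way to check the arithmetic above – a minimal C sketch (not from the slides) computing the AM and HM for the two-program example:

#include <stdio.h>

static double arith_mean(const double *s, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += s[i];
    return sum / n;
}

static double harm_mean(const double *s, int n) {
    double inv = 0.0;
    for (int i = 0; i < n; i++) inv += 1.0 / s[i];
    return n / inv;
}

int main(void) {
    double s[] = { 2.0, 25.0 };              /* per-program SS speedups */
    printf("AM = %.2f\n", arith_mean(s, 2)); /* prints 13.50            */
    printf("HM = %.2f\n", harm_mean(s, 2));  /* 2 / 0.54 -> 3.70        */
    return 0;
}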

5 Maximum (Theoretical) SS Speedups
The highest speedup that can be achieved with "ideal" machine parallelism (ignoring resource conflicts, storage dependencies, and procedural dependencies)
– An HM of 5.4 is the highest average speedup for these benchmarks that can be achieved even with ideal machine parallelism!
[Per-benchmark speedup figure from Johnson, 1992]

6 Baseline Superscalar MIPS Processor Model
[Block diagram: Fetch (I$, BHT, BTB, PC) → Decode & Dispatch → RUU (Register Update Unit, managed as a queue with RUU_Head/RUU_Tail) and LSQ (Load/Store Queue) → Issue & Execute on the FUs (IALU, IALU, IMULT, FPALU, D$) → Writeback on the Result Bus → Commit to the Integer and FP RegFiles, whose registers carry LI (latest instance) and NI (number of instances) fields.]

7 Typical Functional Unit Latencies

                     Issue Latency   Result Latency
  Integer ALU              1                1
  Integer multiply         1                2
  Load (on hit)            1                1
  Load (on miss)           1               40
  Store                    1              n/a
  FltPt Add                1                2
  FltPt Multiply           1                4
  FltPt Divide             1                2
  FltPt Convert            1                2

Result latency – the number of cycles taken by a functional unit (FU) to produce a result
Issue latency – the minimum number of cycles between the issuing of an instruction to a FU and the issuing of the next instruction to the same FU
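
In a simulator, the table above might be encoded as a lookup array. This sketch is in the spirit of SimpleScalar's fu_config[], but the struct, names, and values here mirror the slide's table rather than the simulator's actual definitions:

/* Hypothetical FU latency table; illustrative only. */
typedef struct {
    const char *name;
    int issue_lat;   /* min cycles between issues to the same FU */
    int result_lat;  /* cycles for the FU to produce a result; 0 = n/a */
} fu_lat_t;

static const fu_lat_t fu_lat[] = {
    { "IntALU",    1,  1 },
    { "IntMult",   1,  2 },
    { "LoadHit",   1,  1 },
    { "LoadMiss",  1, 40 },
    { "Store",     1,  0 },   /* result latency n/a */
    { "FPAdd",     1,  2 },
    { "FPMult",    1,  4 },
    { "FPDiv",     1,  2 },
    { "FPConvert", 1,  2 },
};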

8 Additional RegFile Fields
Each register in the general-purpose RegFile has two associated n-bit counters (n = 3 is typical):
– NI (number of instances) – the number of instances of the register as a destination register in the RUU
– LI (latest instance) – the number of the latest instance
When an instruction with destination register address Ri is dispatched to the RUU, both its NI and LI are incremented
– Dispatch is blocked if a destination register's NI is 2^n − 1, so only up to 2^n − 1 instances of a register can be present in the RUU at any one time
When an instruction is committed (updates the Ri value), the associated NI is decremented
– When NI = 0 the register is "free" (there are no instructions in the RUU that are going to write to that register) and LI is cleared
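
A minimal C sketch of this NI/LI bookkeeping, assuming n = 3 and a 32-entry RegFile (all names are illustrative):

#include <stdbool.h>
#include <stdint.h>

#define N_BITS   3
#define MAX_INST ((1u << N_BITS) - 1)    /* up to 2^n - 1 instances */

typedef struct { uint8_t ni, li; } reg_state_t;

static reg_state_t regs[32];

/* Returns false (dispatch must stall) if Ri already has 2^n - 1 instances. */
static bool dispatch_dest(int ri) {
    if (regs[ri].ni == MAX_INST) return false;   /* block dispatch          */
    regs[ri].ni++;                               /* one more pending write  */
    regs[ri].li = (regs[ri].li + 1) & MAX_INST;  /* new latest instance     */
    return true;
}

static void commit_dest(int ri) {
    regs[ri].ni--;               /* one pending write retired        */
    if (regs[ri].ni == 0)
        regs[ri].li = 0;         /* register is "free"; clear its LI */
}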

9 Register Update Unit (RUU)
A hardware data structure that is used to resolve data dependencies by keeping track of an instruction's data and execution needs, and that commits completed instructions in program order.
An entry in the RUU:

  src operand 1:    Ready (Yes/No), Tag, Content
  src operand 2:    Ready (Yes/No), Tag, Content
  destination:      Tag, Content
  issued:           Yes/No
  executed:         Yes/No
  functional unit:  Unit Number
  PC:               Instr Addr
  spec:             speculative

  Tag = RegFile addr || LI
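
One possible C rendering of such an entry (field names and widths are assumptions for illustration, not Sohi's exact layout):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     ready;      /* source operand available?                    */
    uint16_t tag;        /* RegFile addr || LI, matched vs. Result Bus   */
    uint32_t content;    /* operand value once ready                     */
} ruu_src_t;

typedef struct {
    ruu_src_t src1, src2;   /* source operands 1 and 2                   */
    uint16_t  dst_tag;      /* destination RegFile addr || LI            */
    uint32_t  dst_content;  /* result value after execution              */
    bool      issued;       /* sent to a functional unit?                */
    bool      executed;     /* result written back?                      */
    uint8_t   fu;           /* functional unit number needed             */
    uint32_t  pc;           /* instruction address                       */
    bool      spec;         /* speculatively fetched?                    */
} ruu_entry_t;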

10 Basic Instruction Flow Overview
Fetch (in program order): fetch multiple instructions in parallel from the I$
Decode & Dispatch (in program order):
– In parallel, decode the instr's just fetched and schedule them for execution by dispatching them to the RUU
– Loads and stores are dispatched as two (micro)instr's – one to compute the effective addr and one to do the memory operation
Issue & Execute (out of program order): as soon as the RUU has the instr's source data and the FU is free, the instr's are issued to the FU for execution
Writeback (out of program order): when done, the FU puts its result on the Result Bus, which allows the RUU and the LSQ to be updated – the instr completes
Commit (in program order): when appropriate, commit the instr's result data to the state locations (i.e., update the D$ and RegFile)
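
As a sketch of how these stages fit together in software, a per-cycle loop in the style of SimpleScalar's sim-outorder (whose stage functions are described near the end of these slides) walks the stages back-to-front, so that a result produced in one cycle is consumed no earlier than the next; the function bodies are assumed:

extern void ruu_commit(void), ruu_writeback(void), lsq_refresh(void);
extern void ruu_issue(void), ruu_dispatch(void), ruu_fetch(void);

void sim_cycle(void) {
    ruu_commit();     /* in order: retire executed head entries       */
    ruu_writeback();  /* out of order: drain FU results to the RUU    */
    lsq_refresh();    /* re-scan the LSQ for newly ready loads/stores */
    ruu_issue();      /* out of order: start ready instr's on FUs     */
    ruu_dispatch();   /* in order: decode into RUU/LSQ entries        */
    ruu_fetch();      /* in order: pull instr's from the I$           */
}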

11 Managing the RUU as a Queue
By managing the RUU as a queue and committing instructions from RUU_Head, instructions are committed (aka retired) in the order they were received from the Decode & Dispatch logic (in program order)
– Stores to state locations (RegFile and D$) are buffered (in the RUU and LSQ) until commit time
– This supports precise interrupts (the only state locations updated are those written by instructions before the interrupting instr)
The counter (LI) allows multiple instances of a specific destination register to exist in the RUU at the same time via register renaming
– This solves write-before-write hazards if results from the RUU are returned to the RegFile in program order – and managing the RUU as a queue and committing from the head of the queue provides just this!

12 Major Functions of the RUU
1. Accepts new instructions from the Decode & Dispatch logic
2. Monitors the Result Bus to resolve true dependencies and to write result data back to the RUU
3. Determines which instructions are ready for execution, reserves the Result Bus, and issues the instructions to the appropriate FUs
4. Determines if an instruction can commit (i.e., change the machine state) and commits the instruction if appropriate
Each of these tasks is done in parallel every cycle

13 The First Function of the RUU
1. Accepts new instructions from the Decode & Dispatch logic – for each instruction in the fetch packet:
– The dispatch logic gets an entry in the RUU (a circular queue)
  - The RUU_Tail entry (currently empty) is allocated to the instruction and RUU_Tail is updated
  - Then, if RUU_Head = RUU_Tail, the RUU is full and further instruction fetch stalls until RUU_Head advances (as a result of a commit)
– For each source operand, if the contents of the source register are available, they are copied to the source Content field of the RUU entry and its Ready bit is set. If not, the source RegFile addr || LI is copied to the source Tag field and the Ready bit is reset.
– For the destination operand, the RegFile destination addr || LI is copied to the allocated RUU destination Tag field
– The issued and executed bits are set to No, the number of the FU needed for the operation is entered, and the PC address of the instruction is copied to the PC Address field
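
A C sketch of this dispatch step, reusing a hypothetical entry layout like the slide-9 sketch (only one source operand is shown, and for full/empty detection this version keeps one queue slot unused instead of advancing RUU_Tail first as the slide does):

#include <stdbool.h>
#include <stdint.h>

#define RUU_SIZE 16

typedef struct { bool ready; uint16_t tag; uint32_t content; } src_t;
typedef struct {
    src_t    src1, src2;
    uint16_t dst_tag;        /* RegFile addr || LI      */
    uint32_t dst_content;
    bool     issued, executed;
    uint8_t  fu;             /* FU number needed        */
    uint32_t pc;
} entry_t;

static entry_t ruu[RUU_SIZE];
static int ruu_head, ruu_tail;   /* circular queue indices */

static uint16_t make_tag(int reg, int li) { return (uint16_t)(reg << 3 | li); }

/* Dispatch one instruction; returns false if the RUU is full (stall fetch). */
bool ruu_dispatch_one(int rs, int rs_li, bool rs_avail, uint32_t rs_val,
                      int rd, int rd_li, uint8_t fu, uint32_t pc)
{
    int next = (ruu_tail + 1) % RUU_SIZE;
    if (next == ruu_head) return false;        /* full: stall further fetch */
    entry_t *e = &ruu[ruu_tail];
    if (rs_avail) {                            /* source value available?   */
        e->src1.ready = true;  e->src1.content = rs_val;
    } else {                                   /* no: remember its tag      */
        e->src1.ready = false; e->src1.tag = make_tag(rs, rs_li);
    }
    e->src2.ready = true;                      /* second operand elided     */
    e->dst_tag    = make_tag(rd, rd_li);       /* destination addr || LI    */
    e->issued     = false;
    e->executed   = false;
    e->fu = fu; e->pc = pc;
    ruu_tail = next;
    return true;
}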

14 Aside: Content Addressable Memories (CAMs)
Memories that are addressed by their content. Typical applications include the RUU source Tag field comparison logic, cache tags, and translation lookaside buffers.
[Diagram: Search Data is compared against the Match Field of every word; on a hit, the matching word's Data Field drives Match Data.]
– The memory hardware compares the Search Data to the Match Field entries for each word in the CAM in parallel!
– On a match, the Data Field for that entry is output to Match Data on a read (or Match Data is written into the Data Field on a write) and the Hit bit is set.
– If no match occurs, the Hit bit is reset.
– CAMs can be designed to accommodate multiple hits.
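
A sequential C model of the CAM read (in hardware every Match Field is compared in parallel; the loop is only a stand-in, and all names are invented):

#include <stdbool.h>
#include <stdint.h>

#define CAM_WORDS 16

typedef struct { uint16_t match; uint32_t data; bool valid; } cam_word_t;

static cam_word_t cam[CAM_WORDS];

/* Read: returns true (Hit) and sets *out if some Match Field equals key. */
bool cam_read(uint16_t key, uint32_t *out) {
    for (int i = 0; i < CAM_WORDS; i++) {
        if (cam[i].valid && cam[i].match == key) {
            *out = cam[i].data;   /* Match Data            */
            return true;          /* Hit bit set           */
        }
    }
    return false;                 /* no match: Hit bit reset */
}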

15 The Second Function of the RUU
2. Monitors the Result Bus to resolve true dependencies and to write result data back to the RUU
– The Result Bus destination addr || LI is compared associatively to the source Tag fields (for those source operands that are not Ready). If a match occurs, the data on the Result Bus is gated into the Content field of the matching source operands.
  - The Result Bus carries the result data, its RUU entry address, and its RegFile destination addr || LI
– The result data is gated into the destination Content field of the RUU entry that matches the RUU entry addr on the Result Bus
– The executed bit is set to Yes
Resolves true dependencies through the Ready bit (i.e., an instr must wait for its source operands before issue)
Solves antidependencies through LI (making sure that the source fields get updated only for the correct version of the data)
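
A C sketch of this Result Bus snoop, continuing the hypothetical types and globals of the dispatch sketch above:

typedef struct {
    uint32_t data;       /* result value            */
    int      ruu_index;  /* producing RUU entry     */
    uint16_t dst_tag;    /* RegFile addr || LI      */
} result_bus_t;

void ruu_writeback_one(const result_bus_t *rb)
{
    /* wake up waiting sources whose Tag matches (associative in hardware) */
    for (int i = ruu_head; i != ruu_tail; i = (i + 1) % RUU_SIZE) {
        entry_t *e = &ruu[i];
        if (!e->src1.ready && e->src1.tag == rb->dst_tag) {
            e->src1.ready = true; e->src1.content = rb->data;
        }
        if (!e->src2.ready && e->src2.tag == rb->dst_tag) {
            e->src2.ready = true; e->src2.content = rb->data;
        }
    }
    /* gate the result into the producing entry itself and mark executed */
    ruu[rb->ruu_index].dst_content = rb->data;
    ruu[rb->ruu_index].executed = true;
}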

16 The Third Function of the RUU
3. Determines which instructions are ready for execution, reserves the Result Bus, and issues the ready instructions to the appropriate FUs for execution
– When both source operands of an RUU entry are Ready, the RUU issues the highest-priority instruction – priority is given to load and store instr's and then to the instr's that entered the RUU earliest (i.e., the ones closest to RUU_Head)
– Reserves the Result Bus
– Issues the instruction (sends to the FU the source operands, the RUU entry addr, and the destination's RegFile addr || LI) and sets the issued bit to Yes
Multiple instructions can be issued in parallel if they are ready, if they can reserve the Result Bus, and if they are destined for different FUs
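
A C sketch of the issue scan, again continuing the dispatch sketch; the slide's load/store priority is omitted, and the external helpers are assumed:

extern bool fu_free(uint8_t fu);            /* assumed: FU availability   */
extern bool reserve_result_bus(void);       /* assumed: bus reservation   */
extern void send_to_fu(entry_t *e, int i);  /* assumed: start execution   */

void ruu_issue_cycle(void)
{
    /* scan oldest-first so entries nearest RUU_Head get priority */
    for (int i = ruu_head; i != ruu_tail; i = (i + 1) % RUU_SIZE) {
        entry_t *e = &ruu[i];
        if (e->issued || !e->src1.ready || !e->src2.ready) continue;
        if (!fu_free(e->fu) || !reserve_result_bus()) continue;
        send_to_fu(e, i);   /* operands, RUU entry addr, dst addr || LI */
        e->issued = true;
    }
}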

17 The Fourth Function of the RUU
4. Determines if an instruction can commit (i.e., change the machine state) and commits the instruction if appropriate
– Monitors the executed bit of the RUU_Head entry. If the bit is set, the destination Content data is written into the RegFile at the destination's RegFile address
– Matches the destination RegFile addr || LI against the RUU's source Tag fields and, on a match, copies the destination Content data into the source Content fields
– Decrements the associated RegFile entry's NI counter
– Releases the RUU entry by incrementing the RUU_Head pointer
Solves output dependencies by writing to the RegFile in program order
Multiple instructions can commit in parallel if they are ready to commit, if they are writing to different RegFile registers, and if there are multiple RegFile write ports
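
A C sketch of in-order commit from RUU_Head, continuing the same hypothetical types; the forwarding of the committed value to still-waiting source fields is modeled with a scan, and the RegFile/NI helpers are assumed:

extern void regfile_write(uint16_t dst_tag, uint32_t value); /* assumed */
extern void ni_decrement(uint16_t dst_tag);                  /* assumed */

void ruu_commit_cycle(void)
{
    /* retire executed entries from the head, in program order */
    while (ruu_head != ruu_tail && ruu[ruu_head].executed) {
        entry_t *e = &ruu[ruu_head];
        regfile_write(e->dst_tag, e->dst_content);  /* update machine state */
        /* copy the committed value into sources still waiting on its tag */
        for (int i = ruu_head; i != ruu_tail; i = (i + 1) % RUU_SIZE) {
            if (!ruu[i].src1.ready && ruu[i].src1.tag == e->dst_tag) {
                ruu[i].src1.ready = true; ruu[i].src1.content = e->dst_content;
            }
            if (!ruu[i].src2.ready && ruu[i].src2.tag == e->dst_tag) {
                ruu[i].src2.ready = true; ruu[i].src2.content = e->dst_content;
            }
        }
        ni_decrement(e->dst_tag);              /* one instance retired */
        ruu_head = (ruu_head + 1) % RUU_SIZE;  /* release the entry    */
    }
}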

18 Timing of RUU Activities
[Timing diagram: within one cycle, instr's are decoded and dispatched to the RUU; instr's completed in the previous cycle are committed if appropriate; results of completed instr's on the FU outputs / Result Bus are written back to the RUU; and newly ready instr's are issued to the FUs for execution in the next cycle, while the FUs execute previously issued instructions.]

19-23 Baseline Superscalar MIPS Processor Model (worked example)
[Five animation frames stepping the code sequence
  mul  $1,$1,$2
  addi $1,$1,1
  addi $3,$2,1
  add  $2,$3,$1
through the pipeline diagram cycle by cycle: the instr's are fetched and dispatched to the RUU (updating the RegFile NI/LI counters and allocating Tag fields, with the LSQ idle); the mul issues and executes (3 * 6); its result (18) appears on the Result Bus, is written back to the RUU, and is forwarded to the dependent addi (18 + 1 = 19), while the independent addi computes 6 + 1 = 7; the entries then commit in program order from RUU_Head.]

24 RUU with Bypass Logic
Consider an instruction that is to be added to the RUU whose source operand(s) have already been computed (i.e., exist somewhere in the RUU) but have not yet been committed to the RegFile
– The source Tag field comparison logic must also monitor the RUU-to-RegFile bus (in addition to the Result Bus)
With result bypass logic, we can increase the number of instr's ready to execute by doing an associative comparison of source operand Tags with the dst Tag of each RUU entry at instr Dispatch
[Table from Sohi, 1990: speedup and issue rate vs. # of RUU entries, without and with bypass logic]

25 MicroOperations of Load and Store
Recall that loads and stores are dispatched to the RUU as two (micro)instr's – one to compute the effective addr and one to do the memory operation
– Load: lw R1,2(R2) becomes addi R0,R2,2 ; lw R1,R0
– Store: sw R1,6(R2) becomes addi R0,R2,6 ; sw R1,R0
At the same time an LSQ entry is allocated
– Each LSQ entry consists of a Tag field (RegFile addr || LI) and a Content field. The LI counter allows for multiple instances of stores (writes) to a memory address
– When a load completes (the D$ returns the data on the Result Bus) or a store commits (in program order), the LSQ entry is released
– Instruction dispatch is blocked if there is not a free LSQ entry and two free RUU entries
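
A small C sketch of this micro-op cracking (the uop encoding and helper names are invented for illustration):

typedef enum { UOP_ADDI, UOP_LW, UOP_SW } uop_kind_t;
typedef struct { uop_kind_t kind; int rd, rs, imm; } uop_t;

/* lw rt, imm(base)  ->  addi R0, base, imm ; lw rt, R0 */
static void crack_load(int rt, int base, int imm, uop_t out[2])
{
    out[0] = (uop_t){ UOP_ADDI, 0,  base, imm }; /* effective addr into R0 */
    out[1] = (uop_t){ UOP_LW,   rt, 0,    0   }; /* memory op uses R0      */
}

/* sw rt, imm(base)  ->  addi R0, base, imm ; sw rt, R0 */
static void crack_store(int rt, int base, int imm, uop_t out[2])
{
    out[0] = (uop_t){ UOP_ADDI, 0,  base, imm };
    out[1] = (uop_t){ UOP_SW,   rt, 0,    0   };
}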

26 Load & Store RUU & LSQ Operation
[Diagram: the RUU (with RUU_Head/RUU_Tail) and the LSQ (with LSQ_Head/LSQ_Tail) holding the cracked micro-op pairs – Load: addi R0,R2,2 ; lw R1,R0 and Store: addi R0,R2,6 ; sw R1,R0 – with the Result Bus updating both structures, the RegFile NI/LI counters tracking destination instances, and stores draining to the D$ at commit.]

27 Memory Location Data Dependencies
We can have true, anti-, and output dependencies on memory locations as well
– Memory storage conflicts are infrequent, since memory locations are not reused in the same way that registers are and since the number of true dependencies is small (loads and stores are less than 30% of the instruction mix)
Performance improves if loads are allowed to bypass previous stores (load bypassing) – but this is possible only if the memory addresses of all previous stores dispatched to the RUU are known, since we must first check the load for a data dependency on previous uncommitted stores
– Facilitated by having a common load and store queue
Stores are not allowed to bypass previous loads, so there are no antidependencies between loads and stores
Stores are committed to the D$ in program order, so there can be no output dependencies between stores

28 Loads from Memory
When a load's address becomes known, the address is compared (associatively) against the entries already in the LSQ (i.e., checking for a pending operation to the same memory address)
– If the match in the LSQ is for a load, the current load does not need to be issued (or executed), since the matching pending load will bring in the data
– If the match in the LSQ is for a store, the current load does not need to be issued (or executed), since the matching pending store can directly supply the destination Content for the current load
– If there is no match, the load is issued to the LSQ and executed when the D$ is next available
When the RUU # of the load instr appears on the Result Bus (along with the memory data), the load completes by updating the RUU and releasing the LSQ entry (the RUU entry is released on load commit)
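
A C sketch of this lookup, with an invented LSQ layout; the youngest-first scan makes the most recent matching entry win, which is what slide 29's load forwarding relies on:

#include <stdbool.h>
#include <stdint.h>

#define LSQ_SIZE 8

typedef struct { bool is_store, addr_known; uint32_t addr, data; } lsq_t;
static lsq_t lsq[LSQ_SIZE];
static int lsq_head, lsq_tail;   /* circular queue indices */

typedef enum {
    LOAD_ISSUE_DCACHE,   /* no match: go to the D$             */
    LOAD_WAIT_LOAD,      /* a pending load will bring the data */
    LOAD_FWD_STORE       /* a pending store supplies the data  */
} load_action_t;

load_action_t load_lookup(uint32_t addr, uint32_t *fwd_data)
{
    if (lsq_head == lsq_tail) return LOAD_ISSUE_DCACHE;   /* LSQ empty */
    int i = (lsq_tail + LSQ_SIZE - 1) % LSQ_SIZE;         /* youngest  */
    for (;;) {
        if (lsq[i].addr_known && lsq[i].addr == addr) {
            if (lsq[i].is_store) {
                *fwd_data = lsq[i].data;   /* forward store data */
                return LOAD_FWD_STORE;
            }
            return LOAD_WAIT_LOAD;
        }
        if (i == lsq_head) break;                  /* oldest entry checked */
        i = (i + LSQ_SIZE - 1) % LSQ_SIZE;
    }
    return LOAD_ISSUE_DCACHE;
}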

29 Load Bypassing with Forwarding
Load forwarding – when a load is satisfied directly from the LSQ
– The most recent matching LSQ data entry is supplied, since the LSQ may have more than one matching entry
Load bypassing gives a 19% speedup improvement with a 4-way decoder
Load forwarding gives an additional 4% speedup improvement
From Johnson, 1992

30 Stores to Memory
When a store's address (and the store data) becomes known, the address is compared (associatively) against the entries already in the LSQ (i.e., checking for a pending operation to the same memory address)
– If the match in the LSQ is for a load, the current store is issued to the LSQ
– If the match in the LSQ is for a store, the current store is issued to the LSQ with an incremented LI
– If there is no match, the store is dispatched to the LSQ
Stores are held in the LSQ until the store is ready to commit (i.e., until its partner instr reaches RUU_Head), at which time the store is executed (i.e., the data and address are sent to the D$) and the RUU and LSQ entries are released

31 SimpleScalar Structure
sim-outorder: supports out-of-order issue and execution (with in-order commit) using a Register Update Unit (RUU)
– Uses the RUU for register renaming and to hold the results of pending instructions. The RUU (aka reorder buffer (ROB)) retires completed instructions in program order to the RegFile
– Uses an LSQ for store instructions not ready to commit and for load instructions waiting for access to the D$
– Loads are satisfied either by the memory or by an earlier store value residing in the LSQ when their addresses match
  - Loads are issued to the memory system only when the addresses of all previous loads and stores are known

32 Simulated SimpleScalar Pipeline
ruu_fetch(): fetches instr's from one I$ line, puts them in the fetch queue, probes the cache line predictor to determine the I$ line to access in the next cycle
– fetch:ifqsize : fetch width (default is 4)
– fetch:speed : ratio of the front-end speed to the execution core (that many times as many instructions are fetched as decoded per cycle)
– fetch:mplat : branch misprediction latency (default is 3)
ruu_dispatch(): decodes instr's in the fetch queue, puts them in the dispatch (scheduler) queue, enters and links instr's into the RUU and the LSQ, splits memory-access instructions into two separate instr's (one to compute the effective addr and one to access the memory), and notes branch mispredictions
– decode:width : decode width (default is 4)

33 SimpleScalar Pipeline, con't
ruu_issue() and lsq_refresh(): locate and mark the instr's ready to be issued by tracking register and memory dependencies; ready loads are issued to the D$ unless there are earlier stores in the LSQ with unresolved addresses; store values with matching addresses are forwarded to ready loads
– issue:width : maximum issue width (default is 4)
– ruu:size : RUU capacity in instr's (default is 16, min is 2)
– lsq:size : LSQ capacity in instr's (default is 8, min is 2)
ruu_issue() also handles instr execution – it collects the ready instr's from the scheduler queue (up to the issue width), checks FU availability, checks access-port availability, and schedules writeback events based on FU latency (hardcoded in fu_config[])
– res:ialu | imult | memport | fpalu | fpmult : number of FUs (defaults are 4 | 1 | 2 | 4 | 1)
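
Putting the knobs together, a typical sim-outorder invocation might look like the following; the flag names match the options listed above, while the benchmark binary test-math is just an assumed example target:

sim-outorder -fetch:ifqsize 4 -fetch:speed 1 -fetch:mplat 3 \
             -decode:width 4 -issue:width 4 \
             -ruu:size 16 -lsq:size 8 \
             -res:ialu 4 -res:imult 1 -res:memport 2 -res:fpalu 4 -res:fpmult 1 \
             test-math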

34 SimpleScalar Pipeline, con't
ruu_writeback(): determines the completed instr's, forwards data to dependent waiting instr's, detects branch mispredictions and, on a misprediction, rolls the machine state back to the checkpoint and discards erroneously issued instructions
ruu_commit(): commits results in order for completed instr's (values are copied from the RUU to the RegFile or from the LSQ to the D$), and frees the RUU/LSQ entries of committed instr's; keeps retiring instructions at the head of the RUU that are ready to commit until the head instr is one that is not ready

35 SS Pipeline
FETCH (in order): fetch multiple instructions
DECODE & DISPATCH (in order): decode instructions; 1st RUU function (RUU allocation, src operand copying (or src Tag field set), dst Tag field set)
ISSUE & EXECUTE (out of order): 3rd RUU function (when both src operands are Ready and a FU is free, schedule the Result Bus and issue the instr for execution)
WRITE BACK RESULT (out of order): 2nd RUU function (copy Result Bus data to matching src's and to the RUU dst entry)
COMMIT (in order): 4th RUU function (for the instr at RUU_Head (if executed), write the dst Contents to the RegFile)

36 Our SS Model Performance
Out-of-order issue consistently has the best performance for the benchmark programs
However, the above ignores procedural dependencies! – the topic for L12
From Johnson, 1992