EECC722 - Shaaban #1 lec # 10 Fall 2011
Data/Thread Level Speculation (TLS) in the Stanford Hydra Chip Multiprocessor (CMP)
Hydra is a 4-core Chip Multiprocessor (CMP) based micro-architecture/compiler effort at Stanford that provides hardware/software support for Data/Thread Level Speculation (TLS) to extract parallel speculated threads from sequential code (a single thread) augmented with software thread speculation handlers.
Primary Hydra papers: 4, 6
Stanford Hydra, discussed here, is one example of a TLS architecture. Other TLS architectures include:
– Wisconsin Multiscalar
– Carnegie-Mellon Stampede
– MIT M-machine
Goal of TLS architectures: increase the range of parallelizable applications/computations.

EECC722 - Shaaban #2 lec # 10 Fall 2011
Motivation for Chip Multiprocessors (CMPs)
Chip Multiprocessors (CMPs) offer implementation benefits:
– High-speed signals are localized in individual CPUs.
– A proven CPU design is replicated across the die (including SMT processors, e.g. IBM Power 5).
CMPs overcome the diminishing performance/transistor return problem (limited ILP) in single-threaded superscalar processors (a similar motivation drives SMT):
– Transistors are used today mostly for ILP extraction.
– CMPs instead use transistors to run multiple threads (exploiting thread level parallelism, TLP):
  - On parallelized (multi-threaded) programs.
  - With multi-programmed workloads (multi-tasking): a number of single-threaded applications executing on different CPUs.
Fast inter-processor communication (using the shared L2 cache) eases parallelization of code, though it is slower than logical-processor communication in SMT.
Potential drawbacks of CMPs:
– High power/heat issues with current VLSI processes, due to core duplication.
– Limited ILP/poor latency hiding within individual cores (SMT addresses this).

EECC722 - Shaaban #3 lec # 10 Fall 2011
Stanford Hydra CMP Approach Goals
– Exploit all levels of program parallelism:
  - Fine grain: within a single CPU core.
  - Coarse grain: on multiple CPU cores within a single CMP or across multiple CMPs.
  - On multiple CPU cores within a single CMP using Thread Level Speculation (TLS).
– Develop a single-chip multiprocessor architecture that simplifies microprocessor design and achieves high performance.
– Make the multiprocessor transparent to the average user.
– How? Integrate parallelizing compiler technology into the design of a microarchitecture that supports data/thread level speculation (TLS).

EECC722 - Shaaban #4 lec # 10 Fall 2011
Hydra Prototype Overview
– 4 CPU cores with modified private L1 caches.
– Speculative coprocessor (one per processor core), to support Thread-Level Speculation (TLS):
  - Speculative memory reference controller.
  - Speculative interrupt screening mechanism.
  - Statistics mechanisms for performance evaluation and to allow feedback for code tuning.
– Memory system:
  - Read and write buses.
  - Controllers for all resources.
  - On-chip shared L2 cache.
  - L2 speculation write buffers.
  - Simple off-chip main memory controller.
  - I/O and debugging interface.

EECC722 - Shaaban #5 lec # 10 Fall 2011
The Basic Hydra CMP
– 4 processor cores and a shared secondary (L2) cache on one chip.
– 2 buses connect the processors and memory.
– Cache coherence: writes are broadcast on the write bus.

EECC722 - Shaaban #6 lec # 10 Fall 2011
Hydra Memory Hierarchy Characteristics
(Table of cache/memory parameters not preserved in this transcript.) Note: L1 is write-through to L2 (not to main memory).

EECC722 - Shaaban #7 lec # 10 Fall 2011
Hydra Prototype Layout (circa 1999)
(Die-plot figure.) 250 MHz clock rate target; private split L1 caches (I-L1, D-L1) per core; shared L2 with speculation write buffers (one per core); main memory controller.

EECC722 - Shaaban #8 lec # 10 Fall 2011
CMP Parallel Performance
Varying levels of performance:
1. Multiprogrammed workloads work well.
2. Very parallel apps (matrix-based FP and multimedia, i.e. high data parallelism/LLP) are excellent.
3. Performance is acceptable only with a few of the less parallel (i.e. integer) general applications, which are normally hard to parallelize (multi-thread); these are the target applications for Thread Level Speculation (TLS).
Results given here are without TLS.

EECC722 - Shaaban #9 lec # 10 Fall 2011
The Parallelization Problem
Current automated parallelization software (parallelizing compilers) is limited:
– Parallelizing compilers are generally successful for scientific applications with statically known dependencies (e.g. dense matrix computations with high data parallelism/LLP).
– Automated parallelization of general-purpose applications yields poor parallel performance, especially for integer applications, due to ambiguous data dependencies resulting from (causes of ambiguous dependencies):
  - Significant pointer use: pointer aliasing (the pointer disambiguation problem)
  - Dynamic loop limits
  - Complex control flow
  - Irregular array accesses
  - Inter-procedural dependencies
– Outcome: ambiguous data dependencies limit extracted parallelism/performance, because they:
  - Complicate static dependency analysis
  - Introduce imprecision into dependence relations
  - Force conservative, performance-degrading synchronization to safely handle potential dependencies
– Parallelism may exist in the algorithm, but the code hides it.
Manual parallelization can provide good performance on a much wider range of applications, but:
– It requires a different initial program design/data structures/algorithms.
– It requires programmers with additional skills.
– Handling the ambiguous dependencies present in general-purpose applications may still force conservative synchronization, greatly limiting parallel performance.
Can hardware help the situation? Yes: hardware-supported Thread Level Speculation. (A C sketch of the aliasing problem follows.)
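To make the aliasing bullet concrete, here is a minimal C sketch (not from the slides; the function and variable names are illustrative) of a loop a static compiler cannot safely parallelize:

```c
/* If b == a - 1, iteration i reads a[i-1], which iteration i-1 writes:
 * a loop-carried RAW dependence. If a and b point to disjoint arrays,
 * every iteration is independent. A static compiler cannot tell which
 * case holds, so it must conservatively keep the loop sequential. */
void scale(int *a, int *b, int n) {
    for (int i = 0; i < n; i++)
        a[i] = 2 * b[i];   /* ambiguous: may or may not depend on a[i-1] */
}
```

TLS sidesteps this: the loop is speculatively parallelized anyway, and the hardware restarts iterations only when the accesses actually overlap.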

EECC722 - Shaaban #10 lec # 10 Fall 2011
Data Speculation & Thread Level Speculation (TLS)
Data speculation and Thread Level Speculation (TLS) enable parallelization without regard for data dependencies:
– The normal sequential program is broken up into parallel speculative threads (multiple speculated threads).
– The speculative threads are run in parallel on multiple physical CPUs (e.g. CMP) and/or logical CPUs (e.g. SMT).
– The approach thus assumes/speculates that no data dependencies exist among the created threads; the speculation hardware (TLS processor architecture) ensures correctness (no name/data dependence violations among the speculative threads) if dependencies actually do exist.
Parallel software implications (a possible, but limited, parallel software solution):
– Loop parallelization is now easily automated.
– Ambiguous dependencies are resolved dynamically without conservative synchronization.
– More "arbitrary" threads are possible (e.g. subroutines).
– Synchronization is added only for performance.
Thread Level Speculation (TLS) hardware support mechanisms:
– A speculative thread control mechanism (e.g. speculative thread creation, restart, termination).
– Five fundamental speculation hardware/memory system requirements for correct data/thread speculation (given later, in slide 21).

EECC722 - Shaaban #11 lec # 10 Fall 2011
Subroutine Thread Speculation
(Figure: the code following a subroutine call's return point is speculated as a thread while the subroutine itself executes.) Speculated threads communicate results through shared memory locations.

EECC722 - Shaaban #12 lec # 10 Fall 2011
Loop Iteration Speculative Threads
A simple example of a speculatively executed loop using Data/Thread Level Speculation (TLS), the most common application of TLS: the original sequential (single-thread) loop is split into speculated threads, shown here with one iteration per speculated thread; later iterations become more speculative threads. Speculated threads communicate results through shared memory locations. (A sketch of this decomposition follows.)
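A minimal sketch of the decomposition in this slide's figure, with one iteration per speculative thread; the spec_begin/spec_commit calls are hypothetical stand-ins for the compiler-inserted speculation handlers, not Hydra's actual interface:

```c
/* Hypothetical TLS runtime interface (illustration only): */
void spec_begin(int iter);   /* run iteration `iter` as a speculative thread */
void spec_commit(int iter);  /* commit once `iter` is the least speculative  */

void tls_loop(int *a, int *b, int n) {
    /* Original sequential loop: for (i = 1; i < n; i++) b[i] = a[i] + b[i-1];
     * Each iteration below conceptually runs in parallel on one of the four
     * cores; the possible loop-carried RAW through b[] is checked by the
     * speculation hardware, not by software locks. */
    for (int i = 1; i < n; i++) {
        spec_begin(i);
        b[i] = a[i] + b[i - 1];   /* value forwarded from thread i-1 */
        spec_commit(i);
    }
}
```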

EECC722 - Shaaban #13 lec # 10 Fall 2011
Speculative Thread Creation in Hydra
(Figure: thread creation; registers are passed to the new, more speculative thread through the Register Passing Buffer (RPB).)

EECC722 - Shaaban #14 lec # 10 Fall 2011
Overview of Loop-Iteration Thread Speculation
– Parallel regions (loop iterations) are annotated by the compiler, e.g. Begin_Speculation … End_Speculation.
– The hardware uses these annotations to run loop iterations in parallel as speculated threads on a number of CPU cores (a later iteration is assigned to a more speculated thread); each CPU core knows which loop iteration it is running. Speculated threads communicate results through shared memory locations.
– How do the CPUs dynamically prevent data/name dependency violations (i.e. while assuming no data dependencies)?
  - "Later" iterations can't use data before it is written by "earlier" iterations (prevents data dependency violations, i.e. RAW hazards).
  - "Earlier" iterations never see writes by "later" iterations (WAR hazards prevented): multiple views of memory are created by the TLS hardware (memory renaming).
– RAW violation detection: if a "later" iteration (more speculated thread) has used data that an "earlier" iteration (less speculated thread) writes, before the data is actually written (a data dependency violation, i.e. a RAW hazard, which must be detected by the TLS hardware), the later iteration is restarted (detect the dependency violation and restart the computation):
  - All following iterations are halted and restarted, also.
  - All writes by the later iteration are discarded (the speculated work is undone).
(A pseudocode sketch of this restart rule follows.)
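The restart rule above can be summarized in a few lines of hedged C pseudocode (the helper functions are assumed, not Hydra's real interface; thread numbers grow with speculation order):

```c
enum { NUM_THREADS = 4 };

int  thread_has_read(int t, unsigned addr);   /* read-bit lookup (assumed)  */
void discard_speculative_writes(int t);       /* empty t's write buffer     */
void restart_thread(int t);                   /* re-execute t's iteration   */

/* Called when thread `writer` stores to `addr`: any later thread that has
 * already read `addr` used the value too early (RAW violation). */
void check_raw_violation(int writer, unsigned addr) {
    for (int t = writer + 1; t < NUM_THREADS; t++) {
        if (thread_has_read(t, addr)) {
            for (int u = t; u < NUM_THREADS; u++) {
                discard_speculative_writes(u);   /* undo speculated work    */
                restart_thread(u);               /* t and all later threads */
            }
            break;
        }
    }
}
```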

EECC722 - Shaaban #15 lec # 10 Fall 2011
Hydra's Data & Thread Speculation Operations
(Figure: speculation operations.) Speculated threads must commit results in order, once they are no longer speculative (i.e. the computation is no longer speculative); once a RAW hazard is detected by the hardware, the violating computation is restarted.

EECC722 - Shaaban #16 lec # 10 Fall 2011
Hydra Loop Compiling for Speculation
(Figure: the compiler transformation that creates speculated threads from a loop.)

EECC722 - Shaaban #17 lec # 10 Fall 2011
Loop Execution with Thread Speculation
Example of handling a data dependency violation (RAW hazard): if a "later" iteration (more speculative thread) has used data that an "earlier" iteration (less speculative thread) writes, before the data is actually written (the value is read too early), the later iteration is restarted:
– All following iterations are halted and restarted, also.
– All writes by the later iteration are discarded (undoing the speculated work).
Speculated threads communicate results through shared memory locations.

EECC722 - Shaaban #18 lec # 10 Fall 2011
Data Hazard/Dependence Classification
(Program order: instruction I precedes instruction J; "name" = register or memory location. Here, speculated threads communicate results through shared memory locations.)
– I (Write), J (Read) of the same shared name: Read After Write (RAW) hazard if the true data dependence is violated. e.g. I: S.D. F4, 0(R1); J: L.D. F6, 0(R1)
– I (Read), J (Write): Write After Read (WAR) hazard if the antidependence (a name dependence) is violated. e.g. I: L.D. F6, 0(R1); J: S.D. F4, 0(R1)
– I (Write), J (Write): Write After Write (WAW) hazard if the output dependence (a name dependence) is violated. e.g. I: S.D. F4, 0(R1); J: S.D. F6, 0(R1)
– I (Read), J (Read): Read After Read (RAR), not a hazard (no dependence). e.g. I: L.D. F6, 0(R1); J: L.D. F4, 0(R1)
(A concrete sketch follows.)
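To make the classification concrete, a small runnable example (illustrative, not from the slides) in which a single loop body exhibits all three hazard classes when its iterations are speculated in parallel:

```c
#include <stdio.h>

int x = 0;   /* shared memory location: the "shared name" of the slide */

int main(void) {
    int saved[4];
    for (int i = 0; i < 4; i++) {   /* imagine one speculative thread per iteration */
        saved[i] = x;   /* read: RAW w.r.t. the previous iteration's write,
                           WAR w.r.t. the next iteration's write            */
        x = i;          /* write: WAW w.r.t. the other iterations' writes   */
    }
    /* TLS must preserve the sequential output: 0 0 1 2 */
    printf("%d %d %d %d\n", saved[0], saved[1], saved[2], saved[3]);
    return 0;
}
```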

EECC722 - Shaaban #19 lec # 10 Fall 2011
Speculative Data Access in Speculated Threads
(Figure: thread i is the less speculated thread, thread i+1 the more speculated thread; program order runs i before i+1; speculated threads communicate results through shared memory locations.)
– Access in correct program order to the same memory location (i before i+1): RAW; a value read too early is a data dependence violation (detect and restart).
– Reversed access order to the same memory location (i+1 before i): WAR and WAW; a write by i+1 must not be seen by i.

EECC722 - Shaaban #20 lec # 10 Fall 2011
Speculative Data Access in Speculated Threads
To provide the desired (correct) memory behavior for speculative data access, the data/thread speculation hardware must provide:
1. A method for detecting true memory data dependencies, in order to determine when a dependency has been violated (i.e. a RAW hazard).
2. A method for restarting (backing up and re-executing) speculative loads and any instructions that may be dependent upon them when the load causes a violation.
3. A method for buffering any data written during a speculative region of a program (speculative results) so that it may be discarded when a violation occurs, or permanently committed at the right time (i.e. when the thread is no longer speculative) and in correct order (to prevent WAW hazards).

EECC722 - Shaaban #21 lec # 10 Fall 2011
Five Fundamental Speculation Hardware Requirements for Correct Data/Thread Speculation (TLS)
1. Forward data between parallel threads (prevent RAW). A speculative system must be able to forward shared data quickly and efficiently from an earlier thread running on one processor to a later thread running on another.
2. Detect when reads occur too early (a RAW hazard has occurred). If a data value is read by a later thread and subsequently written by an earlier thread, the hardware must notice that the read retrieved incorrect data, since a true dependence violation (RAW hazard) has occurred.
3. Safely discard speculative state after violations (RAW hazards). All speculative changes to the machine state must be discarded after a violation, while no permanent machine state may be lost in the process.
4. Retire speculative writes in the correct order (prevent WAW hazards). Once speculative threads have completed successfully (they are no longer speculative), their state must be added (committed) to the permanent state of the machine in the correct program order, considering the original sequencing of the threads.
5. Provide memory renaming (prevent WAR hazards). The speculative hardware must ensure that an older thread cannot "see" any changes made by later (more speculative) threads, as these would not yet have occurred (i.e. they are future computation) in the original sequential program (i.e. multiple views of memory).

EECC722 - Shaaban #22 lec # 10 Fall 2011
Speculative Hardware/Memory Requirements 1-2
(Figure: requirement 1, forward data from the less speculated thread i to the more speculated thread i+1, preventing RAW; requirement 2, detect when a read by thread i+1 occurs too early, i.e. a RAW hazard or violation.)

EECC722 - Shaaban #23 lec # 10 Fall 2011
Speculative Hardware/Memory Requirements 3-4
(Figure: requirement 3, once a RAW hazard has occurred/been detected, discard the more speculated thread's speculative writes and restart it; requirement 4, commit speculative writes in correct program order, preventing WAW hazards.)

EECC722 - Shaaban #24 lec # 10 Fall 2011
Speculative Hardware/Memory Requirement 5
Memory renaming, to prevent WAR hazards. (Figure: a write to X by thread i+1 is not visible to less speculated threads (thread i here), so no WAR hazard occurs, but it is visible to the even more speculated thread i+2.)

EECC722 - Shaaban #25 lec # 10 Fall 2011
Hydra Thread Level Speculation (TLS) Hardware
(Figure: a speculation coprocessor per core; a modified Data L1 cache with added tag flags; and L2 cache speculation write buffers (one per core), needed to hold speculative data/state.)

EECC722 - Shaaban #26 lec # 10 Fall 2011
Hydra Thread Level Speculation (TLS) Support
(Table: how the five fundamental TLS hardware requirements are met, in summary, including violation handling (i.e. restart) and multiple memory views/memory renaming.)

EECC722 - Shaaban #27 lec # 10 Fall 2011
Data L1 Cache Tag Details
L1 cache modifications to support speculation: extra per-line tag bits, including bits that record writes of more speculated threads (detailed in slides 33-34). (A sketch of the implied tag state follows.)
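A hedged sketch of the extra Data L1 tag state this slide and slides 33-34 imply; the field names are assumptions, not Hydra's RTL:

```c
#include <stdbool.h>

/* Per-line additions to the Data L1 tags (illustrative): */
typedef struct {
    bool read_bit;        /* set on a speculative read; an invalidation from
                             a less speculated thread then signals a RAW
                             violation and forces a restart (slide 33)      */
    bool spec_modified;   /* line holds speculative data; all such lines can
                             be flash-invalidated to discard speculative
                             state (slide 33)                               */
    bool pre_invalidate;  /* records writes of more speculated threads, to
                             be processed when the current thread commits
                             (slide 34)                                     */
} l1_spec_tags;
```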

EECC722 - Shaaban #28 lec # 10 Fall 2011
L2 Speculation Write Buffer Details
(Figure: each core's L2 speculation write buffer holds the thread's speculative stores, i.e. its speculative state.) The buffers are committed into L2 (which holds the permanent, non-speculative state) in correct program order, when the thread is no longer speculative, to prevent WAW hazards (basic requirement 4). (A sketch of the buffer and its in-order commit follows; speculative loads are shown on the next slide.)
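A minimal sketch (assumed structure and helper names, not Hydra's design files) of a per-core L2 speculation write buffer and its in-order commit:

```c
typedef struct { unsigned addr; unsigned char data; } buffered_store;

typedef struct {
    buffered_store entries[64];   /* this thread's speculative stores,
                                     kept in program order               */
    int count;
} l2_spec_buffer;

void l2_write(unsigned addr, unsigned char data);   /* assumed helper */

/* Invoked when the owning thread is no longer speculative; draining the
 * per-thread buffers in original thread order satisfies requirement 4
 * (writes retire in correct order, preventing WAW hazards). */
void commit_buffer(l2_spec_buffer *b) {
    for (int k = 0; k < b->count; k++)
        l2_write(b->entries[k].addr, b->entries[k].data);
    b->count = 0;   /* buffer is now free for a new speculative thread */
}
```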

EECC722 - Shaaban #29 lec # 10 Fall 2011
The Operation of Speculative Loads
– On a Data L1 hit: the load is satisfied from the local Data L1 cache.
– On a local Data L1 miss: check first the thread's own L2 speculation write buffer, then the less speculated (earlier) threads' write buffers, and check the L2 cache last. Do not check more speculated (later) threads' buffers: later writes must not be visible (otherwise WAR hazards would occur).
This operation of speculative loads provides multiple memory views (memory renaming), in which more speculative writes are not visible to less speculative threads; this prevents WAR hazards (memory renaming, to meet requirement 5) and satisfies data dependencies (forwarding data, to meet requirement 1). (A pseudocode sketch follows.)
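The load path above as hedged C pseudocode (the helper names are illustrative; the real hardware merges buffer bytes with the L2 line byte-by-byte, simplified here to whole values):

```c
int  l1_lookup(int t, unsigned addr, unsigned *v);      /* assumed helpers */
int  buffer_lookup(int t, unsigned addr, unsigned *v);
unsigned l2_read(unsigned addr);

/* Thread IDs are ordered: 0 = least speculative ... self = this thread. */
unsigned spec_load(int self, unsigned addr) {
    unsigned v;
    if (l1_lookup(self, addr, &v))      /* Data L1 hit */
        return v;
    /* L1 miss: check own buffer first, then earlier threads' buffers,
     * newest-first so the latest earlier write wins (req. 1: forwarding). */
    for (int t = self; t >= 0; t--)
        if (buffer_lookup(t, addr, &v))
            return v;
    /* Buffers of threads later than `self` are never consulted: their
     * writes must stay invisible (req. 5: memory renaming, no WAR). */
    return l2_read(addr);               /* L2 holds the permanent state */
}
```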

EECC722 - Shaaban #30 lec # 10 Fall 2011
Reading L2 Cache Speculative Buffers
(Figure: similar to the previous slide.) Speculative load operation on a local Data L1 miss: first check the thread's own, then the less speculated (earlier) threads' write buffers, then the L2 cache; do not check more speculated threads' buffers, since later writes must not be visible (otherwise WAR hazards). This meets requirement 1 (forward results, prevent RAW) and requirement 5 (multiple memory views, memory renaming).

EECC722 - Shaaban #31 lec # 10 Fall 2011
The Operation of Speculative Stores
– A speculative store writes to the local Data L1 cache (on a Data L1 cache write hit) and to the thread's own L2 speculation write buffer (this supports basic speculative hardware/memory requirements 2-3).
– The store is also broadcast on the write bus, similar to invalidate cache coherence protocols; more speculated threads that have already read the location detect the RAW violation and restart (RAW detection, requirement 2).
– The L2 speculation write buffers are committed into L2 (which holds the permanent, non-speculative state) in correct program order, when the thread is no longer speculative (this satisfies fundamental TLS requirement 4, preventing WAW).
(A sketch of the store path and its RAW check follows.)
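The store path and its bus-side RAW check, sketched with assumed helper names (the broadcast/invalidate behavior mirrors the slide's comparison to invalidate coherence protocols):

```c
void l1_write(int t, unsigned addr, unsigned char v);     /* assumed helpers */
void buffer_append(int t, unsigned addr, unsigned char v);
void broadcast_write(int t, unsigned addr);
void l1_invalidate(int t, unsigned addr);
int  read_bit_set(int t, unsigned addr);
void restart_from(int t);
void set_pre_invalidate(int t, unsigned addr);

void spec_store(int self, unsigned addr, unsigned char v) {
    l1_write(self, addr, v);        /* update own Data L1 (write hit)      */
    buffer_append(self, addr, v);   /* record in own L2 speculation buffer */
    broadcast_write(self, addr);    /* announce on the write bus           */
}

/* Each core snoops writes broadcast by other threads: */
void snoop_write(int self, int writer, unsigned addr) {
    if (writer < self) {            /* write from a less speculated thread */
        l1_invalidate(self, addr);  /* forwarding, as in slide 32          */
        if (read_bit_set(self, addr))
            restart_from(self);     /* read happened too early: RAW (req. 2),
                                       restart self and all later threads  */
    } else {
        set_pre_invalidate(self, addr);  /* deferred; processed at commit
                                            (slide 34)                     */
    }
}
```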

EECC722 - Shaaban #32 lec # 10 Fall 2011
Hydra's Handling of the Five Basic Speculation Hardware Requirements for Correct Data/Thread Speculation
1. Forward data between parallel threads (RAW):
– When a speculative thread writes data over the write bus, all more speculative threads that may need the data have their current copy of that cache line invalidated in the primary cache (Data L1). This is similar to the way the system works during non-speculative operation (an invalidate cache coherence protocol).
– If any of the threads subsequently need the new speculative data forwarded to them, they will miss in their primary cache and access the secondary cache, checking their own and the less speculated threads' L2 write buffers (the speculative-load operation seen in the earlier slides). The speculative data contained in the write buffers of the current or earlier threads replaces data returned from the secondary cache on a byte-by-byte basis, just before the composite line is returned to the processor and primary cache.

EECC722 - Shaaban #33 lec # 10 Fall 2011
Hydra's Handling of the Five Basic Speculation Hardware Requirements for Correct Data/Thread Speculation
2. Detect when reads occur too early (detect RAW hazards):
– Primary cache (Data L1) read bits are set to mark any reads that may cause violations.
– Subsequently, if a write to that address from an earlier (less speculated) thread invalidates the address, a violation is detected, and the thread is restarted.
3. Safely discard speculative state after violations (in the Data L1 and the L2 speculation buffers):
– Since all permanent machine state in Hydra is always maintained within the secondary cache, anything in the primary caches and the secondary cache speculation buffers may be invalidated at any time without risking a loss of permanent state. As a result, any lines in the primary cache containing speculative data (marked with a special modified bit) may simply be invalidated all at once to clear any speculative state from that primary cache. In parallel with this operation, the secondary cache speculation buffer for the thread may be emptied to discard any speculative data written by the thread.

EECC722 - Shaaban #34 lec # 10 Fall 2011
Hydra's Handling of the Five Basic Speculation Hardware Requirements for Correct Data/Thread Speculation
4. Retire speculative writes in the correct order (prevent WAW hazards):
– Separate secondary cache speculation buffers are maintained for each thread. As long as these are drained (committed) into the secondary (L2) cache in the original program sequence of the threads, i.e. when the threads' work is no longer speculative, they will reorder speculative memory references correctly.
5. Provide memory renaming (prevent WAR hazards):
– Each processor can only read data written by itself or by earlier (less speculated) threads when reading its own primary cache or the secondary cache speculation buffers, as seen earlier in the speculative-load operation.
– Writes from later (more speculative) threads don't cause immediate invalidations in the primary cache, since these writes should not be visible to earlier (less speculative) threads (multiple memory views).
– However, these "ignored" invalidations (generated by more speculative threads) are recorded using an additional pre-invalidate primary cache bit associated with each line, because they must be processed before a different speculative or non-speculative thread executes on this processor.
– If future threads have written to a particular line in the primary cache, the pre-invalidate bit for that line is set. When the current thread completes, these bits allow the processor to quickly simulate the effect of all stored invalidations caused by all writes from later processors, all at once, before a new (even more speculative) thread begins execution on this processor. (A sketch follows.)
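A hedged sketch of what happens when a thread commits, combining requirement 4 (in-order buffer drain) with the pre-invalidate processing described above; the names and cache geometry are illustrative assumptions:

```c
enum { NUM_L1_LINES = 512 };                       /* assumed geometry */

void drain_spec_buffer_in_order(int core);         /* assumed helpers  */
int  pre_invalidate_bit(int core, unsigned line);
void l1_invalidate_line(int core, unsigned line);
void clear_pre_invalidate_bit(int core, unsigned line);

void on_thread_commit(int core) {
    /* Requirement 4: drain this thread's L2 speculation buffer in the
     * original program order of the threads (prevents WAW hazards). */
    drain_spec_buffer_in_order(core);

    /* Requirement 5 cleanup: apply, all at once, the invalidations that
     * were deferred because they came from more speculative threads,
     * before a new (even more speculative) thread starts on this core. */
    for (unsigned line = 0; line < NUM_L1_LINES; line++)
        if (pre_invalidate_bit(core, line)) {
            l1_invalidate_line(core, line);
            clear_pre_invalidate_bit(core, line);
        }
}
```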

EECC722 - Shaaban #35 lec # 10 Fall 2011
Thread Speculation Performance
(Figure: speedup results; the "No TLS" label marks the baseline.) Results are representative of entire uniprocessor applications, simulated with accurate modeling of Hydra's memory system and hardware speculation support.

EECC722 - Shaaban #36 lec # 10 Fall 2011
Hydra Conclusions
Hydra offers a number of advantages:
– Good performance on parallel applications.
– Promising performance on difficult-to-parallelize sequential (single-threaded) applications, using data/Thread Level Speculation (TLS) mechanisms.
– Scalable, modular design.
– Low hardware overhead support for speculative thread parallelism (compared to other TLS architectures), yet it greatly increases the number of parallelizable applications, the main goal of TLS.

EECC722 - Shaaban #37 lec # 10 Fall 2011
Other Thread Level Speculation (TLS) Efforts: Wisconsin Multiscalar (1995)
This CMP-based design proposed the first reasonable hardware to implement TLS. Unlike Hydra, Multiscalar implements a ring-like network between all of the processors to allow direct register-to-register communication:
– Along with hardware-based thread sequencing, this type of communication allows much smaller threads to be exploited, at the expense of more complex processor cores.
The designers proposed two different speculative memory systems to support the Multiscalar core:
– The first was a unified primary cache, or address resolution buffer (ARB). Unfortunately, the ARB has most of the complexity of Hydra's secondary cache buffers at the primary cache level, making it difficult to implement.
– Later, they proposed the speculative versioning cache (SVC). The SVC uses write-back primary caches to buffer speculative writes in the primary caches, using a sophisticated coherence scheme.

EECC722 - Shaaban #38 lec # 10 Fall 2011
Other Thread Level Speculation (TLS) Efforts: Carnegie-Mellon Stampede
This CMP-with-TLS proposal is very similar to Hydra, including the use of software speculation handlers; however, the hardware is simpler than Hydra's. The design uses write-back primary caches to buffer writes, similar to those in the SVC, and sophisticated compiler technology to explicitly mark all memory references that require forwarding to another speculative thread. Their simplified SVC must drain its speculative contents as each thread completes, unfortunately resulting in heavy bursts of bus activity.

EECC722 - Shaaban #39 lec # 10 Fall 2011
Other Thread Level Speculation (TLS) Efforts: MIT M-machine
This CMP design has three processors that share a primary cache and can communicate register-to-register through a crossbar. Each processor can also switch dynamically among several threads (fine-grain multithreading, not SMT). As a result, the hardware connecting the processors together is quite complex and slow. However, programs executed on the M-machine can be parallelized using very fine-grain mechanisms that are impossible on an architecture that shares only memory outside of the processor cores, like Hydra. Performance results show that on typical applications extremely fine-grained parallelization is often not as effective as parallelism at the levels that Hydra can exploit; the overhead incurred by frequent synchronization reduces its effectiveness.