Multithreaded Processors


Pipeline Hazards

LW   r1, 0(r2)
LW   r5, 12(r1)
ADDI r5, r5, #12
SW   12(r1), r5

• Each instruction may depend on the one before it – without bypassing, interlocks are needed
• Bypassing cannot completely eliminate interlocks or delay slots

Multithreading

How can we guarantee no dependencies between instructions in a pipeline? One way is to interleave execution of instructions from different program threads on the same pipeline.

Interleave 4 threads, T1-T4, on a non-bypassed 5-stage pipe:

T1: LW   r1, 0(r2)
T2: ADD  r7, r1, r4
T3: XORI r5, r4, #12
T4: SW   0(r7), r5
T1: LW   r5, 12(r1)

The last instruction in a thread always completes writeback before the next instruction in the same thread reads the regfile.
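
To see why four threads suffice here, note that with N threads in fixed interleave, consecutive instructions from the same thread are N cycles apart. Assuming the classic 5-stage timing (an instruction fetched at cycle c reads registers at c+1 and writes them back at c+4), the next instruction of the same thread reads at c+N+1, so N >= 4 guarantees the read follows the writeback. A minimal C sketch of that arithmetic:

    #include <stdio.h>

    int main(void) {
        const int writeback = 4;   /* regfile write happens 4 cycles after fetch (stage 5) */
        for (int n = 1; n <= 5; n++) {
            int next_read = n + 1; /* next same-thread inst reads regs n+1 cycles after this fetch */
            printf("N=%d threads: %s\n", n,
                   next_read > writeback ? "no interlocks needed"
                                         : "interlock still required");
        }
        return 0;
    }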

CDC 6600 Peripheral Processors (Cray, 1965)

First multithreaded hardware:
• 10 “virtual” I/O processors
• fixed interleave on a simple pipeline
• pipeline has a 100 ns cycle time
• each processor executes one instruction every 1000 ns
• accumulator-based instruction set to reduce processor state

Simple Multithreaded Pipeline

[figure: 5-stage pipeline with a thread-select input choosing which thread fetches each cycle]
The thread select has to be carried down the pipeline to ensure the correct state bits are read/written at each pipe stage.

Multithreading Costs

• Appears to software (including the OS) as multiple slower CPUs
• Each thread requires its own user state: GPRs, PC
• Each thread also needs its own OS control state: virtual-memory page-table base register, exception-handling registers
• Other costs?
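
As a rough inventory of that replicated state, a per-thread hardware context might look like the struct below; this is a sketch, and the field names are illustrative rather than drawn from any particular ISA.

    #include <stdint.h>

    /* Illustrative per-thread context for a multithreaded core: everything
     * here must be replicated once per hardware thread. */
    typedef struct {
        /* user state */
        uint64_t gpr[32];           /* general-purpose registers */
        uint64_t pc;                /* program counter */
        /* OS control state */
        uint64_t page_table_base;   /* virtual-memory page-table base register */
        uint64_t exception_pc;      /* exception-handling registers */
        uint64_t exception_cause;
    } hw_thread_context;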

Thread Scheduling Policies

• Fixed interleave (CDC 6600 PPUs, 1965): each of N threads executes one instruction every N cycles; if a thread is not ready to go in its slot, a pipeline bubble is inserted
• Software-controlled interleave (TI ASC PPUs, 1971): the OS allocates S pipeline slots amongst N threads (S > N), and hardware performs fixed interleave over the S slots. Think of a wheel with a thread in each slot: if the slot's thread is ready to go, it goes; otherwise a NOP, i.e., a pipeline bubble, issues. (A sketch follows below.)
• Hardware-controlled thread scheduling (HEP, 1982): hardware keeps track of which threads are ready to go and picks the next thread to execute based on a hardware priority scheme
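
Here is the promised sketch of the software-controlled interleave as a slot wheel, in C. The slot assignment and the readiness test are made-up stand-ins; the point is that hardware blindly rotates over the S slots and issues a bubble when a slot's thread is stalled.

    #include <stdbool.h>
    #include <stdio.h>

    #define S 8                                            /* pipeline slots allocated by the OS */
    static int slot_thread[S] = {0, 1, 0, 2, 0, 1, 0, 2};  /* OS's slot-to-thread allocation */

    static bool thread_ready(int t) { return t != 2; }     /* stub: pretend thread 2 is stalled */

    /* One issue decision per cycle: hardware performs fixed interleave over the S slots. */
    static int pick_issue(long cycle) {
        int t = slot_thread[cycle % S];
        return thread_ready(t) ? t : -1;                   /* -1 = NOP, i.e., pipeline bubble */
    }

    int main(void) {
        for (long c = 0; c < S; c++) {
            int t = pick_issue(c);
            if (t < 0) printf("cycle %ld: bubble\n", c);
            else       printf("cycle %ld: issue from thread %d\n", c, t);
        }
        return 0;
    }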

What “Grain” of Multithreading?

• So far we have assumed fine-grained multithreading: the CPU switches to a different thread every cycle. When does this make sense?
• Coarse-grained multithreading: the CPU switches to a different thread every few cycles.

Multithreading Design Choices

[figure: CPU and register file with an L1 instruction cache, L1 data cache, unified L2 cache, and main memory]
• Context switch to another thread every cycle, or on a hazard, L1 miss, L2 miss, or network request
• Per-thread state and context-switch overhead
• Managing interactions between threads in the memory hierarchy

Denelcor HEP (Burton Smith, 1982)

First commercial machine to use hardware threading in the main CPU:
• 120 threads per processor
• 10 MHz clock rate
• up to 8 processors
• precursor to the Tera MTA (Multithreaded Architecture)

Tera MTA Overview

• Up to 256 processors
• Up to 128 active threads per processor
• Processors and memory modules populate a sparse 3D torus interconnection fabric
• Flat, shared main memory: no data cache; sustains one main-memory access per cycle per processor
• 50 W/processor @ 260 MHz
(Aside: SGI bought Cray, and Tera was a spin-off; 1997 integer-sort press release.)

MTA Instruction Format

• Three operations packed into a 64-bit instruction word (short VLIW): one memory operation, one arithmetic operation, plus one arithmetic or branch operation
• Memory operations incur ~150 cycles of latency
• An explicit 3-bit “lookahead” field in the instruction gives the number of subsequent instructions (0-7) that are independent of this one
  – cf. instruction grouping in VLIW
  – allows fewer threads to fill the machine pipeline
  – used for variable-sized branch delay slots
• Thread creation and termination instructions
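
In code, consuming the lookahead field would amount to a shift-and-mask; the bit position used below is made up for illustration, since the real MTA encoding is not given here.

    #include <stdint.h>

    /* Hypothetical layout: suppose the 3-bit lookahead field sat in bits 63..61
     * of the 64-bit instruction word. The value (0-7) tells the issue logic how
     * many following instructions from this thread are independent of this one,
     * so they may issue before its ~150-cycle memory operation completes. */
    static inline unsigned lookahead(uint64_t insn_word) {
        return (unsigned)(insn_word >> 61) & 0x7;
    }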

MTA Multithreading

• Each processor supports 128 active hardware threads: 128 SSWs, 1024 branch-target registers, 4096 general-purpose registers
• Every cycle, one instruction from one active thread is launched into the pipeline
• The instruction pipeline is 21 cycles long, so at best a single thread can issue one instruction every 21 cycles; at a 260 MHz clock rate, the effective single-thread issue rate is 260/21 = 12.4 MHz

Per-thread state:
• 32 64-bit general-purpose registers (R0-R31): unified integer/floating-point register set; R0 hard-wired to zero
• 8 64-bit branch-target registers (T0-T7): load the branch-target address before the branch instruction; T0 contains the address of the user exception handler
• 1 64-bit stream status word (SSW): includes a 32-bit program counter, four condition-code registers, and the floating-point rounding mode

MTA Pipeline

[figure: issue pool with instruction fetch and write pool; memory pool and retry pool feeding the interconnection network and the memory pipeline]
If the memory unit is busy or a sync operation fails, the operation retries: it just goes around the memory pipeline again.

Coarse-Grain Multithreading

• The Tera MTA was designed for supercomputing applications with large data sets and low locality: no data cache, and many parallel threads needed to hide the large memory latency
• Other applications are more cache friendly: there are few pipeline bubbles when the cache is getting hits, so just add a few threads to hide occasional cache-miss latencies – swap threads on cache misses
(Aside: the Tera was not very successful, with only two machines sold; the company later changed its name to Cray.)
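
In pseudocode terms, a coarse-grained core's policy reduces to "run until a miss, then switch." The sketch below uses stub functions in place of real hardware; the miss pattern is invented just to drive the loop.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { int id; } thread_t;

    static thread_t pool[2] = {{0}, {1}};
    static int rr = 0;
    static thread_t *next_ready_thread(void) { return &pool[rr++ % 2]; }

    /* Stub: pretend every 5th instruction misses in the cache. */
    static bool issue_one_instruction(thread_t *t) {
        static long n = 0;
        printf("thread %d issues\n", t->id);
        return (++n % 5) != 0;
    }

    int main(void) {
        thread_t *cur = next_ready_thread();
        for (int i = 0; i < 15; i++) {
            if (!issue_one_instruction(cur))   /* cache miss: swap threads to hide the latency */
                cur = next_ready_thread();
        }
        return 0;
    }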

MIT Alewife

Modified SPARC chips:
• register windows hold different thread contexts
• up to four threads per node
• thread switch on a local cache miss

IBM PowerPC RS64-III (Pulsar)

• Commercial coarse-grain multithreaded CPU
• Based on PowerPC with a quad-issue, in-order, five-stage pipeline
• Each physical CPU supports two virtual CPUs
• On an L2 cache miss, the pipeline is flushed and execution switches to the second thread
  – the short pipeline minimizes the flush penalty (4 cycles), small compared to memory-access latency
  – flushing the pipeline also simplifies exception handling

Speculative, Out-of-Order Superscalar Processor

[figure: in-order front end (PC, fetch, decode & rename) feeding a reorder buffer; out-of-order execute stage with branch unit, ALU, and MEM with store buffer and D$, backed by a physical register file; in-order commit]

Superscalar Machine Efficiency

[figure: instruction issue slots vs. time for a 4-wide machine, showing completely idle cycles (vertical waste) and partially filled cycles, i.e., IPC < 4 (horizontal waste)]
Why horizontal waste? Why vertical waste?
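
The two kinds of waste are easy to make concrete. Given the issue width and a per-cycle issue count, idle cycles contribute vertical waste and partially filled cycles contribute horizontal waste; the trace below is invented for illustration.

    #include <stdio.h>

    int main(void) {
        const int W = 4;                       /* issue width */
        int issued[] = {3, 0, 2, 4, 0, 1};     /* instructions issued per cycle (example trace) */
        int cycles = sizeof issued / sizeof *issued;

        int vertical = 0, horizontal = 0;
        for (int c = 0; c < cycles; c++) {
            if (issued[c] == 0) vertical += W;               /* completely idle cycle */
            else                horizontal += W - issued[c]; /* unused slots in a busy cycle */
        }
        printf("used %d of %d slots; vertical waste %d, horizontal waste %d\n",
               cycles * W - vertical - horizontal, cycles * W, vertical, horizontal);
        return 0;
    }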

Vertical Multithreading

[figure: issue slots vs. time with a second thread interleaved cycle-by-cycle; partially filled cycles, i.e., IPC < 4 (horizontal waste), remain]
Cycle-by-cycle interleaving of a second thread removes vertical waste.

Ideal Multithreading for Superscalar

[figure: issue slots vs. time with every slot in every cycle filled by some thread]
Interleave multiple threads into multiple issue slots with no restrictions.

Simultaneous Multithreading

• Add multiple contexts and fetch engines to a wide out-of-order superscalar processor [Tullsen, Eggers, Levy, UW, 1995]
• The OOO instruction window already has most of the circuitry required to schedule from multiple threads
• Any single thread can utilize the whole machine

Comparison of Issue Capabilities (courtesy of Susan Eggers; used with permission)

[figure: issue slots vs. time (processor cycles) for a superscalar, traditional (fine-grain) multithreading, a single-chip multiprocessor, and SMT, with threads 1-5 color-coded and horizontal/vertical waste marked]
• Superscalar: only a single thread; long-latency instructions with lots of dependent instructions leave issue slots empty
• Fine-grain multithreading: limited by the amount of ILP in each thread, just as on the superscalar
• Multiprocessor: each processor issues instructions from its own thread, so performance suffers when one thread stalls or has little ILP
• SMT: can fill slots from any ready thread, attacking both kinds of waste

From Superscalar to SMT

SMT is an out-of-order superscalar extended with hardware to support multiple executing threads.
[figure: fetch unit and per-thread contexts feeding instruction queues, renaming registers, and shared functional units, with an ICOUNT fetch policy]
• No special hardware is needed to schedule instructions from different threads onto the FUs; the same OOO mechanism as in a superscalar is used for instruction issue
• The renaming hardware eliminates false dependences both within a thread (just like a conventional superscalar) and between threads: it maps the thread-specific architectural registers of all threads onto a pool of physical registers
• Instructions are issued when their operands are available, without regard to thread (the scheduler does not look at thread IDs); after renaming, instructions are known by their physical register names
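
The cross-thread renaming point can be sketched as a map from (thread, architectural register) to one shared physical register pool. The structures below are illustrative, sized to match the 8 × 32 + 96 register file mentioned on the next slide; free-list recycling at commit is omitted.

    #include <stdint.h>

    #define NTHREADS 8
    #define NARCH    32
    #define NRENAME  96     /* physical registers beyond the architectural state */

    /* Per-thread rename map onto one shared physical register file. After
     * renaming, instructions carry only physical register numbers, so the
     * issue logic never needs to look at thread IDs. */
    static uint16_t rename_map[NTHREADS][NARCH];

    static uint16_t free_list[NRENAME];    /* physical regs not holding live state */
    static int      free_count = NRENAME;  /* assume populated at reset */

    uint16_t rename_dest(int thread, int arch_reg) {
        uint16_t phys = free_list[--free_count];  /* allocate a fresh physical register */
        rename_map[thread][arch_reg] = phys;      /* old mapping is freed at commit (not shown) */
        return phys;
    }

    uint16_t rename_src(int thread, int arch_reg) {
        return rename_map[thread][arch_reg];      /* latest producer for this thread's register */
    }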

From Superscalar to SMT

• Extra pipeline stages for accessing the thread-shared register files
• Register file size: 8 × 32 registers for the architectural state, plus 96 additional registers for register renaming

From Superscalar to SMT

Fetch from the two highest-throughput threads. Why? A fetch unit that can keep up with the simultaneous multithreaded execution engine should favor the threads that have the fewest instructions waiting to be executed – the ones making the best progress through the machine. This ICOUNT policy yields a 40% increase in IPC over round-robin. (A sketch follows below.)
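
A minimal sketch of the ICOUNT heuristic: each cycle, fetch from the runnable thread with the fewest instructions in the front end and issue queues. The counters here are assumed to be maintained elsewhere.

    #include <limits.h>
    #include <stdbool.h>

    #define NTHREADS 8

    /* insts_in_flight[t] counts thread t's instructions in decode, rename, and
     * the issue queues. ICOUNT picks the thread with the smallest count: it is
     * making the best progress and is least likely to clog the window. */
    int icount_pick(const int insts_in_flight[NTHREADS], const bool runnable[NTHREADS]) {
        int best = -1, best_count = INT_MAX;
        for (int t = 0; t < NTHREADS; t++) {
            if (runnable[t] && insts_in_flight[t] < best_count) {
                best = t;
                best_count = insts_in_flight[t];
            }
        }
        return best;   /* -1 if no thread may fetch this cycle */
    }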

From Superscalar to SMT

Small items, replicated per thread or tagged with a thread ID:
• per-thread program counters
• per-thread return stacks
• per-thread bookkeeping for instruction retirement, traps, and instruction-dispatch-queue flushes
• thread identifiers, e.g., on BTB and TLB entries
None of this small stuff endangers the critical path; most of the mechanisms already exist and are either duplicated for each thread or applied to only one thread at a time. Instructions carry a thread ID for retirement, traps, and queue flushes, but it is not used for scheduling. A hardware structure points to all the instructions of each thread, and a flush mechanism is needed on a branch misprediction.
This fairly straightforward extension of the OOO superscalar, plus an n-fold performance boost, was responsible for the technology transfer to chip manufacturers.

Simultaneous Multithreaded Processor

[figure: per-thread PCs and rename tables feed fetch and decode & rename; shared reorder buffer, physical register file, and execution units (branch unit, ALU, MEM, store buffer, D$); in-order commit]
The OOO execution unit does not see thread identifiers, only physical register specifiers.

SMT Design Issues

• Which thread to fetch from next? We don't want to clog the instruction window with a thread that has many stalls ⇒ try to fetch from the thread that has the fewest instructions in the window
• Locks: a virtual CPU spinning on a lock executes many instructions but gets nowhere ⇒ add ISA support to lower the priority of a thread spinning on a lock (see the sketch below)
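
The lock problem is why x86 later gained the PAUSE instruction: a spinning logical processor should hint that it is doing no useful work so the core can favor its sibling. A conventional spin-wait loop using the real _mm_pause() intrinsic (the lock variable itself is just illustrative):

    #include <immintrin.h>      /* _mm_pause */
    #include <stdatomic.h>      /* C11 atomics */

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            _mm_pause();        /* hint: spinning, yield issue resources to the sibling thread */
    }

    void spin_unlock(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }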

Intel Pentium-4 Xeon Processor

• Hyperthreading == SMT
• Dual physical processors, each 2-way SMT
• Logical processors share nearly all resources of the physical processor: caches, execution units, branch predictors
• Die-area overhead of hyperthreading ~ 5%
• When one logical processor is stalled, the other can make progress
  – no logical processor can use all the entries in the queues when two threads are active
  – the goal was for a processor running only one active software thread to run at the same speed with or without hyperthreading; the load-store buffer in the L1 cache doesn't partition that cleanly, hence a ~15% slowdown