Presentation transcript:

CS 152 Computer Architecture and Engineering
Lecture 16 - VLIW Machines and Statically Scheduled ILP
Krste Asanovic
Electrical Engineering and Computer Sciences, University of California at Berkeley
http://www.eecs.berkeley.edu/~krste
http://inst.eecs.berkeley.edu/~cs152

Last time in Lecture 15
- Unified physical register file machines remove data values from the ROB
  - All values are read and written only during execution
  - Only register tags are held in the ROB
- Allocate resources (ROB slot, destination physical register, memory reorder queue location) during decode
- Free resources on commit
- Speculative store buffer holds store values before commit to allow load-store forwarding
- Later loads can execute past earlier stores when addresses are known, or when no dependence is predicted

Little's Law
Parallelism = Throughput * Latency
(Diagram: operations in flight shown as throughput per cycle versus operation latency in cycles, one operation per cell.)

Example Pipelined ILP Machine
- Two integer units, single-cycle latency
- Two load/store units, three-cycle latency
- Two floating-point units, four-cycle latency
- Max throughput: six instructions per cycle
(Diagram: one pipeline stage per cycle of latency.)
How much instruction-level parallelism (ILP) is required to keep the machine pipelines busy?
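A back-of-the-envelope application of Little's Law to the configuration above (the arithmetic is worked here, not read from the slide): Parallelism = sum over units of throughput * latency = 2*1 + 2*3 + 2*4 = 16, so roughly 16 independent instructions must be in flight to keep all pipelines busy.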

Superscalar Control Logic Scaling
- Issue width W; lifetime L = number of previously issued instructions still in flight
- Each issued instruction must check against W*L instructions, i.e., hardware grows as W*(W*L)
- For in-order machines, L is related to pipeline latencies
- For out-of-order machines, L also includes time spent in instruction buffers (instruction window or ROB)
- As W increases, a larger instruction window is needed to find enough parallelism to keep the machine busy => greater L
=> Out-of-order control logic grows faster than W^2 (~W^3)
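To make the scaling concrete (illustrative numbers, not from the slide): with W = 4 and L = 8, each of the 4 issuing instructions is checked against 4*8 = 32 in-flight instructions, about 4*32 = 128 comparisons per cycle; doubling W to 8 and letting L grow to 16 pushes this toward 8*(8*16) = 1024 comparisons per cycle.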

Out-of-Order Control Complexity: MIPS R10000 Control Logic
[SGI/MIPS Technologies Inc., 1995]

Sequential ISA Bottleneck
Sequential source code (a = foo(b); for (i=0, i< )
  -> Superscalar compiler: find independent operations, schedule operations
  -> Sequential machine code
  -> Superscalar processor: check instruction dependencies, schedule execution

VLIW: Very Long Instruction Word
Instruction format: Int Op 1 | Int Op 2 | Mem Op 1 | Mem Op 2 | FP Op 1 | FP Op 2
Example machine: two integer units (single-cycle latency), two load/store units (three-cycle latency), two floating-point units (four-cycle latency)
- Multiple operations packed into one instruction
- Each operation slot is for a fixed function
- Constant operation latencies are specified
- Architecture requires guarantee of:
  - Parallelism within an instruction => no cross-operation RAW check
  - No data use before data ready => no data interlocks

VLIW Compiler Responsibilities
- Schedules to maximize parallel execution
  - Guarantees intra-instruction parallelism
- Schedules to avoid data hazards (no interlocks)
  - Typically separates operations with explicit NOPs

Early VLIW Machines
- FPS AP120B (1976)
  - Scientific attached array processor
  - First commercial wide instruction machine
  - Hand-coded vector math libraries using software pipelining and loop unrolling
- Multiflow Trace (1987)
  - Commercialization of ideas from Fisher's Yale group, including "trace scheduling"
  - Available in configurations with 7, 14, or 28 operations/instruction
  - 28 operations packed into a 1024-bit instruction word
- Cydrome Cydra-5 (1987)
  - 7 operations encoded in a 256-bit instruction word
  - Rotating register file

Loop Execution
Source:
  for (i=0; i<N; i++)
    B[i] = A[i] + C;
Compiled code:
  loop: ld   f1, 0(r1)
        add  r1, 8
        fadd f2, f0, f1
        sd   f2, 0(r2)
        add  r2, 8
        bne  r1, r3, loop
Schedule onto the VLIW slots (Int1, Int2, M1, M2, FP+, FPx): the ld and add r1 issue first, the fadd must wait out the load latency, and the sd, add r2, and bne follow the fadd latency.
How many FP ops/cycle?
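A rough count, assuming the unit latencies listed earlier (three-cycle loads, four-cycle fadds) rather than numbers read off the transcribed slide: the ld issues in cycle 1, the dependent fadd no earlier than cycle 4, and the dependent sd no earlier than cycle 8, so each iteration occupies about 8 cycles for a single fadd, roughly 0.125 FP ops/cycle.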

Loop Unrolling
  for (i=0; i<N; i++)
    B[i] = A[i] + C;
Unroll inner loop to perform 4 iterations at once:
  for (i=0; i<N; i+=4) {
    B[i]   = A[i]   + C;
    B[i+1] = A[i+1] + C;
    B[i+2] = A[i+2] + C;
    B[i+3] = A[i+3] + C;
  }
Need to handle values of N that are not multiples of the unrolling factor with a final cleanup loop (see the sketch below).
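One minimal way to write the cleanup loop the slide mentions (a sketch; the function and variable names are mine, not from the lecture):

  /* Unrolled by 4 with a cleanup loop for the remaining 0-3 elements. */
  void add_const(double *B, const double *A, double C, int N) {
      int i;
      int limit = N - (N % 4);          /* last index handled by the unrolled loop */
      for (i = 0; i < limit; i += 4) {  /* main unrolled loop: 4 iterations per pass */
          B[i]   = A[i]   + C;
          B[i+1] = A[i+1] + C;
          B[i+2] = A[i+2] + C;
          B[i+3] = A[i+3] + C;
      }
      for (; i < N; i++)                /* cleanup loop: at most 3 iterations */
          B[i] = A[i] + C;
  }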

Scheduling Loop Unrolled Code
Unroll 4 ways:
  loop: ld   f1, 0(r1)
        ld   f2, 8(r1)
        ld   f3, 16(r1)
        ld   f4, 24(r1)
        add  r1, 32
        fadd f5, f0, f1
        fadd f6, f0, f2
        fadd f7, f0, f3
        fadd f8, f0, f4
        sd   f5, 0(r2)
        sd   f6, 8(r2)
        sd   f7, 16(r2)
        sd   f8, 24(r2)
        add  r2, 32
        bne  r1, r3, loop
Schedule onto the VLIW slots (Int1, Int2, M1, M2, FP+, FPx): the four loads (and add r1) issue two per cycle, the fadds then issue one per cycle through the FP+ slot as their operands arrive, and each sd follows its fadd after the fadd latency, with add r2 and bne folded into the final cycles.
How many FLOPS/cycle?
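A derived count under the same latency assumptions as before (not read from the transcribed slide): the loads finish by cycle 2, the four fadds occupy the single FP+ slot in cycles 4-7, and the dependent stores land in cycles 8-11, so one pass through the unrolled loop takes about 11 cycles for 4 fadds, roughly 0.36 FLOPS/cycle.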

Software Pipelining
Unroll 4 ways first:
  loop: ld   f1, 0(r1)
        ld   f2, 8(r1)
        ld   f3, 16(r1)
        ld   f4, 24(r1)
        add  r1, 32
        fadd f5, f0, f1
        fadd f6, f0, f2
        fadd f7, f0, f3
        fadd f8, f0, f4
        sd   f5, 0(r2)
        sd   f6, 8(r2)
        sd   f7, 16(r2)
        add  r2, 32
        sd   f8, -8(r2)
        bne  r1, r3, loop
Schedule: the loads, fadds, and stores of different unrolled iterations overlap in the VLIW slots, forming a prolog (pipeline fill), a steady-state kernel that loops, and an epilog (pipeline drain).
How many FLOPS/cycle?
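A rough steady-state count, derived from the schedule rather than read off the transcribed slide: each pass through the kernel still needs 4 loads and 4 stores, and with two memory slots per cycle that is at least 4 cycles per pass; those same 4 cycles also complete 4 fadds, so once the pipeline is full the loop sustains about 1 FLOP/cycle.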

Software Pipelining vs. Loop Unrolling
(Figure: performance versus time for a loop-unrolled schedule, which pays startup and wind-down overhead on every unrolled iteration, and for a software-pipelined schedule, which pays them only at the loop boundaries.)
Software pipelining pays startup/wind-down costs only once per loop, not once per iteration.

What if there are no loops?
- Branches limit basic block size in control-flow intensive irregular code
- Difficult to find ILP in individual basic blocks
(Figure: a control-flow graph of small basic blocks.)

Trace Scheduling [Fisher, Ellis]
- Pick a string of basic blocks, a trace, that represents the most frequent branch path
- Use profiling feedback or compiler heuristics to find common branch paths
- Schedule the whole "trace" at once
- Add fixup code to cope with branches jumping out of the trace
(A compiler-side sketch of the idea follows.)
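A highly simplified sketch of the trace-selection step on a toy profiled control-flow graph; the block numbering, edge counts, and greedy policy are invented for illustration, and real trace schedulers are considerably more involved:

  #include <stdio.h>

  /* Toy profiled CFG: each block has up to two successors (-1 = none) and
     per-edge execution counts. All numbers here are illustrative. */
  #define NBLK 6
  int succ[NBLK][2]  = {{1,2},{3,-1},{3,-1},{4,5},{-1,-1},{-1,-1}};
  int count[NBLK][2] = {{90,10},{100,0},{10,0},{70,20},{0,0},{0,0}};

  int main(void) {
      int in_trace[NBLK] = {0};
      int b = 0;                          /* seed: the function entry block */
      printf("trace:");
      while (b != -1 && !in_trace[b]) {
          in_trace[b] = 1;
          printf(" B%d", b);              /* block joins the trace */
          /* follow the more frequently taken outgoing edge */
          int next = succ[b][0], alt = succ[b][1];
          if (alt != -1 && count[b][1] > count[b][0]) next = alt;
          b = next;
      }
      printf("\n");                       /* the trace is scheduled as one straight-line
                                             region; fixup code goes on off-trace edges */
      return 0;
  }

Running this prints "trace: B0 B1 B3 B4", the hot path through the toy graph.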

Problems with "Classic" VLIW
- Object-code compatibility
  - Have to recompile all code for every machine, even for two machines in the same generation
- Object code size
  - Instruction padding wastes instruction memory/cache
  - Loop unrolling/software pipelining replicates code
- Scheduling variable-latency memory operations
  - Caches and/or memory bank conflicts impose statically unpredictable variability
- Knowing branch probabilities
  - Profiling requires a significant extra step in the build process
- Scheduling for statically unpredictable branches
  - Optimal schedule varies with branch path

VLIW Instruction Encoding
(Figure: instruction stream divided into parallel groups: Group 1, Group 2, Group 3.)
Schemes to reduce the effect of unused fields:
- Compressed format in memory, expand on I-cache refill
  - Used in Multiflow Trace
  - Introduces instruction addressing challenge
- Mark parallel groups
  - Used in TMS320C6x DSPs, Intel IA-64
- Provide a single-op VLIW instruction
  - Cydra-5 UniOp instructions

Rotating Register Files
Problems: scheduled loops require lots of registers, and there is lots of duplicated code in the prolog and epilog.
(Figure: the software-pipelined body
   ld r1, (); add r2, r1, #1; st r2, ()
 replicated across the prolog, the loop, and the epilog.)
Solution: allocate a new set of registers for each loop iteration.

Rotating Register File
(Figure: physical registers P0-P7 with RRB=3; logical specifier R1 is added to RRB to select a physical register.)
- Rotating Register Base (RRB) register points to the base of the current register set. Its value is added to the logical register specifier to give the physical register number.
- Usually split into rotating and non-rotating registers.
- The loop-closing branch (bloop) decrements RRB, so each iteration of the software-pipelined loop (ld r1, (); add r2, r1, #1; add r3, r2, #1; st r4, ()) writes a fresh set of physical registers, shrinking the prolog and epilog code.
(A small sketch of the renaming arithmetic follows.)
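A minimal sketch of the renaming arithmetic described above, assuming a rotating file of NROT registers; the constant and function names are mine, not from the lecture:

  #include <stdio.h>

  #define NROT 8               /* size of the rotating portion of the register file */

  /* Map a logical rotating-register specifier to a physical register,
     given the current Rotating Register Base. */
  int phys_reg(int logical, int rrb) {
      return (logical + rrb) % NROT;
  }

  int main(void) {
      int rrb = 3;                                  /* e.g., RRB = 3 as in the figure */
      printf("R1 -> P%d\n", phys_reg(1, rrb));      /* R1 -> P4 */
      rrb = (rrb - 1 + NROT) % NROT;                /* loop-closing branch decrements RRB */
      printf("R1 -> P%d\n", phys_reg(1, rrb));      /* next iteration: R1 -> P3 */
      return 0;
  }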

Rotating Register File (Previous Loop Example)
- Three-cycle load latency encoded as a difference of 3 in the register specifier number (f4 - f1 = 3)
- Four-cycle fadd latency encoded as a difference of 4 in the register specifier number (f9 - f5 = 4)
Software-pipelined kernel (one VLIW instruction):
  ld f1, ()  |  fadd f5, f4, ...  |  sd f9, ()  |  bloop
As RRB decrements each iteration, the same kernel maps onto successive physical registers, e.g.:
  RRB=8: ld P9, ()   fadd P13, P12, ...   sd P17, ()   bloop
  RRB=7: ld P8, ()   fadd P12, P11, ...   sd P16, ()   bloop
  ...
  RRB=1: ld P2, ()   fadd P6, P5, ...     sd P10, ()   bloop

Cydra-5: Memory Latency Register (MLR)
Problem: loads have variable latency
Solution: let software choose the desired memory latency
- Compiler schedules code for maximum load-use distance
- Software sets MLR to the latency that matches the code schedule
- Hardware ensures that loads take exactly MLR cycles to return values into the processor pipeline
  - Hardware buffers loads that return early
  - Hardware stalls the processor if loads return late

CS152 Administrivia
- New shifted schedule - see website for details
- Lab 4, PS 4, due Tuesday April 8
- PRIZE (TBD) for winners in both unlimited and realistic categories of branch predictor contest
- Quiz 4, Thursday April 10
- Quiz 5, Thursday April 24
- Quiz 6, Thursday May 8 (last day of class)

Intel EPIC IA-64
- EPIC is the style of architecture (cf. CISC, RISC)
  - Explicitly Parallel Instruction Computing
- IA-64 is Intel's chosen ISA (cf. x86, MIPS)
  - IA-64 = Intel Architecture 64-bit
  - An object-code-compatible VLIW
- Itanium (aka Merced) is the first implementation (cf. 8086)
  - First customer shipment expected 1997 (actually 2001)
- McKinley, second implementation, shipped in 2002
- Recent version: Tukwila (2008), quad-core, 65nm

Quad-Core Itanium "Tukwila" [Intel 2008]
- 4 cores
- 6MB cache/core, 24MB cache total
- ~2.0 GHz
- 698 mm^2 in 65nm CMOS
- 170W
- Over 2 billion transistors

IA-64 Instruction Format
- 128-bit instruction bundle with a template field
- Template bits describe grouping of these instructions with others in adjacent bundles
- Each group contains instructions that can execute in parallel
(Figure: a stream of bundles j-1, j, j+1, j+2 carved into parallel groups i-1, i, i+1, i+2 that can span bundle boundaries.)
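For concreteness, the published IA-64 encoding packs three 41-bit instruction slots and a 5-bit template into each 128-bit bundle; the sketch below follows those bit positions, but the struct and function names are illustrative only:

  #include <stdint.h>

  /* One 128-bit IA-64 bundle: 5-bit template + three 41-bit instruction slots. */
  typedef struct {
      uint64_t lo;   /* bits 0..63 of the bundle   */
      uint64_t hi;   /* bits 64..127 of the bundle */
  } Bundle;

  static inline unsigned template_bits(Bundle b) {
      return (unsigned)(b.lo & 0x1F);                              /* bits 0..4   */
  }
  static inline uint64_t slot0(Bundle b) {
      return (b.lo >> 5) & ((1ULL << 41) - 1);                     /* bits 5..45  */
  }
  static inline uint64_t slot1(Bundle b) {
      return ((b.lo >> 46) | (b.hi << 18)) & ((1ULL << 41) - 1);   /* bits 46..86 */
  }
  static inline uint64_t slot2(Bundle b) {
      return (b.hi >> 23) & ((1ULL << 41) - 1);                    /* bits 87..127 */
  }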

IA-64 Registers
- 128 general-purpose 64-bit integer registers
- 128 general-purpose 64/80-bit floating-point registers
- 64 1-bit predicate registers
- GPRs rotate to reduce code size for software-pipelined loops

IA-64 Predicated Execution
Problem: mispredicted branches limit ILP
Solution: eliminate hard-to-predict branches with predicated execution
- Almost all IA-64 instructions can be executed conditionally under a predicate
- Instruction becomes a NOP if the predicate register is false
Before (four basic blocks forming an if/then/else):
  b0: Inst 1
      Inst 2
      br a==b, b2
  b1: Inst 3
      Inst 4
      br b3
  b2: Inst 5
      Inst 6
  b3: Inst 7
      Inst 8
After predication (one basic block):
      Inst 1
      Inst 2
      p1,p2 <- cmp(a==b)
      (p1) Inst 3 || (p2) Inst 5
      (p1) Inst 4 || (p2) Inst 6
      Inst 7
      Inst 8
Mahlke et al., ISCA'95: on average >50% of branches removed
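As a source-level illustration of the same if-conversion idea (my own example, not from the slide): both sides of a short branch are computed and a condition selects the result, removing the branch entirely.

  /* Branching version: the compare feeds a hard-to-predict branch. */
  int abs_diff_branch(int a, int b) {
      int d;
      if (a > b)
          d = a - b;
      else
          d = b - a;
      return d;
  }

  /* If-converted version: on IA-64 the two assignments would be guarded by
     complementary predicates; in plain C this typically compiles to a
     conditional move rather than a branch. */
  int abs_diff_predicated(int a, int b) {
      int p  = (a > b);          /* predicate */
      int d1 = a - b;            /* "then" value */
      int d2 = b - a;            /* "else" value */
      return p ? d1 : d2;        /* select the result */
  }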

Predicate Software Pipeline Stages
Single VLIW instruction (the whole kernel):
  (p1) ld r1  |  (p2) add r3  |  (p3) st r4  |  (p1) bloop
Dynamic execution: the rotating predicate registers turn the stages on one at a time, so the kernel first performs only the (p1) loads, then loads plus (p2) adds, then the full ld/add/st pattern, and drains symmetrically at the end.
Software pipeline stages turned on by rotating predicate registers => much denser encoding of loops

Fully Bypassed Datapath
(Figure: the classic five-stage fully bypassed pipeline datapath - PC, instruction memory, GPRs, ALU, data memory, bypass muxes ASrc/BSrc, and stall logic.)
Where does predication fit in?

IA-64 Speculative Execution
Problem: branches restrict compiler code motion
Solution: speculative operations that don't cause exceptions
Without speculation:
  Inst 1
  Inst 2
  br a==b, b2
  Load r1
  Use r1
  Inst 3
Can't move the load above the branch because it might cause a spurious exception.
With a speculative load:
  Load.s r1
  Inst 1
  Inst 2
  br a==b, b2
  Chk.s r1
  Use r1
  Inst 3
- Speculative load never causes an exception, but sets a "poison" bit on the destination register
- Check for exception in the original home block; jump to fixup code if an exception is detected
- Particularly useful for scheduling long-latency loads early
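A source-level picture of the problem (example mine, with schematic assembly in the comment): hoisting a load above its guarding branch can fault on a path where it would never have executed, which is exactly what the ld.s/chk.s pair avoids.

  /* Original: the load of *p only happens when p is non-NULL. */
  int safe_read(int *p) {
      int x = 0;
      if (p != NULL)
          x = *p;            /* the compiler would like to start this load earlier */
      return x;
  }

  /* Naive hoisting of *p above the NULL test is illegal: it faults when p == NULL.
     IA-64 control speculation expresses the hoist safely (schematic):
         ld.s  r1 = [p]      // speculative load: faults deferred, poison bit set
         ...                 // other useful work scheduled here
         cmp/branch on p
         chk.s r1, fixup     // in the home block: branch to recovery code if poisoned
  */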

IA-64 Data Speculation
Problem: possible memory hazards limit code scheduling
Solution: hardware to check pointer hazards
Without speculation:
  Inst 1
  Inst 2
  Store
  Load r1
  Use r1
  Inst 3
Can't move the load above the store because the store might be to the same address.
With an advanced (data-speculative) load:
  Load.a r1
  Inst 1
  Inst 2
  Store
  Load.c
  Use r1
  Inst 3
- Data-speculative load adds its address to an address check table
- A store invalidates any matching loads in the address check table
- Check: if the load is invalid (or missing), jump to fixup code
- Requires associative hardware in the address check table
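A source-level view of the hazard (example mine, with schematic assembly in the comment): the compiler cannot reorder the load above the store unless it can prove the pointers never alias.

  /* If p and q may alias, the load of *q must stay after the store to *p. */
  void update(int *p, int *q, int *out) {
      *p = 42;           /* store */
      int x = *q;        /* load: profitable to start earlier, but if q == p
                            it must observe the 42 just stored */
      *out = x + 1;
  }

  /* With IA-64 data speculation the load is hoisted anyway (schematic):
         ld.a  r1 = [q]      // advanced load: address entered in the check table (ALAT)
         ...                 // other work, including the store to [p]
         ld.c  r1 = [q]      // check load: re-executes only if an intervening store aliased [q]
  */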

Clustered VLIW
- Divide machine into clusters of local register files and local functional units
- Lower bandwidth/higher latency interconnect between clusters
- Software responsible for mapping computations to minimize communication overhead
- Common in commercial embedded VLIW processors, e.g., TI C6x DSPs, HP Lx processor
  - (Same idea used in some superscalar processors, e.g., Alpha 21264)
(Figure: clusters, each with a local regfile, joined by a cluster interconnect; a memory interconnect connects them to cache/memory banks.)

Limits of Static Scheduling
- Unpredictable branches
- Variable memory latency (unpredictable cache misses)
- Code size explosion
- Compiler complexity

Acknowledgements
These slides contain material developed and copyright by:
- Arvind (MIT)
- Krste Asanovic (MIT/UCB)
- Joel Emer (Intel/MIT)
- James Hoe (CMU)
- John Kubiatowicz (UCB)
- David Patterson (UCB)
MIT material derived from course 6.823
UCB material derived from course CS252