EECS 252 Graduate Computer Architecture Lecture 2 0 Review of Instruction Sets, Pipelines, and Caches January 24th, 2011 John Kubiatowicz Electrical Engineering and Computer Sciences University of California, Berkeley http://www.eecs.berkeley.edu/~kubitron/cs252 CS252 S05

Review: Moore’s Law. “Cramming More Components onto Integrated Circuits,” Gordon Moore, Electronics, 1965: the number of transistors on a cost-effective integrated circuit doubles every 18 months. 1/24/2011 CS252-S11, Lecture 02

Review: Limiting Forces. Chip density continues to increase ~2x every 2 years, but clock speed does not; the number of processors/chip (cores) may double instead. There is little or no more Instruction Level Parallelism (ILP) to be found, so we can no longer allow the programmer to think in terms of a serial programming model. Conclusion: parallelism must be exposed to software! Source: Intel, Microsoft (Sutter) and Stanford (Olukotun, Hammond)

Examples of MIMD Machines
- Symmetric Multiprocessor: multiple processors in a box with shared-memory communication; current multicore chips are like this; every processor runs a copy of the OS.
- Non-uniform shared memory with separate I/O through a host: multiple processors, each with local memory, connected by a general scalable network; an extremely light “OS” on each node provides simple services (scheduling/synchronization); a network-accessible host handles I/O.
- Cluster: many independent machines connected with a general network; communication through messages.
(Diagrams: bus-based processors sharing memory; P/M nodes and a host on a network.)

Categories of Thread Execution
(Figure: execution slots over time, one processor cycle per row, for five configurations: Superscalar, Fine-Grained multithreading, Coarse-Grained multithreading, Simultaneous Multithreading, and Multiprocessing; colors distinguish Threads 1-5 and idle slots.)

“Bell’s Law”: a new computer class per decade. (Plot: log(people per computer) vs. year, spanning number crunching, data storage, productivity, interactive use, and streaming information to/from the physical world.) Looking at the computing spectrum today, each of these classes exists simultaneously: in a room we can put 100s of teraflops and many petabytes of computing; our productivity, document preparation, and personal information management fits in our pocket; and a new class is emerging that will provide a means of streaming information to and from the physical world like we have never seen before. Each new class is enabled by technological opportunities (smaller, more numerous, and more intimately connected devices), brings in a new kind of application, and is used in many ways not previously imagined.

Today: Quick review of everything you should have learned*
    * A countably-infinite set of computer architecture concepts

Metrics used to Compare Designs
- Cost: die cost and system cost
- Execution time: average and worst-case; latency vs. throughput
- Energy and power: also peak power and peak switching current
- Reliability: resiliency to electrical noise, part failure; robustness to bad software, operator error
- Maintainability: system administration costs
- Compatibility: software costs dominate

What is Performance?
Latency (or response time or execution time): time to complete one task.
Bandwidth (or throughput): tasks completed per unit time.

Definition: Performance
Performance is in units of things per second; bigger is better. If we are primarily concerned with response time:
    performance(X) = 1 / execution_time(X)
“X is n times faster than Y” means:
    n = performance(X) / performance(Y) = execution_time(Y) / execution_time(X)
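The definition above is easy to sanity-check in code; this small Python sketch (mine, not from the slides) encodes it directly:

```python
def performance(execution_time):
    # Performance is the reciprocal of execution time ("bigger is better").
    return 1.0 / execution_time

def times_faster(time_x, time_y):
    # "X is n times faster than Y": n = perf(X)/perf(Y) = time(Y)/time(X).
    return time_y / time_x

# Example: X finishes in 2 s, Y in 8 s -> X is 4.0 times faster.
print(times_faster(2.0, 8.0))  # 4.0
```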

Performance: What to Measure
- Usually rely on benchmarks vs. real workloads. To increase predictability, collections of benchmark applications (benchmark suites) are popular.
- SPECCPU: popular desktop benchmark suite; CPU only, split between integer and floating-point programs. SPECint2000 has 12 integer programs, SPECfp2000 has 14 floating-point programs; SPECCPU2006 to be announced Spring 2006.
- SPECSFS (NFS file server) and SPECWeb (web server) added as server benchmarks.
- Transaction Processing Council measures server performance and cost-performance for databases: TPC-C, complex queries for online transaction processing; TPC-H, ad hoc decision support; TPC-W, a transactional web benchmark; TPC-App, an application server and web services benchmark.

Summarizing Performance
    System   Rate (Task 1)   Rate (Task 2)
    A        10              20
    B        20              10
Which system is faster?

… depends who’s selling
Average throughput:
    System   Rate (Task 1)   Rate (Task 2)   Average
    A        10              20              15
    B        20              10              15
Throughput relative to B:
    System   Rate (Task 1)   Rate (Task 2)   Average
    A        0.50            2.00            1.25
    B        1.00            1.00            1.00
Throughput relative to A:
    System   Rate (Task 1)   Rate (Task 2)   Average
    A        1.00            1.00            1.00
    B        2.00            0.50            1.25
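The flip-flop in the normalized averages is mechanical; this sketch uses the slide’s rates (the helper function is mine):

```python
# Task rates from the slide: higher is better.
rates = {"A": [10.0, 20.0], "B": [20.0, 10.0]}

def avg_relative(machine, reference):
    # Arithmetic mean of per-task throughput ratios vs. a reference machine.
    ratios = [m / r for m, r in zip(rates[machine], rates[reference])]
    return sum(ratios) / len(ratios)

# Each machine looks 25% faster when the *other* one is the baseline.
print(avg_relative("A", "B"))  # 1.25
print(avg_relative("B", "A"))  # 1.25
```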

Summarizing Performance over a Set of Benchmark Programs
Arithmetic mean of execution times t_i (in seconds):
    (1/n) x SUM_i t_i
Can be weighted by how frequently each program runs in the real workload, with weights w_i:
    SUM_i (w_i x t_i)

Normalized Execution Time and Geometric Mean
Normalize each program’s execution time to a reference machine, then summarize the ratios r_i with the geometric mean:
    (PROD_i r_i)^(1/n)
The geometric mean of ratios equals the ratio of geometric means, so the comparison is independent of the choice of reference machine.
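One reason the geometric mean is used for normalized ratios is that the comparison does not depend on which machine you normalize to. A sketch with made-up execution times (the numbers are hypothetical, not from the slides):

```python
import math

def geomean(xs):
    # Geometric mean: nth root of the product of n positive values.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

times_a = [10.0, 5.0]   # hypothetical execution times on machine A
times_b = [20.0, 2.5]   # hypothetical execution times on machine B

# Normalize to A or to B: the geometric-mean comparison is consistent.
rel_to_a = geomean([b / a for a, b in zip(times_a, times_b)])  # B's times / A's
rel_to_b = geomean([a / b for a, b in zip(times_a, times_b)])  # A's times / B's
print(rel_to_a, rel_to_b)  # each is the reciprocal of the other
```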

Vector/Superscalar Speedup
(Figure: speedup of a 100 MHz Cray J90 vector machine versus a 300 MHz Alpha 21164 on LANL computational physics codes [Wasserman, ICS’96].) The vector machine peaks on a few codes?

Superscalar/Vector Speedup
(Same comparison: 100 MHz Cray J90 vector machine versus 300 MHz Alpha 21164 [LANL Computational Physics Codes, Wasserman, ICS’96].) The scalar machine peaks on one code?

How to Mislead with Performance Reports
- Select pieces of the workload that work well on your design; ignore the others
- Use unrealistic data set sizes for the application (too big or too small)
- Report throughput numbers for a latency benchmark, or latency numbers for a throughput benchmark
- Report performance on a kernel and claim it represents an entire application
- Use 16-bit fixed-point arithmetic (because it’s fastest on your system) even though the application requires 64-bit floating-point arithmetic
- Use a less efficient algorithm on the competing machine
- Report speedup for an inefficient algorithm (bubblesort)
- Compare hand-optimized assembly code with unoptimized C code
- Compare your design using next year’s technology against a competitor’s year-old design (1% performance improvement per week)
- Ignore the relative cost of the systems being compared
- Report averages and not individual results
- Report speedup over an unspecified base system, not absolute times
- Report efficiency, not absolute times
- Report MFLOPS, not absolute times (use an inefficient algorithm)
[David Bailey, “Twelve ways to fool the masses when giving performance results for parallel supercomputers”]

CS 252 Administrivia
Sign up! Web site is: http://www.cs.berkeley.edu/~kubitron/cs252
Review: Chapter 1, Appendix A, B, C; the CS 152 home page; maybe “Computer Organization and Design (COD) 2/e.” If you did take such a class, be sure COD Chapters 2, 5, 6, 7 are familiar. Copies are in the Bechtel Library on 2-hour reserve.
Resources for the course on the web site:
- Check out the ISCA (International Symposium on Computer Architecture) 25th-year retrospective on the web site; look for “Additional reading” below the textbook description
- Pointers to previous CS152 exams and resources
- Lots of old CS252 material
- Interesting links; check out the WWW Computer Architecture Home Page

CS 252 Administrivia
First two readings are up (look on the Lecture page). Read each assignment carefully, since the requirements vary about what you need to turn in. Submit results to the website before class (there will be a link up on the handouts page). You can have 5 total late days on assignments; 10% penalty per day afterwards. Save late days!

Amdahl’s Law
    Speedup_overall = 1 / ((1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)
Best you could ever hope to do (as Speedup_enhanced goes to infinity):
    Speedup_maximum = 1 / (1 - Fraction_enhanced)

Amdahl’s Law example
New CPU is 10X faster, but this is an I/O-bound server, so 60% of time is spent waiting for I/O:
    Speedup = 1 / ((1 - 0.4) + 0.4/10) = 1 / 0.64 ≈ 1.56
Apparently, it’s human nature to be attracted by “10X faster,” vs. keeping in perspective that it’s just 1.6X faster.
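The slide’s arithmetic as a reusable helper, a minimal sketch (function name is mine):

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    # Overall speedup when only a fraction of execution time is accelerated.
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Slide example: CPU is 10x faster, but 60% of time is I/O wait,
# so only 40% of execution benefits.
print(round(amdahl_speedup(0.4, 10.0), 2))  # 1.56
```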

Computer Performance
    CPU time = Seconds / Program = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)
What each level affects:
                   Inst Count   CPI   Clock Rate
    Program            X         X
    Compiler           X        (X)
    Inst. Set          X         X
    Organization                 X        X
    Technology                            X

Cycles Per Instruction (Throughput)
“Average cycles per instruction”:
    CPI = (CPU Time x Clock Rate) / Instruction Count = Cycles / Instruction Count
    CPU Time = Cycle Time x SUM_j (CPI_j x I_j), where I_j = number of instructions of type j
“Instruction frequency”:
    CPI = SUM_j (CPI_j x F_j), where F_j = I_j / Instruction Count

Example: Calculating CPI bottom up
Run benchmark and collect workload characterization (simulate, machine counters, or sampling).
    Base Machine (Reg / Reg)
    Op       Freq   Cycles   CPI(i)   (% Time)
    ALU      50%    1        0.5      (33%)
    Load     20%    2        0.4      (27%)
    Store    10%    2        0.2      (13%)
    Branch   20%    2        0.4      (27%)
                    Total CPI: 1.5
Typical mix of instruction types in a program. Design guideline: make the common case fast. MIPS 1% rule: only consider adding an instruction if it is shown to add 1% performance improvement on reasonable benchmarks.
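The table’s bottom-up CPI in code (the mix is the slide’s; the representation is mine):

```python
# Bottom-up CPI from the instruction mix: (frequency, cycles) per class.
mix = {"ALU": (0.50, 1), "Load": (0.20, 2), "Store": (0.10, 2), "Branch": (0.20, 2)}

cpi = sum(freq * cycles for freq, cycles in mix.values())
print(round(cpi, 2))  # 1.5

# Fraction of execution time spent in each class: freq * cycles / CPI.
time_share = {op: freq * cycles / cpi for op, (freq, cycles) in mix.items()}
print(round(time_share["ALU"], 2))  # 0.33
```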

Power and Energy
- Energy to complete operation (Joules): corresponds approximately to battery life (battery energy capacity actually depends on rate of discharge)
- Peak power dissipation (Watts = Joules/second): affects packaging (power and ground pins, thermal design)
- di/dt, peak change in supply current (Amps/second): affects power supply noise (power and ground pins, decoupling capacitors)

Peak Power versus Lower Energy
(Figure: power vs. time curves for two systems; integrate the power curve to get energy.) System A has higher peak power, but lower total energy; System B has lower peak power, but higher total energy.

ISA Implementation Review

A “Typical” RISC ISA
- 32-bit fixed-format instructions (3 formats)
- 32 32-bit GPRs (R0 contains zero; DP values take a register pair)
- 3-address, reg-reg arithmetic instructions
- Single address mode for load/store: base + displacement (no indirection)
- Simple branch conditions
- Delayed branch
See: SPARC, MIPS, HP PA-RISC, DEC Alpha, IBM PowerPC, CDC 6600, CDC 7600, Cray-1, Cray-2, Cray-3

Example: MIPS Instruction Formats
    Register-Register:  Op (31:26) | Rs1 (25:21) | Rs2 (20:16) | Rd (15:11) | Opx (10:6)
    Register-Immediate: Op (31:26) | Rs1 (25:21) | Rd (20:16) | immediate (15:0)
    Branch:             Op (31:26) | Rs1 (25:21) | Rs2/Opx (20:16) | immediate (15:0)
    Jump / Call:        Op (31:26) | target (25:0)

Datapath vs Control
(Diagram: the controller drives control points on the datapath; the datapath returns signals.)
- Datapath: storage, functional units, and interconnect sufficient to perform the desired functions; inputs are control points, outputs are signals
- Controller: state machine to orchestrate operation on the datapath, based on the desired function and the signals

Simple Pipelining Review

5 Steps of MIPS Datapath: Data Stationary Control
Stages: Instruction Fetch, Instr. Decode / Reg. Fetch, Execute / Addr. Calc, Memory Access, Write Back, separated by pipeline registers IF/ID, ID/EX, EX/MEM, MEM/WB.
(Datapath figure: PC and next-sequential-PC adder with mux, instruction memory, register file reading RS1/RS2, sign-extended immediate, ALU with zero detect, data memory, write-back mux delivering the RD result.)
Register transfers per stage:
    IF:  IR <= mem[PC]; PC <= PC + 4
    ID:  A <= Reg[IR.rs]; B <= Reg[IR.rt]
    EX:  rslt <= A op(IR.op) B
    MEM: WB <= rslt
    WB:  Reg[IR.rd] <= WB
Data stationary control: local decode for each instruction phase / pipeline stage.

Visualizing Pipelining (Figure A.2, Page A-8)
(Figure: time in clock cycles on the horizontal axis, instruction order on the vertical axis; each instruction flows through Ifetch, Reg, ALU, DMem, Reg across cycles 1-7, with one new instruction starting per cycle.)
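The figure’s staircase is easy to regenerate; a small sketch (stage abbreviations are mine, not from the slides):

```python
# Print the classic pipeline diagram: one instruction enters the
# 5-stage pipe per clock cycle, shifted one column per instruction.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(n_instructions):
    rows = []
    for i in range(n_instructions):
        cells = [""] * i + STAGES            # instruction i starts in cycle i+1
        rows.append(" ".join(f"{c:>4}" for c in cells).rstrip())
    return "\n".join(rows)

print(pipeline_diagram(3))
```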

Pipelining is not quite that easy!
Limits to pipelining: hazards prevent the next instruction from executing during its designated clock cycle.
- Structural hazards: HW cannot support this combination of instructions (a single person to fold and put clothes away)
- Data hazards: instruction depends on the result of a prior instruction still in the pipeline (missing sock)
- Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps)

One Memory Port / Structural Hazards (Figure A.4, Page A-14)
(Figure: Load followed by Instr 1-4 flowing through Ifetch, Reg, ALU, DMem, Reg. With a single memory port, Instr 3’s instruction fetch in cycle 4 conflicts with the Load’s data-memory access in the same cycle.)

One Memory Port / Structural Hazards (similar to Figure A.5, Page A-15)
(Figure: the conflict is resolved by stalling: Instr 3’s fetch is delayed one cycle, introducing a bubble into the pipeline.) How do you “bubble” the pipe?

Speed Up Equation for Pipelining
    CPI_pipelined = Ideal CPI + Average stall cycles per instruction
For the simple RISC pipeline, ideal CPI = 1:
    Speedup = (Pipeline depth / (1 + Pipeline stall CPI)) x (Cycle Time_unpipelined / Cycle Time_pipelined)

Example: Dual-port vs. Single-port
- Machine A: dual-ported memory (“Harvard architecture”)
- Machine B: single-ported memory, but its pipelined implementation has a 1.05 times faster clock rate
- Ideal CPI = 1 for both; loads are 40% of instructions executed
    SpeedupA = Pipeline depth / (1 + 0) x (clock_unpipe / clock_pipe) = Pipeline depth
    SpeedupB = Pipeline depth / (1 + 0.4 x 1) x (clock_unpipe / (clock_unpipe / 1.05))
             = (Pipeline depth / 1.4) x 1.05 = 0.75 x Pipeline depth
    SpeedupA / SpeedupB = Pipeline depth / (0.75 x Pipeline depth) = 1.33
Machine A is 1.33 times faster.
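The same arithmetic as a reusable helper, a sketch (the depth of 5 is an assumption and cancels out of the final ratio):

```python
def pipeline_speedup(depth, stall_cpi, clock_ratio=1.0):
    # Speedup = depth / (1 + stall CPI) * (relative clock-rate improvement).
    return depth / (1.0 + stall_cpi) * clock_ratio

depth = 5
speedup_a = pipeline_speedup(depth, 0.0)        # dual-ported memory: no stalls
speedup_b = pipeline_speedup(depth, 0.4, 1.05)  # 40% loads stall 1 cycle, 5% faster clock
print(round(speedup_a / speedup_b, 2))  # 1.33
```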

Data Hazard on R1
    add r1,r2,r3
    sub r4,r1,r3
    and r6,r1,r7
    or  r8,r1,r9
    xor r10,r1,r11
(Figure: time in clock cycles across IF, ID/RF, EX, MEM, WB; each following instruction reads r1 before the add writes it back in WB.)

Three Generic Data Hazards
Read After Write (RAW): InstrJ tries to read an operand before InstrI writes it.
    I: add r1,r2,r3
    J: sub r4,r1,r3
Caused by a “dependence” (in compiler nomenclature); this hazard results from an actual need for communication.

Three Generic Data Hazards
Write After Read (WAR): InstrJ writes an operand before InstrI reads it.
    I: sub r4,r1,r3
    J: add r1,r2,r3
    K: mul r6,r1,r7
Called an “anti-dependence” by compiler writers; it results from reuse of the name “r1”. Can’t happen in the MIPS 5-stage pipeline because all instructions take 5 stages, reads are always in stage 2, and writes are always in stage 5.

Three Generic Data Hazards
Write After Write (WAW): InstrJ writes an operand before InstrI writes it.
    I: sub r1,r4,r3
    J: add r1,r2,r3
    K: mul r6,r1,r7
Called an “output dependence” by compiler writers; it also results from reuse of the name “r1”. Can’t happen in the MIPS 5-stage pipeline because all instructions take 5 stages and writes are always in stage 5. We will see WAR and WAW in more complicated pipes.
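All three hazard classes fall out of the read/write register sets of the two instructions; a sketch (the representation and function name are mine, not from the slides):

```python
# Classify hazards between an earlier instruction I and a later instruction J,
# given the sets of registers each reads and writes.
def hazards(i_reads, i_writes, j_reads, j_writes):
    found = set()
    if i_writes & j_reads:
        found.add("RAW")   # J reads what I writes: true dependence
    if i_reads & j_writes:
        found.add("WAR")   # J writes what I reads: anti-dependence
    if i_writes & j_writes:
        found.add("WAW")   # both write the same name: output dependence
    return found

# I: add r1,r2,r3   J: sub r4,r1,r3  -> RAW on r1
print(hazards({"r2", "r3"}, {"r1"}, {"r1", "r3"}, {"r4"}))  # {'RAW'}
```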

Forwarding to Avoid Data Hazard
    add r1,r2,r3
    sub r4,r1,r3
    and r6,r1,r7
    or  r8,r1,r9
    xor r10,r1,r11
(Figure: forwarding paths deliver the add’s ALU result directly to the ALU inputs of the following instructions, so the sequence runs without stalls.)

HW Change for Forwarding
(Figure: muxes at the ALU inputs select among the ID/EX register values, the immediate, the EX/MEM result, and the MEM/WR result; the NextPC mux and data memory are unchanged.)
What circuit detects and resolves this hazard?

Forwarding to Avoid LW-SW Data Hazard
    add r1,r2,r3
    lw  r4, 0(r1)
    sw  r4,12(r1)
    or  r8,r6,r9
    xor r10,r9,r11
(Figure: r1 is forwarded from the add to the lw and sw address calculations, and the loaded r4 is forwarded from the lw’s memory stage to the sw’s memory stage.)

Data Hazard Even with Forwarding
    lw  r1, 0(r2)
    sub r4,r1,r6
    and r6,r1,r7
    or  r8,r1,r9
(Figure: the loaded value is not available until the end of the lw’s MEM stage, too late to forward to the sub’s ALU stage.)
MIPS actually didn’t interlock on this hazard: MIPS = Microprocessor without Interlocked Pipeline Stages.

Data Hazard Even with Forwarding
(Figure: the hardware inserts a one-cycle bubble: sub, and, and or each begin one cycle later, after which the loaded r1 can be forwarded in time.)

Software Scheduling to Avoid Load Hazards
Try producing fast code for
    a = b + c;
    d = e - f;
assuming a, b, c, d, e, and f are in memory.
Slow code:
    LW  Rb,b
    LW  Rc,c
    ADD Ra,Rb,Rc
    SW  a,Ra
    LW  Re,e
    LW  Rf,f
    SUB Rd,Re,Rf
    SW  d,Rd
Fast code:
    LW  Rb,b
    LW  Rc,c
    LW  Re,e
    ADD Ra,Rb,Rc
    LW  Rf,f
    SW  a,Ra
    SUB Rd,Re,Rf
    SW  d,Rd
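Under the assumption that a loaded register cannot be consumed by the immediately following instruction without a one-cycle stall, the two schedules can be compared mechanically (the tuple encoding is mine, not from the slides):

```python
# Count load-use stalls: a stall occurs when an LW's destination register
# is read by the very next instruction.
def load_use_stalls(code):
    stalls = 0
    for prev, cur in zip(code, code[1:]):
        op, dest, *_ = prev
        if op == "LW" and dest in cur[2:]:   # next instruction reads the load target
            stalls += 1
    return stalls

# (op, dest, src...) tuples for the slide's two schedules.
slow = [("LW","Rb","b"), ("LW","Rc","c"), ("ADD","Ra","Rb","Rc"), ("SW","a","Ra"),
        ("LW","Re","e"), ("LW","Rf","f"), ("SUB","Rd","Re","Rf"), ("SW","d","Rd")]
fast = [("LW","Rb","b"), ("LW","Rc","c"), ("LW","Re","e"), ("ADD","Ra","Rb","Rc"),
        ("LW","Rf","f"), ("SW","a","Ra"), ("SUB","Rd","Re","Rf"), ("SW","d","Rd")]

print(load_use_stalls(slow), load_use_stalls(fast))  # 2 0
```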

Control Hazard on Branches: Three-Stage Stall
    10: beq r1,r3,36
    14: and r2,r3,r5
    18: or  r6,r1,r7
    22: add r8,r1,r9
    36: xor r10,r1,r11
(Figure: the branch outcome is not known until late in the pipeline, so the three instructions after the beq have already been fetched.)
What do you do with the 3 instructions in between? How do you do it? Where is the “commit”?

Branch Stall Impact
If CPI = 1 and 30% of instructions are branches that stall 3 cycles, then new CPI = 1 + 0.3 x 3 = 1.9!
Two-part solution: determine branch taken or not sooner, AND compute the taken-branch address earlier. MIPS branch tests whether a register = 0 or != 0. MIPS solution: move the zero test to the ID/RF stage, and add an adder to calculate the new PC in the ID/RF stage. Result: 1 clock cycle penalty for a branch versus 3.
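The CPI arithmetic above in one helper (a minimal sketch, not from the slides):

```python
def cpi_with_branches(base_cpi, branch_frac, penalty):
    # Each branch adds `penalty` stall cycles on top of the base CPI.
    return base_cpi + branch_frac * penalty

print(cpi_with_branches(1.0, 0.30, 3))  # 1.9 with a 3-cycle stall
print(cpi_with_branches(1.0, 0.30, 1))  # 1.3 once the zero test moves to ID/RF
```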

Pipelined MIPS Datapath (Figure A.24, page A-38)
(Figure: the 5-stage datapath with the zero test and an extra branch-target adder moved into the Instr. Decode / Reg. Fetch stage, reducing the branch penalty to 1 cycle.)
Interplay of instruction set design and cycle time.

Four Branch Hazard Alternatives
#1: Stall until the branch direction is clear.
#2: Predict branch not taken: execute successor instructions in sequence, and “squash” the instructions in the pipeline if the branch is actually taken. Advantage of late pipeline state update; 47% of MIPS branches are not taken on average, and PC+4 is already calculated, so use it to get the next instruction.
#3: Predict branch taken: 53% of MIPS branches are taken on average. But MIPS hasn’t calculated the branch target address yet, so it still incurs a 1-cycle branch penalty; other machines have the branch target known before the outcome.

Four Branch Hazard Alternatives (cont.)
#4: Delayed branch: define the branch to take place AFTER a following instruction:
    branch instruction
    sequential successor_1
    sequential successor_2
    ........
    sequential successor_n      <- branch delay of length n
    branch target if taken
A 1-slot delay allows a proper decision and branch target address in the 5-stage pipeline; MIPS uses this.

Scheduling Branch Delay Slots
A. From before the branch:
    add $1,$2,$3
    if $2=0 then
        delay slot
  becomes
    if $2=0 then
        add $1,$2,$3
B. From the branch target:
    sub $4,$5,$6
    ...
    add $1,$2,$3
    if $1=0 then
        delay slot
  becomes
    add $1,$2,$3
    if $1=0 then
        sub $4,$5,$6
C. From fall through:
    add $1,$2,$3
    if $1=0 then
        delay slot
    sub $4,$5,$6
  becomes
    add $1,$2,$3
    if $1=0 then
        sub $4,$5,$6
Limitations on delayed-branch scheduling come from (1) restrictions on the instructions that can be moved/copied into the delay slot and (2) limited ability to predict at compile time whether a branch is likely to be taken or not. A is the best choice: it fills the delay slot and reduces instruction count (IC). In B and C, the use of $1 prevents the add instruction from being moved into the delay slot. In B, the sub may need to be copied because it could be reached by another path, increasing IC; B is preferred when the branch is taken with high probability (such as loop branches). In B and C, it must be okay to execute sub when the branch fails.

Delayed Branch
Compiler effectiveness for a single branch delay slot: fills about 60% of branch delay slots; about 80% of instructions executed in branch delay slots are useful in computation; so about 50% (60% x 80%) of slots are usefully filled.
Delayed branch downside: as processors go to deeper pipelines and multiple issue, the branch delay grows and needs more than one delay slot. Delayed branching has lost popularity compared to more expensive but more flexible dynamic approaches; growth in available transistors has made dynamic approaches relatively cheaper.

Evaluating Branch Alternatives
Assume 4% unconditional branches, 6% conditional branch-untaken, 10% conditional branch-taken.
    Scheduling scheme    Branch penalty   CPI    Speedup v. unpipelined   Speedup v. stall
    Stall pipeline       3                1.60   3.1                      1.0
    Predict taken        1                1.20   4.2                      1.33
    Predict not taken    1                1.14   4.4                      1.40
    Delayed branch       0.5              1.10   4.5                      1.45
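The CPI column can be re-derived from the branch mix; the per-class penalty assignments below are my reading of the table, not stated explicitly in the slides:

```python
# Branch mix from the slide: 4% unconditional, 6% conditional untaken,
# 10% conditional taken; base CPI = 1.
uncond, cond_untaken, cond_taken = 0.04, 0.06, 0.10

def cpi(pen_uncond, pen_untaken, pen_taken):
    return 1 + uncond * pen_uncond + cond_untaken * pen_untaken + cond_taken * pen_taken

print(round(cpi(3, 3, 3), 2))        # 1.6  stall pipeline: every branch pays 3
print(round(cpi(1, 1, 1), 2))        # 1.2  predict taken: every branch pays 1
print(round(cpi(1, 0, 1), 2))        # 1.14 predict not taken: only taken paths pay
print(round(cpi(0.5, 0.5, 0.5), 2))  # 1.1  delayed branch: average penalty 0.5
```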

Problems with Pipelining
Exception: an unusual event happens to an instruction during its execution. Examples: divide by zero, undefined opcode.
Interrupt: a hardware signal to switch the processor to a new instruction stream. Example: a sound card interrupts when it needs more audio output samples (an audio “click” happens if it is left waiting).
Problem: it must appear that the exception or interrupt occurs between 2 instructions (Ii and Ii+1): the effect of all instructions up to and including Ii is totally complete, and no effect of any instruction after Ii can have taken place. The interrupt (exception) handler either aborts the program or restarts at instruction Ii+1.

Precise Exceptions in Static Pipelines
Key observation: architected state only changes in the memory and register write stages.

Summary: Control and Pipelining
- Control via state machines and microprogramming
- Just overlap tasks; easy if tasks are independent
- Speedup <= pipeline depth; if ideal CPI is 1, then:
    Speedup = (Pipeline depth / (1 + Pipeline stall CPI)) x (Cycle Time_unpipelined / Cycle Time_pipelined)
- Hazards limit performance on computers:
    Structural: need more HW resources
    Data (RAW, WAR, WAW): need forwarding, compiler scheduling
    Control: delayed branch, prediction
- Exceptions and interrupts add complexity
Next time: read Appendix A and Appendix C, and record bugs online!