Lecture 5. MIPS Processor Design: Pipelined MIPS #1
Prof. Taeweon Suh, Computer Science & Engineering, Korea University
COSE222, COMP212 Computer Architecture

Processor Performance
The performance of the single-cycle processor is limited by its long critical path delay; the critical path limits the operating clock frequency. Can we do better? New semiconductor technology reduces the critical path delay by manufacturing with smaller transistors: Core 2 Duo used 65nm technology, the 1st-generation Core i7 (Nehalem) 45nm, the 2nd-generation Core i7 (Sandy Bridge) 32nm, the 3rd- and 4th-generation Core i7 (Ivy Bridge, Haswell) 22nm, and the 5th-generation Core i7 (Broadwell) 14nm. Can we also increase processor performance with a different microarchitecture? Yes: pipelining.
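
As a rough illustration (not from the slides, and with made-up delay values), the maximum clock frequency is simply the reciprocal of the critical path delay:

```python
# Rough sketch: the clock period can be no shorter than the critical path delay.
# The 0.8 ns and 0.5 ns figures are illustrative assumptions, not values from the lecture.
def max_clock_frequency_ghz(critical_path_ns: float) -> float:
    return 1.0 / critical_path_ns   # 1 / ns = GHz

print(max_clock_frequency_ghz(0.8))   # 1.25 GHz with a 0.8 ns critical path
print(max_clock_frequency_ghz(0.5))   # 2.0 GHz once smaller transistors shorten the path
```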

Revisiting Performance
Laundry example: Ann, Brian, Cathy, and Dave each have one load of clothes (A, B, C, D) to wash, dry, and fold. The washer takes 30 minutes, the dryer takes 40 minutes, and folding takes 20 minutes.

Sequential Laundry
[Figure: the four loads are done one after another, from 6 PM to midnight.] Response time: 90 minutes per load. Throughput: 0.67 loads per hour (90 minutes per load; 6 hours for 4 loads).

Pipelining Lessons
[Figure: pipelined laundry, the four loads overlapped starting at 6 PM.] Pipelining does not help the latency (response time) of a single task; it helps the throughput of the entire workload, because multiple tasks operate simultaneously. Unbalanced pipeline stage lengths reduce the speedup; the potential speedup equals the number of pipeline stages. Response time: unchanged for a single load. Throughput: 1.14 loads per hour (52.5 minutes per load on average; 3.5 hours for 4 loads).
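
A small sketch reproducing the laundry numbers from the two slides (30/40/20-minute stages, four loads); the pipelined finish time is limited by the slowest stage, the dryer:

```python
# Laundry timing from the slides: washer 30 min, dryer 40 min, folder 20 min, four loads.
stages = [30, 40, 20]            # minutes per stage
loads = 4

sequential = loads * sum(stages)                        # 360 min = 6 hours
pipelined = sum(stages) + (loads - 1) * max(stages)     # 210 min = 3.5 hours

print(sequential / 60, loads / (sequential / 60))       # 6.0 hours, ~0.67 loads per hour
print(pipelined / 60, loads / (pipelined / 60))         # 3.5 hours, ~1.14 loads per hour
```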

Pipelining
Pipelining improves performance by increasing instruction throughput. [Figure: sequential versus pipelined execution of the five steps: instruction fetch (2 ns), register file read (1 ns), ALU operation (2 ns), data access, register file write (1 ns).]
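
A sketch of the comparison in the figure, using the per-stage delays shown there (the data-access delay is assumed to be 2 ns, matching the usual textbook example, since that value did not survive in the transcript):

```python
# Per-stage delays from the slide; the MEM (data access) value of 2 ns is an assumption.
stage_ns = {"IF": 2, "RegRead": 1, "ALU": 2, "MEM": 2, "RegWrite": 1}

single_cycle_period = sum(stage_ns.values())   # 8 ns: sequential execution finishes one instruction per 8 ns
pipelined_period = max(stage_ns.values())      # 2 ns: the clock must fit the slowest stage

print(single_cycle_period, pipelined_period)
# Throughput improves by 8/2 = 4x, not 5x, because the stages are unbalanced.
print(single_cycle_period / pipelined_period)
```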

Pipelining (Cont.)
With pipelining, multiple instructions are executed simultaneously. Pipeline speedup: if all stages are balanced (meaning each stage takes the same amount of time), then

  Time to execute an instruction (pipelined) = Time to execute an instruction (sequential) / Number of stages

If the stages are not balanced, the speedup is less. The speedup comes from increased throughput; the latency of an individual instruction does not decrease.
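
A sketch of where the speedup comes from: over a long instruction stream the pipeline fill time is amortized, so the throughput speedup approaches the number of stages (assuming ideal, balanced stages and no hazards; the numbers are illustrative):

```python
# Idealized model: balanced stages, no hazards. Values are illustrative.
def sequential_ns(n_instr: int, n_stages: int, stage_ns: float) -> float:
    return n_instr * n_stages * stage_ns            # each instruction runs start to finish alone

def pipelined_ns(n_instr: int, n_stages: int, stage_ns: float) -> float:
    return (n_stages + n_instr - 1) * stage_ns      # fill the pipe once, then one instruction per cycle

for n in (4, 1000, 1_000_000):
    print(n, sequential_ns(n, 5, 2) / pipelined_ns(n, 5, 2))   # speedup approaches 5
```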

Pipelining and ISA Design
The MIPS ISA is designed with pipelining in mind. All instructions are 32 bits (4 bytes), compared with x86 (CISC), whose instructions range from 1 to 17 bytes. The regular instruction formats make it possible to decode and read registers in one cycle. Load/store addressing calculates the address in the 3rd stage and accesses memory in the 4th stage. Memory operands are aligned in memory, so a memory access takes only one cycle; for example, 32-bit data (a word) is aligned at word addresses: 0x…0, 0x…4, 0x…8, 0x…C.
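
A tiny sketch of the alignment rule: a 32-bit word access is aligned when the address is a multiple of 4, i.e., its two low address bits are zero (the example addresses are illustrative):

```python
# Word (32-bit) alignment check for MIPS: aligned addresses end in 0x0, 0x4, 0x8, or 0xC.
def is_word_aligned(addr: int) -> bool:
    return addr & 0x3 == 0          # equivalent to addr % 4 == 0

for addr in (0x10010000, 0x10010004, 0x10010006):
    print(hex(addr), is_word_aligned(addr))   # the last one is misaligned
```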

Basic Idea
What should be done to implement pipelining, i.e., to split the datapath into stages?

Basic Idea
[Figure: the datapath divided into stages by clocked flip-flops (F/F) inserted between the stages.]

Graphically Representing Pipelines
[Figure: pipeline diagram of lw followed by add, each passing through IF, ID, EX, MEM, and WB over time.] Shading indicates that a unit is being used by the instruction. Shading on the right half of the register file (ID or WB) or of memory means the element is being read in that stage; shading on the left half means it is being written in that stage.
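
A small sketch that prints the same kind of diagram in text form: each instruction enters IF one cycle after the previous one and then advances one stage per cycle (no hazards assumed; the instruction names are illustrative):

```python
# Text version of the multi-cycle pipeline diagram.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def diagram(program):
    total_cycles = len(STAGES) + len(program) - 1
    for cycle in range(1, total_cycles + 1):
        cells = []
        for i, instr in enumerate(program):
            stage = cycle - 1 - i                  # instruction i enters IF in cycle i + 1
            cells.append(STAGES[stage] if 0 <= stage < len(STAGES) else "   ")
        print(f"cycle {cycle}: " + "  ".join(f"{instr}:{cell:<3}" for instr, cell in zip(program, cells)))

diagram(["lw", "add"])   # lw occupies IF..WB in cycles 1-5, add in cycles 2-6
```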

Hazards
It would be nice if we could simply split the datapath into stages and the CPU would work just fine, but things are not as simple as you might expect: there are hazards. A hazard is a situation that prevents the next instruction from starting in the next cycle. Structural hazards: a conflict over the use of a resource at the same time. Data hazards: data is not ready for a subsequent dependent instruction. Control hazards: fetching the next instruction depends on the outcome of a previous branch.

Structural Hazards
A structural hazard is a conflict over the use of a resource at the same time. Suppose the MIPS CPU has a single connection (one port) to memory: a load/store requires a data access in the MEM stage, while an instruction fetch requires an instruction access from the same memory. The instruction fetch would have to be stalled for that cycle, causing a pipeline "bubble". Hence, pipelined datapaths require either separate ports to memory or separate memories for instructions and data. [Figure: a MIPS CPU connected to memory over a single address/data bus, versus a configuration with separate address/data buses for instruction and data access.]

Structural Hazards (Cont.)
[Figure: pipeline diagram of lw, add, sub, add; lw's MEM stage falls in the same cycle as the fourth instruction's IF, so both need memory at once.] Either provide separate ports to access memory, or provide separate instruction and data memories.
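
A sketch of why a single memory port causes the conflict: a load or store's MEM access happens in the same cycle as the instruction fetch of the instruction issued three cycles later (assuming one instruction issued per cycle, as in the figure; the opcode list is illustrative):

```python
# With one shared memory port, instruction i's MEM stage overlaps the IF stage of instruction i + 3.
def memory_conflicts(program):
    conflicts = []
    for i, op in enumerate(program):
        if op in ("lw", "sw") and i + 3 < len(program):
            conflicts.append((i, i + 3))    # (memory instruction, instruction whose fetch collides)
    return conflicts

print(memory_conflicts(["lw", "add", "sub", "add"]))   # [(0, 3)]: the fourth fetch collides with lw's MEM
```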

Data Hazards
A data hazard occurs when data is not ready for a subsequent dependent instruction. [Figure: add $s0,$t0,$t1 followed by sub $t2,$s0,$t3; the sub is delayed by bubbles until the add writes $s0 back to the register file.] To solve the data hazard, the pipeline must be stalled (the inserted empty cycles are typically referred to as "bubbles"), which penalizes performance. A better solution? Forwarding (or bypassing).

Forwarding
[Figure: add $s0,$t0,$t1 followed by sub $t2,$s0,$t3; the result of the add is forwarded directly to the sub's EX stage instead of waiting for it to be written back to the register file.]
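
A sketch of the forwarding decision for one ALU source operand, following the standard textbook conditions (the signal names and register numbers here are illustrative, not the lecture's exact names):

```python
# Decide where the EX stage should take its first ALU operand from.
def forward_a(id_ex_rs, ex_mem_regwrite, ex_mem_rd, mem_wb_regwrite, mem_wb_rd):
    # Prefer the most recent result: the instruction currently in MEM (EX/MEM register).
    if ex_mem_regwrite and ex_mem_rd != 0 and ex_mem_rd == id_ex_rs:
        return "EX/MEM"           # e.g. add $s0,$t0,$t1 immediately followed by sub $t2,$s0,$t3
    if mem_wb_regwrite and mem_wb_rd != 0 and mem_wb_rd == id_ex_rs:
        return "MEM/WB"           # result produced two instructions earlier
    return "register file"        # no hazard on this operand

print(forward_a(id_ex_rs=16, ex_mem_regwrite=True, ex_mem_rd=16,
                mem_wb_regwrite=False, mem_wb_rd=0))   # 'EX/MEM' ($s0 is register 16)
```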

Data Hazard: the Load-Use Case
Stalls cannot always be avoided by forwarding; we cannot forward backward in time. A hardware interlock is needed to stall the pipeline. [Figure: lw $s0, 8($t1) followed by sub $t2,$s0,$t3; one bubble is inserted because the loaded value is not available until the end of lw's MEM stage.] This bubble can be hidden by proper instruction scheduling.
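
A sketch of the interlock's check, done while the dependent instruction is being decoded (the standard load-use condition; register numbers are illustrative):

```python
# Stall (insert one bubble) if the instruction in EX is a load whose destination register
# matches a source register of the instruction currently being decoded.
def must_stall(id_ex_memread, id_ex_rt, if_id_rs, if_id_rt):
    return id_ex_memread and id_ex_rt in (if_id_rs, if_id_rt)

# lw $s0, 8($t1) followed by sub $t2, $s0, $t3: $s0 (register 16) is a source of the sub.
print(must_stall(id_ex_memread=True, id_ex_rt=16, if_id_rs=16, if_id_rt=11))   # True
```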

Code Scheduling to Avoid Stalls
Reorder code to avoid using a load result in the immediately following instruction. C code: A = B + E; C = B + F; (B is loaded into $t1, E into $t2, F into $t4).

Original order (13 cycles):
  lw  $t1, 0($t0)
  lw  $t2, 4($t0)
  add $t3, $t1, $t2   # stall: $t2 is loaded by the previous instruction
  sw  $t3, 12($t0)
  lw  $t4, 8($t0)
  add $t5, $t1, $t4   # stall: $t4 is loaded by the previous instruction
  sw  $t5, 16($t0)

Reordered (11 cycles):
  lw  $t1, 0($t0)
  lw  $t2, 4($t0)
  lw  $t4, 8($t0)
  add $t3, $t1, $t2
  sw  $t3, 12($t0)
  add $t5, $t1, $t4
  sw  $t5, 16($t0)
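
A sketch reproducing the 13-cycle versus 11-cycle counts, assuming a 5-stage pipeline with forwarding, where only a load followed immediately by a user of its result costs one bubble:

```python
# Count cycles for a 5-stage pipeline: instructions + 4 fill cycles + load-use stalls.
def cycles(program):                     # program entries: (opcode, destination, source registers)
    stalls = sum(1 for prev, curr in zip(program, program[1:])
                 if prev[0] == "lw" and prev[1] in curr[2])
    return len(program) + 4 + stalls

original = [("lw", "$t1", ()), ("lw", "$t2", ()), ("add", "$t3", ("$t1", "$t2")),
            ("sw", None, ("$t3",)), ("lw", "$t4", ()), ("add", "$t5", ("$t1", "$t4")),
            ("sw", None, ("$t5",))]
reordered = [original[0], original[1], original[4],   # hoist lw $t4 above the first add
             original[2], original[3], original[5], original[6]]

print(cycles(original), cycles(reordered))   # 13 11
```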

Control Hazard
A branch determines the flow of instructions, so fetching the next instruction depends on the branch outcome. The pipeline cannot always fetch the correct instruction, because the branch is still in the ID stage when the next instruction is fetched. [Figure: beq $1,$2,L1 is not resolved, and its taken target address is not known, until after the following instructions add $1,$2,$3 and sw $1, 4($2) have been fetched; once the comparison result is known, the target L1: sub $1,$2,$3 is fetched, and the wrongly fetched instructions become bubbles.]

Reducing the Control Hazard
To reduce the two bubbles to one, add hardware in the ID stage to compare the registers and generate the branch condition. But this requires additional forwarding and hazard detection logic. Why? [Figure: beq $1,$2,L1 is now resolved in ID, where the taken target address is known; only the already-fetched add $1,$2,$3 becomes a bubble before L1: sub $1,$2,$3 is fetched based on the comparison result.]
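
A sketch of why the extra ID-stage hardware is worth it, using made-up branch statistics (the 20% branch frequency and 60% taken rate are assumptions, not numbers from the lecture):

```python
# Average CPI = 1 (ideal pipeline) + bubbles paid per taken branch.
def average_cpi(branch_fraction, taken_fraction, taken_penalty_cycles):
    return 1 + branch_fraction * taken_fraction * taken_penalty_cycles

print(average_cpi(0.20, 0.60, 2))   # branch resolved in EX: two bubbles  -> CPI 1.24
print(average_cpi(0.20, 0.60, 1))   # comparison moved into ID: one bubble -> CPI 1.12
```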

Delayed Branch
Many CPUs adopt a technique called the delayed branch to further reduce the stall. A delayed branch always executes the next sequential instruction; the branch takes effect after that instruction has executed. The delay slot is the slot right after the delayed branch instruction. [Figure: add $1,$2,$3 sits in the delay slot right after beq $1,$2,L1 and always executes; once the branch is resolved in ID and the taken target address is known, L1: sub $1,$2,$3 is fetched based on the comparison result.]

Delay Slot (Cont.)
The compiler needs to schedule a useful instruction in the delay slot, or fill it with a nop (no operation).

C code ($s1 = a, $s2 = b, $s3 = c, $t0 = d, $t1 = f):
  a = b + c;
  if (d == 0) { f = f + 1; }
  f = f + 2;

With a nop in the delay slot:
  add  $s1, $s2, $s3
  bne  $t0, $zero, L1
  nop                    # delay slot
  addi $t1, $t1, 1
L1: addi $t1, $t1, 2

Can we do better? Fill the delay slot with a useful and valid instruction:
  bne  $t0, $zero, L1
  add  $s1, $s2, $s3     # delay slot
  addi $t1, $t1, 1
L1: addi $t1, $t1, 2

Branch Prediction
Longer pipelines (for example, the Core 2 Duo has 14 stages) cannot readily determine the branch outcome early, and the stall penalty becomes unacceptable because branch instructions occur so frequently in programs. Solution: branch prediction. Predict the branch outcome in hardware, and flush the instructions that should not have been executed from the pipeline if the prediction turns out to be wrong. Modern processors use sophisticated branch predictors. Our MIPS implementation behaves like a predict-not-taken predictor (with no delayed branch): fetch the next sequential instruction after the branch, and if the prediction turns out to be wrong, flush the fetched instruction.

MIPS with Predict-Not-Taken
[Figure: two pipeline diagrams, one where the prediction is correct and the fetched instruction proceeds, and one where it is incorrect and the instruction that should not be executed is flushed.]
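
A sketch of the predict-not-taken policy described above: the fall-through instruction is always fetched, and it is flushed (costing one bubble here) whenever the branch is actually taken. The outcome list and the one-cycle penalty are illustrative:

```python
# Count flush bubbles under predict-not-taken.
def flush_bubbles(branch_outcomes, penalty_per_mispredict=1):
    return sum(penalty_per_mispredict for taken in branch_outcomes if taken)

outcomes = [True, False, True, True]          # True = branch actually taken (misprediction)
print(flush_bubbles(outcomes))                # 3 bubbles for these four branches
```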

Pipeline Summary
Pipelining improves performance by increasing instruction throughput: multiple instructions execute in parallel. Pipelining is subject to hazards: structural hazards, data hazards, and control hazards. The ISA affects the complexity of the pipeline implementation.

Backup Slides

Past, Present, Future (Intel)
[Figure] Source: