CMPT 334 Computer Organization

CMPT 334 Computer Organization, Chapter 4: The Processor (Pipelining) [Adapted from Computer Organization and Design, 5th Edition, Patterson & Hennessy, © 2014, MK]

Improving Performance The ultimate goal is to improve system performance; one idea is to pipeline the CPU. Pipelining is a technique in which multiple instructions are overlapped in execution. It relies on the fact that the various parts of the CPU aren't all used at the same time. Let's look at an analogy.

Sequential Laundry Four roommates need to do laundry. How long does it take to do the laundry sequentially? The washer, dryer, "folder", and "storer" each take 30 minutes, so the total time is 8 hours for four loads.

Pipelined Laundry How long does it take if we can overlap the tasks? Only 3.5 hours!
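
As a sanity check on these numbers, here is a small Python sketch (an illustration, not from the slides) that computes both totals, assuming four 30-minute steps per load and four loads:

    # Laundry analogy: 4 loads, each needing wash, dry, fold, store (30 minutes per step).
    STAGE_MIN = 30
    NUM_STAGES = 4
    NUM_LOADS = 4

    sequential_min = NUM_LOADS * NUM_STAGES * STAGE_MIN        # 480 minutes
    pipelined_min = (NUM_STAGES + NUM_LOADS - 1) * STAGE_MIN   # 210 minutes

    print(sequential_min / 60, "hours sequential")   # 8.0 hours
    print(pipelined_min / 60, "hours pipelined")     # 3.5 hours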

Pipelining Notes Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload, i.e., how many instructions we can execute per second. Potential speedup = number of stages.

MIPS Pipeline Five stages, one step per stage IF: Instruction fetch from memory ID: Instruction decode & register read EX: Execute operation or calculate address MEM: Access memory operand WB: Write result back to register

Stages of the Datapath Stage 1: Instruction Fetch No matter what the instruction, the 32-bit instruction word must first be fetched from memory Every time we fetch an instruction, we also increment the PC to prepare it for the next instruction fetch PC = PC + 4, to point to the next instruction

Stages of the Datapath Stage 2: Instruction Decode First, read the opcode to determine instruction type and field lengths Second, read in data from all necessary registers For add, read two registers For addi, read one register For jal, no register read necessary

Stages of the Datapath Stage 3: Execution Uses the ALU The real work of most instructions is done here: arithmetic, logic, etc. What about loads and stores – e.g., lw $t0, 40($t1) Address we are accessing in memory is 40 + contents of $t1 We can use the ALU to do this addition in this stage

Stages of the Datapath Stage 4: Memory Access Only the load and store instructions do anything during this stage; the others remain idle. Stage 5: Register Write Most instructions write the result of some computation into a register. Examples: arithmetic, logical, shifts, loads, slt. What about stores, branches, and jumps? They don't write anything into a register at the end, so they remain idle during this fifth stage.

Datapath Walkthrough: LW, SW lw $s3, 17($s1) Stage 1: fetch this instruction, increment PC Stage 2: decode to find it's a lw, then read register $s1 Stage 3: add 17 to the value in register $s1 (retrieved in Stage 2) Stage 4: read the value from the memory address computed in Stage 3 Stage 5: write the value read in Stage 4 into register $s3 sw $s3, 17($s1) Stage 2: decode to find it's a sw, then read registers $s1 and $s3 Stage 3: add 17 to the value in register $s1 (retrieved in Stage 2) Stage 4: write the value in register $s3 (retrieved in Stage 2) into the memory address computed in Stage 3 Stage 5: go idle (nothing to write into a register)
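
For concreteness, the following Python sketch (illustrative only; the regs and mem dictionaries and their contents are made-up) mimics what the five stages do for lw $s3, 17($s1):

    # Hypothetical register file and data memory, for illustration only.
    regs = {"$s1": 1000, "$s3": 0}
    mem = {1017: 42}                 # a word stored at address 1000 + 17

    # Stage 1 (IF): fetch the instruction and increment the PC (not modeled here).
    # Stage 2 (ID): decode as lw and read the base register.
    base = regs["$s1"]
    # Stage 3 (EX): the ALU computes the effective address.
    addr = base + 17
    # Stage 4 (MEM): read the word from data memory.
    value = mem[addr]
    # Stage 5 (WB): write the loaded value into the destination register.
    regs["$s3"] = value

    print(regs["$s3"])               # 42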

Datapath Walkthrough: SLTI, ADD slti $s3,$s1,17 Stage 1: fetch this instruction, increment PC Stage 2: decode to find it's an slti, then read register $s1 Stage 3: compare the value retrieved in Stage 2 with the integer 17 Stage 4: go idle Stage 5: write the result of Stage 3 into register $s3 add $s3,$s1,$s2 Stage 2: decode to find it's an add, then read registers $s1 and $s2 Stage 3: add the two values retrieved in Stage 2 Stage 4: go idle (nothing to write to memory) Stage 5: write the result of Stage 3 into register $s3

Pipeline Performance Assume the time for each stage is 100 ps for a register read or write and 200 ps for the other stages. Compare the pipelined datapath with the single-cycle datapath:

Instr      Instr fetch   Register read   ALU op   Memory access   Register write   Total time
lw         200 ps        100 ps          200 ps   200 ps          100 ps           800 ps
sw         200 ps        100 ps          200 ps   200 ps                           700 ps
R-format   200 ps        100 ps          200 ps                   100 ps           600 ps
beq        200 ps        100 ps          200 ps                                    500 ps
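
The per-instruction totals in the table follow directly from the stage times; here is a small Python sketch (the stage names are assumed labels, not from the book) that reproduces them:

    # Stage times from the slide: 100 ps for register read/write, 200 ps for the other stages.
    stage_time = {"IF": 200, "REG_READ": 100, "ALU": 200, "MEM": 200, "REG_WRITE": 100}

    stages_used = {
        "lw":       ["IF", "REG_READ", "ALU", "MEM", "REG_WRITE"],   # 800 ps
        "sw":       ["IF", "REG_READ", "ALU", "MEM"],                # 700 ps
        "R-format": ["IF", "REG_READ", "ALU", "REG_WRITE"],          # 600 ps
        "beq":      ["IF", "REG_READ", "ALU"],                       # 500 ps
    }

    for instr, stages in stages_used.items():
        print(instr, sum(stage_time[s] for s in stages), "ps")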

Pipeline Performance [Figure: instruction timing diagrams comparing the single-cycle datapath (Tc = 800 ps) with the pipelined datapath (Tc = 200 ps)]

Pipeline Speedup If all stages are balanced, i.e., all take the same time:

Time between instructions (pipelined) = Time between instructions (nonpipelined) / Number of stages

If the stages are not balanced, the speedup is less.
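
Applying the formula to the datapath above (a sketch under the slide's 800 ps / 5-stage assumptions):

    # Ideal vs. actual time between pipelined instructions for the 800 ps datapath.
    nonpipelined_ps = 800            # single-cycle clock period
    num_stages = 5

    ideal_pipelined_ps = nonpipelined_ps / num_stages   # 160 ps if all stages were balanced
    actual_pipelined_ps = 200                           # set by the slowest 200 ps stage

    print(ideal_pipelined_ps, actual_pipelined_ps)      # 160.0 200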

Limits to Pipelining: Hazards Hazards are situations that prevent starting the next instruction in the next cycle. Structural hazards: a required resource is busy. Data hazards: need to wait for a previous instruction to complete its data read/write. Control hazards: deciding on a control action depends on a previous instruction.

Data Hazards An instruction depends on completion of a data access by a previous instruction: add $s0, $t0, $t1 followed by sub $t2, $s0, $t3. The sub needs $s0 before the add has written it back, so we must stall the pipeline.
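
A minimal Python sketch (illustrative only; the (destination, sources) tuples are a made-up encoding) that flags this kind of read-after-write dependence between two adjacent instructions:

    # (destination register, source registers) for the two instructions from the slide.
    add_instr = ("$s0", ["$t0", "$t1"])    # add $s0, $t0, $t1
    sub_instr = ("$t2", ["$s0", "$t3"])    # sub $t2, $s0, $t3

    def raw_hazard(older, newer):
        """True if the newer instruction reads a register the older one writes."""
        dest, _ = older
        _, sources = newer
        return dest in sources

    print(raw_hazard(add_instr, sub_instr))    # True: sub reads $s0 before add writes it back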

Exercise 4.8 Stage latencies: IF = 250 ps, ID = 350 ps, EX = 150 ps, MEM = 300 ps, WB = 200 ps. Instruction mix: R-type 45%, beq 20%, lw 20%, sw 15%. What is the clock cycle time in a pipelined and a non-pipelined (single-cycle) processor? Pipelined: 350 ps. Single-cycle: 1250 ps.
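
The two answers are the slowest stage versus the sum of all stages; a quick Python check (sketch):

    stage_ps = {"IF": 250, "ID": 350, "EX": 150, "MEM": 300, "WB": 200}

    pipelined_cycle = max(stage_ps.values())    # 350 ps, limited by the slowest stage (ID)
    single_cycle = sum(stage_ps.values())       # 1250 ps, all stages completed in one cycle

    print(pipelined_cycle, single_cycle)        # 350 1250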

Exercise 4.8 Stage latencies: IF = 250 ps, ID = 350 ps, EX = 150 ps, MEM = 300 ps, WB = 200 ps. Instruction mix: R-type 45%, beq 20%, lw 20%, sw 15%. What is the total latency of an lw instruction in a pipelined and a non-pipelined (single-cycle) processor? Pipelined: 1750 ps. Single-cycle: 1250 ps.
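
In the pipelined design every stage occupies a full 350 ps clock cycle, while the single-cycle design simply sums the stage delays; a quick Python check (sketch):

    stage_ps = {"IF": 250, "ID": 350, "EX": 150, "MEM": 300, "WB": 200}

    pipelined_lw_latency = len(stage_ps) * max(stage_ps.values())   # 5 * 350 = 1750 ps
    single_cycle_lw_latency = sum(stage_ps.values())                # 1250 ps

    print(pipelined_lw_latency, single_cycle_lw_latency)            # 1750 1250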

Exercise 4.8 What is the utilization of the data memory? 35% (only lw and sw access the data memory: 20% + 15% = 35%).

Exercise 4.8 What is the utilization of the write-register port of the "Registers" unit? 65% (only R-type instructions and lw write a register: 45% + 20% = 65%).
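
Both utilization answers fall out of the instruction mix; a quick Python check (sketch, using the percentages as integers):

    # Instruction mix from Exercise 4.8, in percent.
    mix = {"R-type": 45, "beq": 20, "lw": 20, "sw": 15}

    data_mem_util = mix["lw"] + mix["sw"]          # 35%: only loads and stores access data memory
    reg_write_util = mix["R-type"] + mix["lw"]     # 65%: only R-type and lw write a register

    print(data_mem_util, reg_write_util)           # 35 65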