Pipelining

Overview: Pipelining is widely used in modern processors. Pipelining improves system performance in terms of throughput. A pipelined organization requires sophisticated compilation techniques.

Basic Concepts

Making the Execution of Programs Faster: Use faster circuit technology to build the processor and the main memory, or arrange the hardware so that more than one operation can be performed at the same time. With the latter approach, the number of operations performed per second is increased even though the elapsed time needed to perform any one operation is not changed.

Use the Idea of Pipelining in a Computer: Figure 8.1, basic idea of instruction pipelining. (a) Sequential execution of instructions I1, I2, I3, each as a fetch step Fi followed by an execute step Ei. (b) Hardware organization: an instruction fetch unit and an execution unit separated by interstage buffer B1. (c) Pipelined execution: the fetch of each instruction overlaps the execution of the previous one in successive clock cycles.

Use the Idea of Pipelining in a Computer: a four-stage pipeline, Fetch + Decode + Execute + Write (textbook page 457).

Role of Cache Memory: Each pipeline stage is expected to complete in one clock cycle, so the clock period must be long enough for the slowest pipeline stage to complete; faster stages can only wait for the slowest one. Since main memory is very slow compared to execution, the pipeline would be almost useless if each instruction had to be fetched from main memory. Fortunately, cache memory keeps instruction and data access times close to one clock cycle.
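As a small illustration with assumed stage delays (numbers not from the slides): if the four stages take 1 ns, 1 ns, 2 ns and 1 ns, the clock period must be at least 2 ns, so in the steady state one instruction completes every 2 ns even though the total work per instruction is only 5 ns.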

Pipeline Performance: The potential increase in performance resulting from pipelining is proportional to the number of pipeline stages. However, this increase is achieved only if all pipeline stages require the same time to complete and there is no interruption throughout program execution. Unfortunately, neither condition holds in practice.
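As a rough worked example using the standard idealized model (not a figure from these slides): with k pipeline stages and n instructions, and no stalls, execution takes about k + (n - 1) clock cycles instead of the n x k cycles needed without pipelining, so the speedup approaches k for large n. For k = 4 and n = 100 this gives 103 cycles versus 400, a speedup of about 3.9.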

Pipeline Performance

The previous pipeline is said to have been stalled for two clock cycles. Any condition that causes a pipeline to stall is called a hazard. Data hazard – any condition in which either the source or the destination operands of an instruction are not available at the time expected in the pipeline; some operation has to be delayed and the pipeline stalls. Instruction (control) hazard – a delay in the availability of an instruction causes the pipeline to stall. Structural hazard – two instructions require the same hardware resource at the same time.

Pipeline Performance: Idle periods are stalls (bubbles); here they are caused by an instruction hazard.

Pipeline Performance: Load X(R1), R2, an example of a structural hazard.

Pipeline Performance: Again, pipelining does not result in individual instructions being executed faster; rather, it is the throughput that increases. Throughput is measured by the rate at which instruction execution is completed. Pipeline stalls cause degradation in pipeline performance, so we need to identify all hazards that may cause the pipeline to stall and find ways to minimize their impact.
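For instance, with assumed numbers: a 500 MHz clock completing one instruction per cycle in the steady state gives an ideal throughput of 500 million instructions per second; every stall cycle lowers that rate, which is why the hazards below matter.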

Quiz: Four instructions are to be executed, and I2 takes two clock cycles in its execution stage. Draw the diagram for a 4-stage pipeline and work out the total number of clock cycles needed for the four instructions to complete.

Data Hazards

We must ensure that the results obtained when instructions are executed in a pipelined processor are identical to those obtained when the same instructions are executed sequentially. Hazard occurs: A ← 3 + A, then B ← 4 × A. No hazard: A ← 5 × C, then B ← 20 + C. When two operations depend on each other, they must be executed sequentially in the correct order. Another example: Mul R2, R3, R4 followed by Add R5, R4, R6, where the Add reads R4, which is produced by the Mul.
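A minimal sketch (not from the textbook) of how such a dependency can be detected by comparing register names, assuming the destination is the last operand as in the examples above:

def has_raw_hazard(producer_dest, consumer_sources):
    # True if the consumer reads a register the producer has not yet written back.
    return producer_dest in consumer_sources

# Mul R2, R3, R4 writes R4; Add R5, R4, R6 reads R5 and R4.
print(has_raw_hazard("R4", ("R5", "R4")))   # True: the Add must wait for the Mul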

Data Hazards: Figure 8.6, pipeline stalled by data dependency between D2 and W1.

Operand Forwarding: Instead of reading it from the register file, the second instruction can get the data directly from the output of the ALU as soon as the previous instruction has computed it. A special arrangement is needed to "forward" the output of the ALU back to its input.
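A minimal sketch of the forwarding decision, with made-up register names and values: the source operand is taken from the ALU output path when the previous instruction's destination matches, and from the register file otherwise.

def read_operand(src, reg_file, prev_dest, prev_alu_result):
    # Forward the previous ALU result when it is exactly the value we need.
    if src == prev_dest:
        return prev_alu_result
    return reg_file[src]

reg_file = {"R2": 3, "R3": 5, "R4": 0, "R5": 10}
# After Mul R2, R3, R4 the ALU output is 15, but R4 has not been written back yet.
print(read_operand("R4", reg_file, prev_dest="R4", prev_alu_result=15))   # 15, forwarded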

Handling Data Hazards in Software: Let the compiler detect and handle the hazard by inserting a NOP: I1: Mul R2, R3, R4; NOP; I2: Add R5, R4, R6. The compiler can then reorder the instructions to perform some useful work in the NOP slots.
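A minimal sketch of this software approach, with an assumed instruction representation (opcode, destination, sources); it inserts a NOP whenever an instruction reads the register written by the one immediately before it. The number of NOPs needed in practice depends on the pipeline depth, and a real compiler would then try to replace each NOP with an independent instruction.

NOP = ("Nop", None, ())

def insert_nops(program):
    out = []
    for instr in program:
        prev = out[-1] if out else None
        if prev and prev[1] is not None and prev[1] in instr[2]:
            out.append(NOP)              # give the previous result time to be written
        out.append(instr)
    return out

prog = [("Mul", "R4", ("R2", "R3")),     # Mul R2, R3, R4
        ("Add", "R6", ("R5", "R4"))]     # Add R5, R4, R6
print(insert_nops(prog))                 # Mul, Nop, Add, matching the sequence above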

Side Effects: The previous example is explicit and easily detected. Sometimes an instruction changes the contents of a register other than the one named as the destination. When a location other than one explicitly named in an instruction as a destination operand is affected, the instruction is said to have a side effect. Example (condition code flags): Add R1, R3 followed by AddWithCarry R2, R4, where the second instruction implicitly depends on the carry flag set by the first. Instructions designed for execution on pipelined hardware should have few side effects.

Instruction Hazards

The time lost as a result of a branch instruction is referred to as the branch penalty. Here the branch penalty is one clock cycle; for a longer pipeline, the branch penalty may be higher. Reducing the branch penalty requires the branch target address to be computed earlier in the pipeline.

The instruction fetch unit should have dedicated hardware to identify branch instructions and compute the branch target address as quickly as possible after an instruction is fetched, so this task is performed in the decode stage. (Explanation of Figures 8.8, 8.9(a) and 8.9(b).)

Unconditional Branches

Branch Timing: branch penalty; reducing the penalty.

Instruction Queue and Prefetching: Figure, use of an instruction queue in the hardware organization of Figure 8.2(b). The instruction fetch unit (F: fetch instruction) places instructions into an instruction queue, from which the dispatch/decode unit (D) passes them on for execution (E: execute instruction, W: write results).

To reduce the effect of cache misses or other stalls, the processor employs a sophisticated fetch unit. Fetched instructions are placed in an instruction queue before they are needed; the queue can store several instructions. A dispatch unit takes instructions from the front of the queue and sends them to the execution unit. (Explanation: textbook pg ___, 2nd para; exam must.)

A branch instruction can be executed concurrently with other instructions; this technique is called branch folding. Branch folding occurs only if, at the time a branch instruction is encountered, at least one instruction other than the branch is available in the queue.

Conditional Branches: A conditional branch instruction introduces the added hazard caused by the dependency of the branch condition on the result of a preceding instruction. The decision to branch cannot be made until the execution of that instruction has been completed. Branch instructions represent about 20% of the dynamic instruction count of most programs.
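As a rough worked calculation from the figures above: if 20% of instructions are branches and each incurs a one-cycle penalty, the average number of cycles per instruction grows by 0.2 x 1 = 0.2, roughly a 20% slowdown for a pipeline that would otherwise complete one instruction every cycle; this is why the techniques that follow are worthwhile.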

Branch instructions can be handled in several ways to reduce the branch penalty: delayed branch, branch prediction, and dynamic branch prediction.

DELAYED BRANCH: The location following a branch instruction is called the branch delay slot. There may be more than one branch delay slot, depending on the time needed to execute a branch instruction. A technique called delayed branching can minimize the penalty incurred as a result of branch instructions.

The instructions are arranged so that useful instructions placed in the delay slots are executed whether or not the branch is taken. When no useful instruction can be placed in a delay slot, the slot is filled with a NOP instruction. The effectiveness of the delayed branch approach depends on how often it is possible to reorder instructions in this way.

BRANCH PREDICTION: In the simplest form of branch prediction, assume that the branch will not take place and continue to fetch instructions in sequential address order. Speculative execution means that instructions are executed before the processor is certain that they are in the correct execution sequence; hence, care must be taken that no registers or memory locations are updated until it is confirmed that these instructions should indeed be executed.

Branch prediction is classified into two types. 1. Static branch prediction: the prediction is the same every time a given branch instruction is executed. 2. Dynamic branch prediction: the prediction decision may change depending on the execution history.

2-state algorithm

4-state algorithm
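A minimal sketch of the 4-state (2-bit saturating counter) scheme; the initial state and the example outcome sequence are arbitrary choices for illustration, and the 2-state scheme is the same idea with a single bit that simply records the last outcome.

class TwoBitPredictor:
    # States: 0 = strongly not-taken, 1 = weakly not-taken,
    #         2 = weakly taken,       3 = strongly taken.
    def __init__(self):
        self.state = 1                          # start in weakly not-taken

    def predict(self):
        return self.state >= 2                  # True means "predict taken"

    def update(self, taken):
        # Step one state toward the actual outcome, saturating at 0 and 3,
        # so the prediction only flips after two mispredictions in a row.
        self.state = min(self.state + 1, 3) if taken else max(self.state - 1, 0)

p = TwoBitPredictor()
for outcome in (True, True, False, True):       # hypothetical branch outcomes
    print(p.predict(), outcome)
    p.update(outcome)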