Pipelining
Ms. T. Hamsapriya, Assistant Professor, Dept. of CSE
PSG College of Technology, Coimbatore
Introduction: What is Pipelining? The Basic Pipeline
Pipeline Classification, Pipeline Design Issues
Hazards: Structural Hazards, Data Hazards, Control Hazards
Parallel Processing
Spatial Parallelism: Array Processors
Temporal Parallelism: Pipelining
Pipelining
Overlap the execution of multiple instructions. At any time, multiple instructions are being executed, each in a different stage. So much for sharing resources?!
Pipelined Computations
The problem is divided into a series of tasks that must be completed one after the other (the basis of sequential programming). Each task is executed by a separate process or processor.
The Laundry Analogy
Non-pipelined approach:
run one load of clothes through the washer
run the load through the dryer
fold the clothes (optional step for students)
put the clothes away (also optional)
Two loads? Start all over.
What Is Pipelining: Laundry Example
A, B, C, D each have one load of clothes to wash, dry, and fold. The washer takes 30 minutes, the dryer takes 30 minutes, and the "folder" takes 30 minutes.
What Is Pipelining
[Timeline: 6 PM to midnight; tasks A, B, C, D run back to back, each as three 30-minute stages.] Sequential laundry takes 6 hours for 4 loads. If they learned pipelining, how long would laundry take?
What Is Pipelining: Start work ASAP
[Timeline: 6 PM onward; loads A, B, C, D overlap, each starting one 30-minute stage after the previous one.] Pipelined laundry takes 3 hours for 4 loads.
What Is Pipelining: Pipelining Lessons
Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload.
The pipeline rate is limited by the slowest pipeline stage.
Multiple tasks operate simultaneously.
Potential speedup = number of pipe stages.
Unbalanced lengths of pipe stages reduce speedup.
Time to "fill" the pipeline and time to "drain" it reduce speedup.
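These lessons can be checked with a toy timing model: k equal stages of duration t give a sequential time of n·k·t and a pipelined time of (k + n − 1)·t. A minimal sketch (the function names are illustrative, not from the slides):

```python
# Toy timing model for an equal-stage pipeline.
def sequential_time(n_loads, n_stages, stage_min):
    # every load runs all stages before the next load starts
    return n_loads * n_stages * stage_min

def pipelined_time(n_loads, n_stages, stage_min):
    # after the pipeline fills (k stages), one load finishes per stage time
    return (n_stages + n_loads - 1) * stage_min

# 4 loads through wash/dry/fold, 30 minutes per stage:
print(sequential_time(4, 3, 30))  # 360 minutes = 6 hours
print(pipelined_time(4, 3, 30))   # 180 minutes = 3 hours
```

The numbers match the slides: 6 hours sequential versus 3 hours pipelined for 4 loads.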
Execution Time vs. Throughput
It still takes the same amount of time to get your favorite pair of socks clean; pipelining won't help. For n tasks on a k-stage pipeline with stage time t:
Speedup = (k·n·t) / ((k + (n − 1))·t)
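Evaluating the formula confirms both points: a single task gains nothing, and the speedup approaches k as n grows. A quick check (the stage time t cancels):

```python
def speedup(k, n):
    # Speedup = (k*n*t) / ((k + n - 1)*t); t cancels out
    return (k * n) / (k + n - 1)

print(speedup(5, 1))     # 1.0: one task gains nothing
print(speedup(5, 1000))  # 4.98...: approaches k = 5 for large n
```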
Pipeline Classification
Instruction Pipeline / Arithmetic Pipeline / Processor Pipeline
Unifunctional Pipeline / Multifunctional Pipeline
Static Pipeline / Dynamic Pipeline
Scalar / Vector Pipeline
Pipeline Design Issues
Unequal time delays: divide the longest-delay stage, either in series or in parallel.
Linear vs. nonlinear pipelines.
Instruction Pipelining
First break instruction execution into discrete stages:
Instruction Fetch
Instruction Decode / Register Fetch
ALU Operation
Data Memory Access
Write result into register
DLX Without Pipelining
[Datapath: Instruction Fetch, Instr. Decode / Reg. Fetch, Execute / Addr. Calc, Memory Access, Write Back.]
DLX Functions: Instruction Fetch (IF)
Passed to next stage: IR <- Mem[PC]; NPC <- PC + 4
Send out the PC and fetch the instruction from memory into the instruction register (IR); increment the PC by 4 to address the next sequential instruction. IR holds the instruction that will be used in the next stage. NPC holds the value of the next PC.
DLX Functions: Instruction Decode / Register Fetch (ID)
Passed to next stage: A <- Regs[IR6..10]; B <- Regs[IR11..15]; Imm <- (IR16)^16 ## IR16..31
Decode the instruction and access the register file to read the registers. The outputs of the general-purpose registers are read into two temporary registers (A and B) for use in later clock cycles. The sign of the lower 16 bits of the instruction register is extended.
DLX Functions: Execute / Address Calculation (EX)
Passed to next stage: ALUOutput <- A func B; cond <- 0
We perform an operation (for an ALU instruction) or an address calculation (if it is a load or a branch). If an ALU instruction, actually do the operation. If an address calculation, figure out how to obtain the address and stash away the location of that address for the next cycle.
DLX Functions: Memory Access (MEM)
Passed to next stage: LMD <- Mem[ALUOutput] or Mem[ALUOutput] <- B
If this is an ALU instruction, do nothing. If a load or store, then access memory.
DLX Functions: Write Back (WB)
Regs <- ALUOutput or LMD
Update the registers from either the ALU result or from the data loaded.
The Basic Pipeline For DLX
[Figure 3.3: instructions in order, one starting per cycle; each occupies Ifetch, Reg, ALU, DMem, Reg across cycles 1 through 7.]
Pipeline Hazard
Something happens that means the next instruction cannot execute in the following clock cycle. Three kinds of hazards: structural hazards, data hazards, and control hazards.
Structural Hazards
Two stages require the same resource. What if we only had enough electricity to run either the washer or the dryer at any given time? What if the MIPS datapath had only one memory unit instead of separate instruction and data memories?
Structural Hazards
[Pipeline diagram, cycles 1 through 7: Load, Instr 1, Instr 2, Instr 3, Instr 4, each passing through Ifetch, Reg, ALU, DMem, Reg.] MEM uses the same memory port as IF.
Structural Hazards
[Pipeline diagram: Load, Instr 1, Instr 2, a bubble, then Instr 3 delayed by one cycle.] This is another way of looking at the effect of a stall.
Avoiding Structural Hazards
Design the pipeline carefully. You might need to duplicate resources: an adder to update the PC, and an ALU to perform other operations. Detecting structural hazards at execution time (and delaying execution) is not something we want to do; structural hazards are minimized in the design phase.
Data Hazard
One of the values needed by an instruction is not yet available (the instruction that computes it isn't done yet). This is like the CompOrg vs. socks issue. This will cause a data hazard:
add  $t0,$s1,$s2
addi $t0,$t0,17
Data Hazards
A condition in which either the source or the destination operands of an instruction are not available at the expected time.
Data Hazards: Read After Write (RAW)
InstrJ tries to read an operand before InstrI writes it. Caused by a "dependence" (in compiler nomenclature). This hazard results from an actual need for communication. Execution order is InstrI, then InstrJ:
I: add r1,r2,r3
J: sub r4,r1,r3
Data Hazards: Write After Read (WAR)
InstrJ tries to write an operand before InstrI reads it, so InstrI gets the wrong operand. Called an "anti-dependence" by compiler writers; it results from reuse of the name "r1". Can't happen in the DLX 5-stage pipeline because all instructions take 5 stages, reads are always in stage 2, and writes are always in stage 5. Execution order is InstrI, then InstrJ:
I: sub r4,r1,r3
J: add r1,r2,r3
K: mul r6,r1,r7
Data Hazards: Write After Write (WAW)
InstrJ tries to write an operand before InstrI writes it, leaving the wrong result (InstrI's, not InstrJ's). Called an "output dependence" by compiler writers; this also results from reuse of the name "r1". Can't happen in the DLX 5-stage pipeline because all instructions take 5 stages and writes are always in stage 5. We will see WAR and WAW in later, more complicated pipes. Execution order is InstrI, then InstrJ:
I: sub r1,r4,r3
J: add r1,r2,r3
K: mul r6,r1,r7
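The three cases can be told apart mechanically from each instruction's destination and source registers. A small sketch (the helper and its register-set representation are mine, not from the slides):

```python
def classify(i_dest, i_srcs, j_dest, j_srcs):
    """Dependences from earlier instruction I to later instruction J."""
    deps = set()
    if i_dest in j_srcs:
        deps.add("RAW")   # true dependence: J reads what I writes
    if j_dest in i_srcs:
        deps.add("WAR")   # anti-dependence: J writes what I reads
    if i_dest == j_dest:
        deps.add("WAW")   # output dependence: both write the same name
    return deps

# I: add r1,r2,r3   J: sub r4,r1,r3  -> RAW on r1
print(classify("r1", {"r2", "r3"}, "r4", {"r1", "r3"}))
# I: sub r4,r1,r3   J: add r1,r2,r3  -> WAR on r1
print(classify("r4", {"r1", "r3"}, "r1", {"r2", "r3"}))
```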
Data Hazards: Simple Solution to RAW
Hardware detects RAW and stalls. Assumes the register is written and then read in the same cycle. (+) low cost to implement, simple; (-) reduces IPC. Try to minimize stalls.
Minimizing RAW stalls: bypass / forward / short-circuit (we will use the word "forward"). Use data before it is in the register. (+) reduces or avoids stalls; (-) complex. Crucial for common RAW hazards.
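As a sketch of the trade-off, the following counts stall cycles for straight-line code in a DLX-like 5-stage pipeline, assuming (as above) that a register is written and then read in the same cycle. The (op, dest, sources) encoding is mine, not from the slides:

```python
def stalls(instrs, forwarding):
    """Count stall cycles; instrs is a list of (op, dest, srcs) tuples."""
    total = 0
    for j, (_, _, srcs) in enumerate(instrs):
        # find the nearest earlier instruction writing one of our sources
        for back in (1, 2):
            if j - back < 0:
                break
            p_op, p_dest, _ = instrs[j - back]
            if p_dest in srcs:
                if forwarding:
                    if p_op == "lw" and back == 1:
                        total += 1     # load-use: one bubble even with forwarding
                else:
                    total += 3 - back  # wait until the producer's WB
                break
    return total

prog = [("add", "r1", {"r2", "r3"}),
        ("sub", "r4", {"r1", "r3"})]
print(stalls(prog, forwarding=False))  # 2
print(stalls(prog, forwarding=True))   # 0
```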
Handling Data Hazards
We could hope that the compiler can arrange instructions so that data hazards never appear. This doesn't work, as programs generally need to use previously computed values for everything! Some data hazards aren't real: the value needed is available, just not in the right place.
Data Hazards
[Figure 3.9: add r1,r2,r3 followed by sub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11 in the IF / ID-RF / EX / MEM / WB pipeline.] The use of the result of the ADD instruction in the next three instructions causes a hazard, since the register is not written until after those instructions read it.
Forwarding To Avoid Data Hazard
Forwarding is the concept of making data available to the input of the ALU for subsequent instructions, even though the generating instruction hasn't gotten to WB in order to write the memory or registers.
[Pipeline diagram: the ALU output of add r1,r2,r3 is forwarded to the ALU inputs of sub r4,r1,r3; and r6,r1,r7; or r8,r1,r9; xor r10,r1,r11.]
Data Hazards
There are some instances where hazards occur even with forwarding: the data isn't loaded until after the MEM stage.
[Pipeline diagram: lw r1, 0(r2) followed by sub r4,r1,r6; and r6,r1,r7; or r8,r1,r9.]
Data Hazards
The stall is necessary, as shown here.
[Figure 3.13: lw r1, 0(r2); sub r4,r1,r6; and r6,r1,r7; or r8,r1,r9, with a bubble inserted after the load.] There are some instances where hazards occur even with forwarding.
Data Hazards
This is another representation of the stall. Without the stall:
LW R1, 0(R2)    IF ID EX MEM WB
SUB R4, R1, R5     IF ID EX MEM WB
AND R6, R1, R7        IF ID EX MEM WB
OR R8, R1, R9            IF ID EX MEM WB
With the stall:
LW R1, 0(R2)    IF ID EX MEM WB
SUB R4, R1, R5     IF ID stall EX MEM WB
AND R6, R1, R7        IF stall ID EX MEM WB
OR R8, R1, R9            stall IF ID EX MEM WB
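Staggered rows like these can be generated mechanically. A minimal sketch (the printer and its layout are mine, not from the slides): each instruction's stages shift right by its issue slot, plus one extra slot for every instruction behind a stall.

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def timeline(instrs, stall_after=None):
    """One row per instruction; a stall shifts every later row one slot."""
    rows = []
    for i, name in enumerate(instrs):
        bubble = 1 if stall_after is not None and i > stall_after else 0
        slots = ["  "] * (i + bubble) + STAGES
        rows.append(f"{name:15} " + " ".join(slots))
    return "\n".join(rows)

print(timeline(["LW R1, 0(R2)", "SUB R4, R1, R5", "AND R6, R1, R7"],
               stall_after=0))
```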
Forwarding
It's possible to forward the value directly from one resource to another (in time). Hardware needs to detect (and handle) these situations automatically! This is difficult, but necessary.
Picture of Forwarding
[Figure 6.8: the result of an add to $s0 is forwarded from its EX stage directly to the EX stage of a dependent sub.]
Another Example
[Figure 6.9: a load, lw $s0, ...($t1), followed by a dependent sub; even with forwarding, the loaded value is not available until after MEM, so a bubble is needed.]
Pipelining and CPI
If we keep the pipeline full, one instruction completes every cycle. Another way of saying this: the average time per instruction is 1 cycle, even though each instruction actually takes 5 cycles (5-stage pipeline). CPI = 1.
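Concretely, n instructions on an ideal k-stage pipeline need k + (n − 1) cycles, so the average approaches 1 cycle per instruction. A quick check of the claim:

```python
def effective_cpi(k, n):
    # k cycles to fill the pipe, then one completion per cycle
    return (k + n - 1) / n

print(effective_cpi(5, 5))       # 1.8 for a short burst
print(effective_cpi(5, 100000))  # ~1.00004: effectively CPI = 1
```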
Correctness
Pipeline and compiler designers must be careful to ensure that the various schemes to avoid stalling do not change what the program does, only when and how it does it. It's impossible to test all possible combinations of instructions (to make sure the hardware does what is expected). It's impossible to test all combinations even without pipelining!
Control Hazards
A control hazard occurs when we need to find the destination of a branch, and can't fetch any new instructions until we know that destination.
Control Hazards: Control Hazard on Branches, Three-Stage Stall
[Pipeline diagram: 10: beq r1,r3,36 followed by 14: and r2,r3,r5; 18: or r6,r1,r7; 22: add r8,r1,r9; 36: xor r10,r1,r11, with three stall cycles after the branch.]
Control Hazards: Five Branch Hazard Alternatives
#1: Stall until the branch direction is clear.
#2: Predict Branch Not Taken. Execute successor instructions in sequence; "squash" instructions in the pipeline if the branch is actually taken. This takes advantage of late pipeline state update. 47% of DLX branches are not taken on average, and PC+4 is already calculated, so use it to get the next instruction.
#3: Predict Branch Taken. 53% of DLX branches are taken on average. But DLX hasn't calculated the branch target address yet, so DLX still incurs a 1-cycle branch penalty. Other machines: branch target known before the outcome.
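Under predict-not-taken, the cost shows up as extra stall cycles only on taken branches. A back-of-the-envelope sketch using the 53% taken rate above, an assumed 1-cycle penalty, and the 14% branch frequency quoted later in the deck:

```python
def branch_cpi(branch_freq, taken_frac, taken_penalty):
    # predict-not-taken: pay the penalty only when the branch is taken
    return 1 + branch_freq * taken_frac * taken_penalty

print(round(branch_cpi(0.14, 0.53, 1), 4))  # 1.0742
```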
Control Hazards: Five Branch Hazard Alternatives (continued)
#4: Execute Both Paths.
#5: Delayed Branch. Define the branch to take place AFTER a following instruction:
branch instruction
sequential successor 1
sequential successor 2
...
sequential successor n
branch target if taken
A 1-slot delay allows a proper decision and branch target address in the 5-stage pipeline; DLX uses this. A branch delay of length n has n delay slots.
Control Hazards: Delayed Branch
Where to get instructions to fill the branch delay slot?
From before the branch instruction.
From the target address: only valuable when the branch is taken.
From the fall-through: only valuable when the branch is not taken.
Cancelling branches allow more slots to be filled.
Compiler effectiveness for a single branch delay slot: fills about 60% of branch delay slots; about 80% of instructions executed in branch delay slots are useful in computation; so about 50% (60% x 80%) of slots are usefully filled.
Delayed branch downside: 7-8 stage pipelines and multiple instructions issued per clock (superscalar).
Control Hazards
A control hazard arises when one instruction needs to make a decision based on the results of another instruction that has not yet finished. Example: a conditional branch. The instruction that is fed to the pipeline right after a beq depends on whether or not the branch is taken.
beq Control Hazard
a = b+c; if (x!=0) y++; ...
slt  $t0,$s0,$s1
beq  $t0,$zero,skip
addi $s0,$s0,1
skip: lw $s3,0($t3)
The instruction to follow the beq could be either the addi or the lw; it depends on the result of the beq instruction.
One possible solution: stall
We can include in the control unit the ability to stall (to keep new instructions from entering the pipeline until we know which one is next). Unfortunately, conditional branches are very common operations, and this would slow things down considerably.
A Stall
[Pipeline diagram: a beq followed by dependent instructions, with a one-cycle bubble after the branch.] To achieve a 1-cycle stall (as shown above), we need to modify the implementation of the beq instruction so that the decision is made by the end of the second stage.
Another strategy
Predict whether or not the branch will be taken. Go ahead with the predicted instruction (feed it into the pipeline next). If your prediction is right, you don't lose any time. If your prediction is wrong, you need to undo some things and start the correct instruction.
Predicting branch not taken
[Pipeline diagram: after the beq, the fall-through instructions are fetched speculatively; they are squashed if the branch turns out to be taken.]
Dynamic Branch Prediction
The idea is to build hardware that will come up with a prediction based on the past history of the specific branch instruction. Predict the branch will be taken if it has been taken more often than not in the recent past. This works great for loops (90%+ correct).
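The slide leaves the hardware unspecified; a classic concrete scheme, assumed here for illustration, is a 2-bit saturating counter per branch: predict taken when the counter is 2 or 3, and nudge the counter toward each actual outcome.

```python
def predict_run(outcomes, counter=1):
    """Fraction of correct predictions from a 2-bit saturating counter."""
    correct = 0
    for taken in outcomes:
        correct += ((counter >= 2) == taken)  # predict taken when counter >= 2
        # saturate the counter at 0 and 3
        counter = min(3, counter + 1) if taken else max(0, counter - 1)
    return correct / len(outcomes)

# A loop branch taken 99 times, then not taken on exit:
print(predict_run([True] * 99 + [False]))  # 0.98
```

Only the warm-up misprediction and the loop exit miss, which is why this works so well for loops.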
Yet another strategy: delayed branch
The compiler rearranges instructions so that the branch actually occurs delayed by one instruction. This gives the hardware time to compute the address of the next instruction. The new instruction is hopefully useful whether or not the branch is taken (this is tricky; compilers must be careful!).
Delayed Branch
a = b+c; if (x!=0) y++; ...
add  $s2,$s3,$s4
beq  $t0,$zero,skip
addi $s0,$s0,1
skip: lw $s3,0($t3)
Order reversed! The beq and the add swap places, so the add executes in the branch delay slot. The compiler must generate code that differs from what you would expect.
Control Hazards: Evaluating Branch Alternatives
[Table (numbers lost): scheduling scheme (stall pipeline, predict taken, predict not taken, delayed branch) vs. branch penalty, CPI, speedup over unpipelined, speedup over stall.] Conditional and unconditional branches: 14% of instructions change the PC.
Pipelining Introduction Summary
Just overlap tasks; easy if the tasks are independent. Speedup <= Pipeline Depth; if ideal CPI is 1, then:
Speedup = (Pipeline Depth / (1 + Pipeline stall CPI)) x (Clock Cycle Unpipelined / Clock Cycle Pipelined)
Hazards limit performance on computers:
Structural: need more HW resources
Data (RAW, WAR, WAW): need forwarding, compiler scheduling
Control: delayed branch, prediction
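Plugging numbers into the summary formula (clock-cycle ratio assumed to be 1, i.e., equal cycle times):

```python
def pipeline_speedup(depth, stall_cpi, clock_ratio=1.0):
    # Speedup = depth / (1 + stall CPI) * (unpipelined clock / pipelined clock)
    return depth / (1 + stall_cpi) * clock_ratio

print(pipeline_speedup(5, 0.0))   # 5.0: the ideal 5-stage case
print(pipeline_speedup(5, 0.25))  # 4.0: 0.25 stall cycles per instruction
```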
Control Hazards: Compiler "Static" Prediction of Taken/Untaken Branches
Improves the strategy for placing instructions in the delay slot. Two strategies:
Direction-based: predict backward branches taken, forward branches not taken.
Profile-based prediction: record branch behavior in a prior run, then predict each branch based on that run.
Control Hazards: Evaluating Static Branch Prediction Strategies
The misprediction rate ignores the frequency of each branch; "instructions between mispredicted branches" is a better metric.
What Makes Pipelining Hard?
Interrupts cause great havoc! There are 5 instructions executing in a 5-stage pipeline when an interrupt occurs: How do we stop the pipeline? How do we restart it? Who caused the interrupt?
Examples of interrupts: power failing, arithmetic overflow, I/O device request, OS call, page fault.
Interrupts (also known as faults, exceptions, traps) often require: a surprise jump (to a vectored address), linking the return address, saving the PSW (including condition codes), and a state change (e.g., to kernel mode).
What Makes Pipelining Hard?
What happens on an interrupt while in a delay slot? The next instruction is not sequential.
Solution #1: save multiple PCs. Save the current and next PC; this needs a special return sequence and more complex hardware.
Solution #2: a single PC plus a branch-delay bit; the PC points to the branch instruction.
Problems that cause interrupts, by stage:
IF: page fault on instruction fetch; misaligned memory access; memory-protection violation
ID: undefined or illegal opcode
EX: arithmetic interrupt
MEM: page fault on data fetch; misaligned memory access; memory-protection violation
What Makes Pipelining Hard?
Simultaneous exceptions can occur in more than one pipeline stage, e.g., a load with a data page fault in the MEM stage and an add with an instruction page fault in the IF stage; the add's fault will happen BEFORE the load's fault.
Solution #1: an interrupt status vector per instruction; defer the check until the last stage, and kill the state update if there is an exception.
Solution #2: interrupt ASAP and restart everything that is incomplete.
Another advantage for state update late in the pipeline!
What Makes Pipelining Hard?
Interrupts cause great havoc! Here's what happens on a data page fault (F D X M W = fetch, decode, execute, memory, writeback; each row starts one cycle later):
i:            F D X M W
i+1:          F D X M W   <- page fault
i+2:          F D X M W   <- squash
i+3:          F D X M W   <- squash
i+4:          F D X M W   <- squash
trap:         F D X M W
trap handler: F D X M W
What Makes Pipelining Hard? Complex Addressing Modes and Instructions
Address modes: autoincrement causes a register change during instruction execution. Interrupts? Need to restore register state. This adds WAR and WAW hazards, since writes are no longer in the last stage.
Memory-memory move instructions: must be able to handle multiple page faults; long-lived instructions need a partial state save on interrupt.
Condition codes.
Handling Multi-cycle Operations
Multi-cycle instructions also lead to pipeline complexity. A very lengthy instruction causes everything else in the pipeline to wait for it.
Multi-Cycle Operations: Floating Point
Floating point gives long execution times, which stall the pipeline. It's possible to pipeline the FP execution unit so it can initiate new instructions without waiting the full latency. Can also have multiple FP units.
FP Instruction     Latency   Initiation Rate
Add, Subtract         4            3
Multiply              8            4
Divide               36           35
Square root           -            -
Negate                2            1
Absolute value        2            1
FP compare            3            2
Multi-Cycle Operations: Floating Point
Divide and square root take 10X to 30X longer than add. Interrupts? This adds WAR and WAW hazards, since the pipelines are no longer the same length.
Notes: I+2: no WAW, but this complicates an interrupt. I+4: no WB conflict. I+5: stall forced by a structural hazard. I+6: stall forced by in-order issue.
Summary of Pipelining Basics
Hazards limit performance:
Structural: need more HW resources
Data: need forwarding, compiler scheduling
Control: early evaluation and PC update, delayed branch, prediction
Increasing the length of the pipe increases the impact of hazards; pipelining helps instruction bandwidth, not latency.
Interrupts, the instruction set, and FP make pipelining harder.
Compilers reduce the cost of data and control hazards: load delay slots, branch delay slots, branch prediction.
Instruction Set Design
What makes it difficult to "marry" the instruction set to the pipeline implementation?
Instruction Set Design
By definition, a pipeline works best when all the instructions in the pipeline have the same characteristics. Variable instruction lengths and instructions requiring varying times both lead to complications in a pipeline. Overly complicated addressing modes make it difficult to calculate the target address in a timely fashion. Modes that update registers (such as post-autoincrement) make hazard detection difficult. Self-modifying code is difficult to handle in a pipeline (the 80x86 allows this!). Non-standard methods of handling condition codes make it difficult to set and then test those codes; the best case has standard ways of modifying them.