Lecture 06: Basic MIPS Pipelining Review
[Adapted from Computer Organization and Design, Patterson & Hennessy, © 2005, UCB]. Slides by Mary Jane Irwin.
Review: Single Cycle vs. Multiple Cycle Timing
[Timing diagram: single-cycle vs. multicycle execution of lw, sw, and an R-type instruction. Single cycle: each instruction takes one long clock period, with wasted time for shorter instructions. Multicycle: IFetch, Dec, Exec, Mem, WB steps spread over Cycles 1-10.]
The multicycle clock is slower than 1/5th of the single-cycle clock because of stage-register overhead. Here are the timing diagrams showing the differences between the single-cycle and multicycle implementations. In the multicycle implementation, we cannot start executing the store until Cycle 6 because we must wait for the load instruction to complete. Similarly, we cannot start the R-type instruction until the store has completed its execution in Cycle 9. In the single-cycle implementation, the cycle time is set to accommodate the longest instruction, the load. Consequently, the single-cycle clock period can be five times longer than the multicycle clock period.
How Can We Make It Even Faster?
One option is to split the multicycle instruction into smaller and smaller steps, but there is a point of diminishing returns where as much time is spent loading the state registers as doing the work. A better option is to start fetching and executing the next instruction before the current one has completed: pipelining. (All?) modern processors are pipelined for performance. Remember the performance equation: CPU time = CPI * CC * IC. We can also fetch (and execute) more than one instruction at a time: superscalar processing (stay tuned).
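The performance equation above can be checked with a quick sketch; the instruction count, CPI, and clock values below are made-up illustrations, not measurements:

```python
def cpu_time(cpi, clock_cycle_s, instruction_count):
    """CPU time = CPI * clock cycle time * instruction count."""
    return cpi * clock_cycle_s * instruction_count

# Hypothetical program: 1 million instructions, CPI of 1.2, 2 ns clock cycle.
t = cpu_time(cpi=1.2, clock_cycle_s=2e-9, instruction_count=1_000_000)
print(t)  # about 0.0024 s (2.4 ms)
```

Pipelining attacks the CPI term (toward 1 instruction per cycle); superscalar processing pushes CPI below 1.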
Intro to pipelining
Graphically Representing Pipelines
Graphically representing pipelines can help with answering questions like: How many cycles does it take to execute this code? What is the ALU doing during cycle 4? Use this representation to help understand datapaths. Note: the register file is read in the last half of a cycle and written in the first half.
Pipelining: improve performance by increasing instruction throughput
Pipelining improves performance by increasing instruction throughput. The ideal speedup is the number of stages in the pipeline. Do we achieve this? [Diagram: single-cycle vs. pipelined execution of the same instruction sequence.]
Five-stage pipeline should have 5 times improvement in performance.
A five-stage pipeline should give a 5x improvement in performance. However, stages may be imbalanced (some instructions do not use every stage), and pipelining adds overhead between stages. Example: if the single-cycle time is 1.6 ns and pipeline-register overhead adds 20%, each stage takes 1.2 x (1.6 / 5) = 0.384 ns. The three previous instructions then take (5 + 3 - 1) x 0.384 ≈ 2.7 ns pipelined versus 3 x 1.6 = 4.8 ns single-cycle. If these three instructions are repeated 1,000 times (3,000 instructions), the pipelined version takes (5 + 3000 - 1) x 0.384 = 3004 x 0.384 ≈ 1154 ns versus 3000 x 1.6 = 4800 ns, a speedup of 4800 / 1154 ≈ 4.2 times.
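The arithmetic above can be sketched directly; this assumes perfectly balanced stages and no hazards, with the 1.6 ns cycle and 20% overhead figures from the slide:

```python
def pipelined_time(n_instructions, n_stages, stage_time):
    # The first instruction takes n_stages cycles to fill the pipeline;
    # each later instruction then completes one cycle apart.
    return (n_stages + n_instructions - 1) * stage_time

single_cycle = 1.6e-9                       # single-cycle clock from the slide
stages = 5
stage_time = 1.2 * single_cycle / stages    # 20% register overhead -> 0.384 ns

t_pipe = pipelined_time(3000, stages, stage_time)  # ~1154 ns
t_single = 3000 * single_cycle                     # 4800 ns
print(t_single / t_pipe)                           # ~4.2x speedup, not 5x
```

The speedup falls short of 5x because of the per-stage register overhead and the pipeline fill time.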
A Pipelined MIPS Processor
Start the next instruction before the current one has completed. This improves throughput (the total amount of work done in a given time); instruction latency (execution time, delay time, response time: the time from the start of an instruction to its completion) is not reduced.
[Pipeline diagram: lw, sw, and an R-type instruction each passing through IFetch, Dec, Exec, Mem, WB, overlapped across Cycles 1-8.]
Latency = execution time (delay or response time): the total time from start to finish of ONE instruction. For processors, one important measure is THROUGHPUT (execution bandwidth): the total amount of work done in a given amount of time. For memories, one important measure is BANDWIDTH: the amount of information communicated across an interconnect (e.g., a bus) per unit time, i.e., the number of operations performed per second (the WIDTH of the operation and the RATE of the operation). The clock cycle (pipeline stage time) is limited by the slowest stage, and for some instructions some stages are wasted cycles.
Single Cycle, Multiple Cycle, vs. Pipeline
[Timing diagram: single-cycle, multicycle, and pipelined execution of lw, sw, and an R-type instruction.]
Here are the timing diagrams showing the differences between the single-cycle, multicycle, and pipelined implementations. For example, the pipelined implementation finishes executing the load, store, and R-type instruction sequence in seven cycles.
MIPS Pipeline Datapath Modifications
What do we need to add/modify in our MIPS datapath? State registers between each pipeline stage to isolate the stages: IF/ID (IFetch/Dec), ID/EX (Dec/Exec), EX/MEM (Exec/Mem), and MEM/WB, for the stages IF: IFetch, ID: Dec, EX: Execute, MEM: MemAccess, WB: WriteBack.
[Datapath figure: PC, Instruction Memory, Register File (Read Addr 1/2, Write Addr, Read Data 1/2), Sign Extend (16 to 32 bits), Shift left 2, ALU, and Data Memory, with pipeline registers between the stages, all driven by the system clock.]
How many IRs do we need in a 5-stage pipeline? Note two exceptions to the left-to-right flow: (1) WB writes the result back into the register file in the middle of the datapath, and (2) the selection of the next PC value has one input coming from the calculated branch address in the MEM stage. Only later instructions in the pipeline can be influenced by these two REVERSE data movements. The first one (WB to ID) leads to data hazards; the second one (MEM to IF) leads to control hazards. All instructions must update some state in the processor (the register file, the memory, or the PC), so separate pipeline registers for that state are redundant (not needed). The PC can be thought of as a pipeline register: the one that feeds the IF stage. Unlike the other pipeline registers, the PC is part of the visible architectural state; its contents must be saved when an exception occurs (the contents of the other pipeline registers are discarded).
Pipelining the MIPS ISA
What makes the MIPS ISA easy to pipeline: all instructions are the same length (32 bits), so we can fetch in the 1st stage and decode in the 2nd stage; there are few instruction formats (three) with symmetry across formats, so we can begin reading the register file in the 2nd stage; memory operations occur only in loads and stores, so we can use the execute stage to calculate memory addresses; and each MIPS instruction writes at most one result (i.e., changes the machine state) and does so near the end of the pipeline (MEM and WB).
What makes it hard: structural hazards (what if we had only one memory?), control hazards (what about branches?), and data hazards (what if an instruction's input operands depend on the output of a previous instruction?).
Why Pipeline? For Performance!
[Pipeline diagram: five instructions (Inst 0-4), each passing through IM, Reg, ALU, DM, Reg, staggered by one cycle; instruction order runs down the page, time (clock cycles) runs across.]
Once the pipeline is full, one instruction completes every cycle, so CPI = 1. The first few cycles are the time to fill the pipeline.
Can Pipelining Get Us Into Trouble?
Pipeline hazards:
Structural hazards: two different instructions attempt to use the same resource at the same time.
Data hazards: an instruction attempts to use data before it is ready, because its source operand(s) are produced by a prior instruction still in the pipeline. Note that data hazards can come from R-type instructions or lw instructions.
Control hazards: an attempt to make a decision about program control flow (branch instructions) before the condition has been evaluated and the new PC target address has been calculated.
We can always resolve hazards by waiting; the pipeline control must detect the hazard and take action to resolve it.
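Data-hazard detection can be sketched in a few lines. Because the register file writes in the first half of a cycle and reads in the second, only the two instructions immediately following a writer can observe a stale value in the basic five-stage pipeline; the sketch below assumes that window, and the instruction encoding is a simplified (dest, sources) tuple, not real MIPS decoding:

```python
# Each instruction is (dest_register, [source_registers]).
program = [
    ("$1", ["$2", "$3"]),   # add $1,$2,$3
    ("$4", ["$1", "$5"]),   # sub $4,$1,$5  <- reads $1 right after it is written
    ("$6", ["$7", "$8"]),   # and $6,$7,$8  <- no conflict
]

def raw_hazards(prog, window=2):
    """Find read-after-write hazards within `window` following instructions."""
    hazards = []
    for i, (dest, _) in enumerate(prog):
        for j in range(i + 1, min(i + 1 + window, len(prog))):
            if dest in prog[j][1]:
                hazards.append((i, j, dest))
    return hazards

print(raw_hazards(program))  # [(0, 1, '$1')]
```

Real hardware makes the same comparison between register numbers held in the pipeline state registers, every cycle.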
A Single Memory Would Be a Structural Hazard
[Pipeline diagram: lw followed by Inst 1-4, each using Mem, Reg, ALU, Mem, Reg. In one cycle, lw is reading data from memory while a later instruction is reading its instruction from memory: a conflict if there is a single memory.]
Fix with separate instruction and data memories (I$ and D$).
How About Register File Access?
[Pipeline diagram: add $1,… followed by Inst 1, Inst 2, and add $2,$1,…; the first add writes $1 in the same cycle in which the second add reads it.]
Fix the register file access hazard by doing reads in the second half of the cycle and writes in the first half. One clock edge controls the loading of the pipeline state registers; the other controls register writing.
Register Usage Can Cause Data Hazards
Dependencies backward in time cause hazards.
[Pipeline diagram: add $1,… followed by sub $4,$1,$5, and $6,$1,$7, or $8,$1,$9, and xor $4,$1,$5; the instructions that read $1 before the add has written it back exhibit a read-before-write data hazard.]
Loads Can Cause Data Hazards
Dependencies backward in time cause hazards.
[Pipeline diagram: lw $1,4($2) followed by sub $4,$1,$5, and $6,$1,$7, or $8,$1,$9, and xor $4,$1,$5; the instructions that read $1 before the load returns exhibit a load-use data hazard.]
Note that lw is just another example of register usage (beyond ALU ops).
One Way to “Fix” a Data Hazard
We can fix a data hazard by waiting (stalling), but this impacts CPI.
[Pipeline diagram: add $1,… followed by two stall cycles before sub $4,$1,$5 and and $6,$1,$7 can proceed.]
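The CPI cost of stalling is easy to model; the stall frequency below is an invented example, not a measured workload:

```python
def effective_cpi(ideal_cpi, stall_cycles_per_instr):
    """Each stall cycle per instruction adds directly to CPI."""
    return ideal_cpi + stall_cycles_per_instr

# Hypothetical mix: 30% of instructions each incur a 2-cycle stall.
cpi = effective_cpi(1.0, 0.30 * 2)
print(cpi)  # ~1.6, i.e. 1.6x slower than the ideal CPI of 1
```

This is why stalls are a last resort: even modest stall rates erase much of the pipeline's throughput gain.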
Another Way to “Fix” a Data Hazard
Fix data hazards by forwarding results as soon as they are available to where they are needed.
[Pipeline diagram: add $1,… followed by sub $4,$1,$5, and $6,$1,$7, or $8,$1,$9, and xor $4,$1,$5, with forwarding paths from the add's ALU output to the dependent instructions.]
Forwarding paths are valid only if the destination stage is later in time than the source stage. Forwarding is harder if there are multiple results to forward per instruction or if a result must be written early in the pipeline.
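The priority rule the MIPS forwarding unit applies (the most recent producer wins) can be sketched as follows; this is an illustrative software model of the selection logic, not the actual hardware equations, and the register names are examples:

```python
def forward_select(src_reg, ex_mem, mem_wb):
    """
    Decide where an ALU source operand comes from. ex_mem and mem_wb are
    (reg_write, dest_reg) pairs for the instructions currently sitting in
    those pipeline registers. Returns 'EX/MEM', 'MEM/WB', or 'REGFILE'.
    """
    if ex_mem[0] and ex_mem[1] == src_reg and src_reg != "$0":
        return "EX/MEM"   # most recent result takes priority
    if mem_wb[0] and mem_wb[1] == src_reg and src_reg != "$0":
        return "MEM/WB"
    return "REGFILE"      # no hazard: read the register file normally

# sub $4,$1,$5 immediately after add $1,…: $1 is still in EX/MEM.
print(forward_select("$1", ex_mem=(True, "$1"), mem_wb=(False, None)))  # EX/MEM
```

Note the $0 check: register $0 is hardwired to zero in MIPS and must never be forwarded.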
Forwarding with Load-use Data Hazards
[Pipeline diagram: lw $1,4($2) followed by sub $4,$1,$5, and $6,$1,$7, or $8,$1,$9, and xor $4,$1,$5, with forwarding paths from the load to the dependent instructions.]
Note that lw is just another example of register usage (beyond ALU ops). We still need one stall cycle even with forwarding when the data hazard involves a load, because the loaded value is not available until after the MEM stage.
Branch Instructions Cause Control Hazards
Dependencies backward in time cause hazards.
[Pipeline diagram: beq followed by lw, Inst 3, and Inst 4; the instructions after the branch are fetched before the branch outcome is known.]
One Way to “Fix” a Control Hazard
Fix the branch hazard by waiting (stalling), but this affects CPI.
[Pipeline diagram: beq followed by three stall cycles before lw and Inst 3 can proceed.]
Another "solution" is to add enough extra hardware so that we can test registers, calculate the branch address, and update the PC during the second stage of the pipeline; that would reduce the number of stalls to only one. A third approach is to use prediction to handle branches, e.g., always predict that branches will not be taken. When the prediction is right, the pipeline proceeds at full speed; when it is wrong, we have to stall (and make sure nothing completes, and changes machine state, that shouldn't have). We will talk about these options in more detail in a later lecture.
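The payoff of prediction over always stalling can be quantified; the branch frequency, accuracy, and 3-cycle penalty below are illustrative numbers chosen to match the stall-on-branch diagram, not measurements:

```python
def cpi_with_branches(base_cpi, branch_frac, penalty_frac, penalty):
    """Effective CPI when a fraction of branches pays the stall penalty."""
    return base_cpi + branch_frac * penalty_frac * penalty

# Assume 20% of instructions are branches and a 3-cycle branch penalty.
always_stall = cpi_with_branches(1.0, 0.20, 1.00, 3)  # every branch stalls: ~1.6
predicted    = cpi_with_branches(1.0, 0.20, 0.30, 3)  # 70% predicted right: ~1.18
print(always_stall, predicted)
```

Prediction only pays the penalty on mispredictions, which is why even a simple predict-not-taken scheme beats stalling on every branch.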
Corrected Datapath to Save RegWrite Addr
We need to preserve the destination register address in the pipeline state registers. What are these purple lines for?
[Datapath figure: the pipelined datapath from before (PC, Instruction Memory, Register File, Sign Extend, Shift left 2, ALU, Data Memory, with IF/ID, ID/EX, EX/MEM, MEM/WB registers), now carrying the write-register address forward through the pipeline registers back to the register file's Write Addr port.]
Stall on Branch: In Terms of Time
[Timing diagram: stall cycles inserted after a branch until its outcome is known.]
Stall on Branch: With Prediction. Predict not taken: predict that the branch will not be taken and keep fetching sequentially. Predict taken: predict that the branch will be taken (more natural, why?).
Stall on Branch: Branch Delay as a Solution
Insert a useful instruction during the branch stall (the branch stall cycle is called the delay slot).
MIPS Pipeline Control Path Modifications
All control signals can be determined during Decode and held in the state registers between pipeline stages. How many sets of control signals should we have?
[Datapath figure: the pipelined datapath with a Control unit in the ID stage whose outputs are carried forward through the ID/EX, EX/MEM, and MEM/WB pipeline registers.]
Other Pipeline Structures Are Possible
What about the (slow) multiply operation? We could make the clock twice as slow, or let multiply take two cycles (since it doesn't use the DM stage). Note that we don't need the output of MUL until the WB cycle, so we can span two pipeline stages with the MUL hardware (a two-stage pipelined multiplier): IM, Reg, MUL, MUL, Reg.
What if the data memory access is twice as slow as the instruction memory? Make the clock twice as slow, or let data memory access take two cycles (DM1, DM2) and keep the same clock rate: IM, Reg, ALU, DM1, DM2.
Sample Pipeline Alternatives
[Pipeline diagrams for three ARM implementations:
ARM7 (3 stages): IM (PC update, IM access); Reg (decode, register access); EX (ALU op, shift/rotate, DM access, commit result / write back).
StrongARM-1 (5 stages): IM, Reg, ALU, DM, Reg.
XScale (7 stages): IM1, IM2 (PC update, BTB access, start IM access); Reg (decode, register 1 access); SHFT (shift/rotate, register 2 access); ALU (ALU op, start DM access, exception); DM1; DM2/Reg (DM write, register write).]
Summary: all modern-day processors use pipelining.
Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload. Potential speedup: a CPI of 1 and a fast CC. The pipeline rate is limited by the slowest pipeline stage, so unbalanced pipe stages make for inefficiencies. The time to "fill" the pipeline and the time to "drain" it can reduce the speedup for deep pipelines and short code runs. We must detect and resolve hazards; stalling negatively affects CPI (it makes CPI greater than the ideal of 1).
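The fill/drain effect noted above can be quantified: with k balanced stages and n instructions, pipelined execution takes k + n - 1 stage-times versus n x k for the unpipelined machine, so speedup only approaches k for long runs. A sketch, assuming no hazards and perfectly balanced stages:

```python
def pipeline_speedup(n_instructions, n_stages):
    """Ideal speedup over unpipelined execution, counting fill/drain time."""
    return (n_instructions * n_stages) / (n_stages + n_instructions - 1)

print(pipeline_speedup(10, 5))       # ~3.6: short run, fill time dominates
print(pipeline_speedup(100_000, 5))  # ~5.0: long run approaches the stage count
```

Deep pipelines make this worse: the deeper the pipe, the more instructions it takes to amortize the fill time.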
Next Lecture
Overcoming data hazards. Reading assignment – PH, Chapter Readings.
Readings: How many pipeline stages in Intel architectures?
References:
8/how-long-is-a-typical-modern-microprocessor-pipeline
Prescott Pushes Pipelining Limits
The Intel Architecture Processor Pipeline
List of Intel CPU Microarchitectures
The Optimum Pipeline Depth for a Microprocessor
The Intel Core i7 and ARM Cortex-A8
The ARM Cortex-A8 has a 13-stage pipeline and a dynamic branch predictor with a 512-entry two-way set-associative branch target buffer and a 4K-entry global history buffer, which is indexed by the branch history and the current PC. An incorrect prediction results in a 13-cycle penalty as the pipeline is flushed.
The execution pipeline for the A8 processor
Either instruction 1 or instruction 2 can go to the load/store pipeline, and full bypassing is supported among the pipelines. The ARM Cortex-A8 uses a simple two-issue, statically scheduled superscalar pipeline to allow a reasonably high clock rate with lower power. In contrast, the i7 uses a reasonably aggressive, four-issue, dynamically scheduled, speculative pipeline structure.
Performance of the A8 Pipeline
The A8 has an ideal CPI of 0.5 due to its dual-issue structure. Pipeline stalls can arise from three sources:
Structural hazards, which occur when two adjacent instructions selected for simultaneous issue use the same functional pipeline. Since the A8 is statically scheduled, it is the compiler's task to try to avoid such conflicts; when they cannot be avoided, the A8 can issue at most one instruction in that cycle.
Data hazards, which are detected early in the pipeline and may stall either both instructions (if the first cannot issue, the second is always stalled) or just the second of a pair. The compiler is responsible for preventing such stalls when possible.
Control hazards, which arise only when branches are mispredicted.
In addition to pipeline stalls, L1 and L2 misses both cause stalls.
Instruction fetch: the processor uses a multilevel branch target buffer to achieve a balance between speed and prediction accuracy. There is also a return address stack to speed up function returns. Mispredictions cause a penalty of about 15 cycles. Using the predicted address, the instruction fetch unit fetches 16 bytes from the instruction cache.
The 16 bytes are placed in the predecode instruction buffer. In this step, a process called macro-op fusion is performed: macro-op fusion takes instruction combinations, such as a compare followed by a branch, and fuses them into a single operation. The predecode stage also breaks the 16 bytes into individual x86 instructions. This predecode is nontrivial, since the length of an x86 instruction can be from 1 to 17 bytes and the predecoder must look through a number of bytes before it knows the instruction length. Individual x86 instructions (including some fused instructions) are placed into the 18-entry instruction queue.
Micro-op decode: individual x86 instructions are translated into micro-ops, simple MIPS-like instructions that can be executed directly by the pipeline. This approach of translating the x86 instruction set into simple operations that are more easily pipelined was introduced in the Pentium Pro in 1995 and has been used since then. Three of the decoders handle x86 instructions that translate directly into one micro-op. For x86 instructions with more complex semantics, a microcode engine produces the micro-op sequence; it can produce up to four micro-ops every cycle and continues until the necessary sequence has been generated. The micro-ops are placed, in the order of the original x86 instructions, into the 28-entry micro-op buffer.
The micro-op buffer performs loop stream detection and microfusion. If there is a small sequence of instructions (less than 28 instructions or 256 bytes in length) that comprises a loop, the loop stream detector will find the loop and issue the micro-ops directly from the buffer, eliminating the need to activate the instruction fetch and instruction decode stages. Microfusion combines instruction pairs, such as load/ALU operation and ALU operation/store, and issues them to a single reservation station (where they can still issue independently), thus increasing the usage of the buffer.
Intel Core i7 - Nehalem Architecture
Perform the basic instruction issue: looking up the register locations in the register tables, renaming the registers, allocating a reorder buffer entry, and fetching any results from the registers or reorder buffer before sending the micro-ops to the reservation stations. The i7 uses a 36-entry centralized reservation station shared by six functional units; up to six micro-ops may be dispatched to the functional units every clock cycle. Micro-ops are executed by the individual functional units, and the results are sent back to any waiting reservation station as well as to the register retirement unit, where they will update the register state once it is known that the instruction is no longer speculative. The entry corresponding to the instruction in the reorder buffer is marked as complete. When one or more instructions at the head of the reorder buffer have been marked complete, the pending writes in the register retirement unit are executed, and the instructions are removed from the reorder buffer.
Intel Core i7 - Nehalem Architecture
Architecture diagram: tech.net/hardware/cpus/2008/11/03/intel-core-i7-nehalem-architecture-dive/5
Cache Structure
The first-level caches and TLBs remain the same, at 32KB and 128 entries (4-way associative) for the data and instruction caches, but Nehalem also includes a low-latency, unified, 512-entry second-level TLB for small pages (4K), covering data and instructions, up from a 256-entry data-only L2 TLB in Penryn.
Single-thread pipeline performance of i7
Because of the presence of aggressive speculation as well as nonblocking caches, it is difficult to accurately attribute the gap between idealized and actual performance. As we will see, relatively few stalls occur because instructions cannot issue; for example, only about 3% of loads are delayed because no reservation station is available. Most losses come either from branch mispredicts or cache misses. The cost of a branch mispredict is 15 cycles, the cost of an L1 miss is about 10 cycles, L2 misses are slightly more than three times as costly as an L1 miss, and L3 misses cost about 13 times what an L1 miss costs (130–135 cycles). Although the processor will attempt to find alternative instructions to execute for L3 misses and some L2 misses, it is likely that some of the buffers will fill before the miss completes, causing the processor to stop issuing instructions.
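The miss costs quoted above can be combined into a rough additive stall model; the per-instruction miss and mispredict rates below are invented for illustration, and a real i7 overlaps much of this latency with other work:

```python
# Approximate penalties from the text (Nehalem-class i7), in cycles.
L1_PENALTY = 10
L2_PENALTY = 3 * 10     # "slightly more than three times" an L1 miss
L3_PENALTY = 13 * 10    # ~130 cycles
MISPREDICT_PENALTY = 15

def stall_cycles_per_instr(l1_mpi, l2_mpi, l3_mpi, branch_mpi):
    """Stall cycles per instruction from per-instruction event rates.
    A simple additive model; real hardware hides much of this latency."""
    return (l1_mpi * L1_PENALTY + l2_mpi * L2_PENALTY +
            l3_mpi * L3_PENALTY + branch_mpi * MISPREDICT_PENALTY)

# Made-up rates: 3% L1, 1% L2, 0.2% L3 misses, 1% mispredicts per instruction.
print(stall_cycles_per_instr(0.03, 0.01, 0.002, 0.01))  # ≈ 1.01 extra cycles/instr
```

Even these modest rates roughly double the effective CPI, which is why the memory hierarchy and the branch predictor dominate i7 performance analysis.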
Performance of i7
Intel Core i7-4770K (22nm Haswell)
The pipeline isn't a physical structure in the obvious sense of the term, but rather a specialised set of hardware and queues designed to allow the chip to process multiple instructions at once. Conroe (2006) made use of a 14-stage pipeline; Nehalem (2009) saw the pipeline deepened to 16 stages; and in Sandy Bridge (2011) Intel introduced a decoded uop cache to make the current 14-stage pipeline more efficient than ever before.