1
Computer Architecture and Design – ELEN 350 Part 8 [Some slides adapted from M. Irwin, D. Patterson, D. Garcia and others]
2
Review: Single Cycle vs. Multiple Cycle Timing
[Timing diagram: the single-cycle implementation runs lw and sw in one long clock cycle each, with wasted time in the shorter sw cycle; the multiple-cycle implementation spreads lw over IFetch, Dec, Exec, Mem, WB and sw over IFetch, Dec, Exec, Mem across cycles 1-10, with an R-type fetch following.]
- The multicycle clock is slower than 1/5th of the single-cycle clock due to stage register overhead
3
How Can We Make It Even Faster?
- Split the multicycle instruction execution into smaller and smaller steps
  - There is a point of diminishing returns where as much time is spent loading the state registers as doing the work
- Start fetching and executing the next instruction before the current one has completed
  - Pipelining – (all?) modern processors are pipelined for performance
- Fetch (and execute) more than one instruction at a time (out-of-order superscalar and VLIW)
- Fetch (and execute) instructions from more than one instruction stream (multithreading)
4
Pipelining
Assume 30 min. for each task – wash, dry, fold, store – and that separate tasks use separate hardware and so can be overlapped.
[Laundry diagram: the same loads done pipelined vs. not pipelined.]
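To make the payoff concrete, here is the arithmetic behind the analogy as a quick sketch (the four-load count is an assumption for illustration; the 30-minute task time is from the slide):

```python
# Laundry pipelining arithmetic (assumed: 4 loads, each needing 4 tasks of 30 min).
TASK_MIN = 30     # wash, dry, fold, store each take 30 minutes
TASKS = 4
LOADS = 4

sequential = LOADS * TASKS * TASK_MIN                    # finish a load before starting the next
pipelined = TASKS * TASK_MIN + (LOADS - 1) * TASK_MIN    # fill once, then one load per task time

print(f"not pipelined: {sequential} min")   # 480 min = 8 hours
print(f"pipelined:     {pipelined} min")    # 210 min = 3.5 hours
```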
5
A Pipelined MIPS Processor
- Start the next instruction before the current one has completed
  - Improves throughput – the total amount of work done in a given time
  - Instruction latency (execution time, delay time, response time – the time from the start of an instruction to its completion) is not reduced
[Timing diagram: lw, sw, and an R-type instruction flow through IFetch, Dec, Exec, Mem, WB in overlapping cycles 1-8.]
- The clock cycle (pipeline stage time) is limited by the slowest stage
- For some instructions, some stages are wasted cycles
6
Single Cycle, Multiple Cycle, vs. Pipeline
[Timing diagram comparing the three implementations on lw, sw, and an R-type instruction: the single-cycle implementation uses one long clock per instruction (with waste in the shorter ones), the multiple-cycle implementation takes several short cycles per instruction, and the pipelined implementation overlaps IFetch, Dec, Exec, Mem, WB so a new instruction can start every cycle.]
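A rough back-of-the-envelope comparison of the three implementations on the lw / sw / R-type sequence in the diagram, as a sketch – the per-stage latencies are assumed textbook-style values, and pipeline register overhead is ignored:

```python
# Compare total time for a 3-instruction sequence (lw, sw, R-type) under the
# single-cycle, multiple-cycle, and pipelined implementations.
stage_ps = {"IFetch": 200, "Dec": 100, "Exec": 200, "Mem": 200, "WB": 100}  # assumed
cycles_per_instr = {"lw": 5, "sw": 4, "R-type": 4}   # stages each instruction actually uses
program = ["lw", "sw", "R-type"]

single_clk = sum(stage_ps.values())                  # every instruction pays the longest path
single_total = single_clk * len(program)

multi_clk = max(stage_ps.values())                   # one stage per (short) cycle
multi_total = multi_clk * sum(cycles_per_instr[i] for i in program)

pipe_clk = max(stage_ps.values())                    # limited by the slowest stage
pipe_total = pipe_clk * (len(stage_ps) + len(program) - 1)   # fill + one instruction/cycle

print(single_total, multi_total, pipe_total)         # 2400 ps, 2600 ps, 1400 ps
```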
7
MIPS Pipeline Datapath Modifications
- What do we need to add/modify in our MIPS datapath?
  - State registers between each pipeline stage to isolate them
8
MIPS Pipeline Control Path Modifications
- All control signals can be determined during the Instruction Decode step
  - and held in the state registers between pipeline stages
[Datapath diagram: PC, instruction memory, register file, sign extend, ALU, and data memory with the IF/ID, ID/EX, EX/MEM, and MEM/WB pipeline registers and the Control block added.]
9
Pipelining the MIPS ISA
- What makes it easy
  - All instructions have the same length (32 bits) – can fetch in the 1st stage and decode in the 2nd stage
  - Few instruction formats (three) with symmetry across formats – can begin reading the register file in the 2nd stage
  - Memory operations can occur only in loads and stores – can use the execute stage to calculate memory addresses
  - Each MIPS instruction writes at most one result (i.e., changes the machine state) and does so near the end of the pipeline (MEM and WB)
- What makes it hard
  - Structural hazards: what if we had only one memory?
  - Control hazards: what about branches?
  - Data hazards: what if an instruction's input operands depend on the output of a previous instruction?
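The fixed 32-bit length and the shared field positions are what let the ID stage read rs/rt before it even knows the instruction format; a minimal decode sketch (field positions are the standard MIPS encodings, and the example word encodes add $14,$8,$9 from a later slide):

```python
# Extract the fixed-position MIPS fields; rs and rt sit in the same bits for
# R-type and I-type, so the register file can be read speculatively in ID.
def decode(instr: int) -> dict:
    return {
        "opcode": (instr >> 26) & 0x3F,
        "rs":     (instr >> 21) & 0x1F,   # always a source register field
        "rt":     (instr >> 16) & 0x1F,
        "rd":     (instr >> 11) & 0x1F,   # meaningful only for R-type
        "funct":  instr & 0x3F,           # low 6 bits (also the low bits of the immediate)
        "imm":    instr & 0xFFFF,         # I-type immediate, sign-extend as needed
    }

print(decode(0x01097020))   # add $14, $8, $9 -> rs=8, rt=9, rd=14, funct=0x20
```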
10
Pipelined Datapath
[Datapath diagram with the four pipeline registers inserted between the stages: IF/ID (64 bits), ID/EX (128 bits), EX/MEM (97 bits), and MEM/WB (64 bits) – each wide enough to hold the data coming into it.]
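One way to read the widths in the figure is to list the fields each register must carry; a minimal software model (the field names are assumptions, the bit counts in the comments are the ones shown above):

```python
# The four pipeline registers modeled as plain records.
from dataclasses import dataclass

@dataclass
class IF_ID:            # 64 bits: PC+4 and the fetched instruction
    pc_plus4: int = 0
    instr: int = 0

@dataclass
class ID_EX:            # 128 bits: PC+4, two register reads, sign-extended immediate
    pc_plus4: int = 0
    read_data1: int = 0
    read_data2: int = 0
    imm: int = 0

@dataclass
class EX_MEM:           # 97 bits: branch target, Zero flag, ALU result, store data
    branch_target: int = 0
    zero: bool = False
    alu_result: int = 0
    write_data: int = 0

@dataclass
class MEM_WB:           # 64 bits: memory read data and ALU result
    mem_data: int = 0
    alu_result: int = 0

print(EX_MEM())         # an empty EX/MEM register
```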
11
Pipelined Example
Consider the following instruction sequence:
  lw  $t0, 10($t1)
  sw  $t3, 20($t4)
  add $t5, $t6, $t7
  sub $t8, $t9, $t2
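The next eight slides step this sequence through the datapath one clock at a time; the schedule they depict can be printed directly with a small sketch (ideal 5-stage, in-order pipeline; this particular sequence has no dependences, so no hazards arise):

```python
# Print which stage each instruction occupies in each clock cycle.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
program = ["lw", "sw", "add", "sub"]

cycles = len(program) + len(STAGES) - 1              # 8 cycles for 4 instructions
for c in range(1, cycles + 1):
    busy = []
    for i, instr in enumerate(program):
        s = c - 1 - i                                # stage index of instruction i in cycle c
        if 0 <= s < len(STAGES):
            busy.append(f"{instr}:{STAGES[s]}")
    print(f"cycle {c}: " + "  ".join(busy))
```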
12
Single-Clock-Cycle Diagram: Clock Cycle 1 LW
13
Single-Clock-Cycle Diagram: Clock Cycle 2 – LW, SW
14
Single-Clock-Cycle Diagram: Clock Cycle 3 – LW, SW, ADD
15
Single-Clock-Cycle Diagram: Clock Cycle 4 – LW, SW, ADD, SUB
16
Single-Clock-Cycle Diagram: Clock Cycle 5 – LW, SW, ADD, SUB
17
Single-Clock-Cycle Diagram: Clock Cycle 6 – SW, ADD, SUB
18
Single-Clock-Cycle Diagram: Clock Cycle 7 ADD SUB
19
Single-Clock-Cycle Diagram: Clock Cycle 8 SUB
20
Pipelined Datapath with Control I Same control signals as the single-cycle datapath
21
Pipeline Control Implementation
- Pass control signals along just like the data – extend each pipeline register to hold the control bits needed by the succeeding stages
- Note: the 6-bit funct field of the instruction, required in the EX stage to generate ALU control, can be retrieved as the 6 least significant bits of the immediate field, which is sign-extended and passed from the IF/ID register to the ID/EX register
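A sketch of the idea in software – the signal names follow the usual single-cycle control, the EX/MEM/WB grouping is the standard one, and the dict-based modeling (plus the opcode subset) is purely illustrative:

```python
# Control is computed once in ID, then shifted one pipeline register per cycle;
# each stage peels off the bundle it needs and forwards the rest.
EX_SIGNALS = ("RegDst", "ALUOp", "ALUSrc")
MEM_SIGNALS = ("Branch", "MemRead", "MemWrite")
WB_SIGNALS = ("RegWrite", "MemtoReg")

def main_control(opcode: int) -> dict:
    """Single-cycle style control for two opcodes (subset for illustration)."""
    if opcode == 0x23:   # lw
        return dict(RegDst=0, ALUOp=0b00, ALUSrc=1, Branch=0, MemRead=1,
                    MemWrite=0, RegWrite=1, MemtoReg=1)
    if opcode == 0x00:   # R-type
        return dict(RegDst=1, ALUOp=0b10, ALUSrc=0, Branch=0, MemRead=0,
                    MemWrite=0, RegWrite=1, MemtoReg=0)
    raise NotImplementedError("only lw and R-type modeled here")

def id_stage(opcode: int) -> dict:
    ctrl = main_control(opcode)
    return {"EX":  {k: ctrl[k] for k in EX_SIGNALS},    # ID/EX carries all three bundles
            "MEM": {k: ctrl[k] for k in MEM_SIGNALS},
            "WB":  {k: ctrl[k] for k in WB_SIGNALS}}

id_ex = id_stage(0x23)                          # lw's control, ready for EX
ex_mem = {k: id_ex[k] for k in ("MEM", "WB")}   # EX consumed its bundle
mem_wb = {"WB": ex_mem["WB"]}                   # MEM consumed its bundle
print(mem_wb)                                   # only RegWrite/MemtoReg reach WB
```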
22
Pipelined Datapath with Control II Control signals emanate from the control portions of the pipeline registers
23
Pipelined Execution and Control
Instruction sequence (in the diagrams, the label "before" with index i means the i-th instruction before lw):
  lw  $10, 20($1)
  sub $11, $2, $3
  and $12, $4, $7
  or  $13, $6, $7
  add $14, $8, $9
[Datapath diagram: clock cycle 1]
24
Clock cycle 2 – [datapath diagram, same instruction sequence one cycle later]
25
Clock cycle 3 – [datapath diagram, same instruction sequence]
26
Clock cycle 4 – [datapath diagram, same instruction sequence]
27
Clock cycle 5 – [datapath diagram, same instruction sequence]
28
Clock cycle 6 – [datapath diagram, same instruction sequence]
29
Clock cycle 7 – [datapath diagram, same instruction sequence]
30
Clock cycle 8 – [datapath diagram, same instruction sequence]
31
Clock cycle 9 – [datapath diagram, same instruction sequence]
32
Can Pipelining Get Us Into Trouble?
- Yes: pipeline hazards
  - Structural hazards: attempt to use the same resource by two different instructions at the same time
  - Data hazards: attempt to use data before it is ready – an instruction's source operand(s) are produced by a prior instruction still in the pipeline
  - Control hazards: attempt to make a decision about program control flow before the condition has been evaluated and the new PC target address calculated (branch and jump instructions, exceptions)
- Can always resolve hazards by waiting
  - Pipeline control must detect the hazard
  - and take action to resolve it
33
Graphically Representing the MIPS Pipeline
[Stage symbols: IM, Reg, ALU, DM, Reg]
Can help with answering questions like:
- How many cycles does it take to execute this code?
- What is the ALU doing during cycle 4?
- Is there a hazard, why does it occur, and how can it be fixed?
34
Why Pipeline? For Performance!
[Multi-cycle pipeline diagram: five instructions in program order, each occupying IM, Reg, ALU, DM, Reg in successive clock cycles; the first few cycles are the time to fill the pipeline.]
- Once the pipeline is full, one instruction is completed every cycle (CPI = 1)
35
A Single Memory Would Be a Structural Hazard
[Pipeline diagram: lw reads its data from memory in the same cycle that a later instruction reads its instruction from memory.]
- Can fix with separate instruction and data memories
36
How About Register File Access?
[Pipeline diagram: add $1,… writes the register file in WB during the same cycle that add $2,$1,… (three instructions later) reads it in its Reg stage.]
37
How About Register File Access?
[Same diagram, annotated with the clock edge that controls register writing and the clock edge that controls loading of the pipeline state registers.]
- Fix the register file access hazard by doing reads in the second half of the cycle and writes in the first half
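A register-file model that honors this split-cycle convention shows why the three-instruction gap is enough – a sketch, with an invented interface:

```python
# "Write in the first half, read in the second half": a write scheduled for this
# cycle (from the WB stage) is visible to reads issued by ID in the same cycle.
class RegFile:
    def __init__(self):
        self.regs = [0] * 32

    def cycle(self, write=None, reads=()):
        if write is not None:          # first half of the cycle: WB writes
            rd, value = write
            if rd != 0:                # $0 stays hardwired to zero
                self.regs[rd] = value
        return [self.regs[r] for r in reads]   # second half: ID reads

rf = RegFile()
# add $1,... is in WB while add $2,$1,... is in ID: the read sees the new value.
print(rf.cycle(write=(1, 42), reads=(1,)))     # [42]
```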
38
Register Usage Can Cause Data Hazards
  add $1,$2,$5
  sub $4,$1,$5
  and $6,$1,$7
  xor $4,$1,$5
  or  $8,$1,$9
[Pipeline diagram: the later instructions read $1 in their Reg stages while add is still in the pipeline.]
- Dependencies backward in time cause hazards
- Read-before-write data hazard
39
Register Usage Can Cause Data Hazards
[Same diagram and code sequence, with the problem reads highlighted: sub and and read $1 before add's write-back, while xor and or read it in or after add's WB cycle.]
- Dependencies backward in time cause hazards
- Read-before-write data hazard
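Given the split-cycle register file from the previous slide, only readers one or two instructions behind the producer are exposed; a small check over this sequence (the window-of-two rule is the assumption that falls out of the 5-stage timing):

```python
# Flag reads of $1 that happen before add's write-back can supply it.
producer_dst = 1                       # add $1,$2,$5 writes $1
readers = [("sub", (1, 5)),
           ("and", (1, 7)),
           ("xor", (1, 5)),
           ("or",  (1, 9))]

for distance, (op, srcs) in enumerate(readers, start=1):
    hazard = producer_dst in srcs and distance <= 2   # distance >= 3 reads after WB
    print(f"{op}: {'read-before-write hazard' if hazard else 'ok'}")
# sub: hazard, and: hazard, xor: ok, or: ok
```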
40
Loads Can Cause Data Hazards
  lw  $1,4($2)
  sub $4,$1,$5
  and $6,$1,$7
  xor $4,$1,$5
  or  $8,$1,$9
[Pipeline diagram: the instructions that read $1 do so before the load has returned its data from memory.]
- Dependencies backward in time cause hazards
41
Loads Can Cause Data Hazards
[Same diagram: the loaded value is available only at the end of lw's MEM stage, so sub's read of $1 is a load-use data hazard.]
- Dependencies backward in time cause hazards
- Load-use data hazard
42
One Way to "Fix" a Data Hazard
[Pipeline diagram: add $1,… followed by sub $4,$1,$5 and and $6,$1,$7, with stall (bubble) cycles inserted so that sub does not read $1 until add has written it back.]
- Can fix a data hazard by waiting – stall
43
Another Way to "Fix" a Data Hazard
[Pipeline diagram: add's result is forwarded from the ALU output directly to the ALU inputs of sub and and; xor and or get $1 from the register file.]
- Fix data hazards by forwarding results as soon as they are available to where they are needed
44
Another Way to "Fix" a Data Hazard
[Same forwarding diagram with all five instructions shown in program order.]
- Fix data hazards by forwarding results as soon as they are available to where they are needed
45
Forwarding Hardware with Control
[Datapath diagram with forwarding hardware and control wires – certain details, e.g., the branching hardware, are omitted to simplify the drawing.]
- Note: so far we have only handled forwarding to R-type instructions!
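The forwarding decision itself is just a pair of comparisons against the destination fields held in EX/MEM and MEM/WB; a sketch of the standard conditions for one ALU operand (field names such as Rd and RegWrite are assumed to be carried along in those pipeline registers):

```python
# Forwarding mux select for the ALU's "A" input (the operand read from rs).
#   0b00 = value from the register file
#   0b10 = ALU result still in EX/MEM
#   0b01 = write-back value in MEM/WB
# Checking the EX/MEM hazard first makes the most recent producer win.
def forward_a(id_ex_rs, ex_mem, mem_wb):
    if ex_mem["RegWrite"] and ex_mem["Rd"] != 0 and ex_mem["Rd"] == id_ex_rs:
        return 0b10
    if mem_wb["RegWrite"] and mem_wb["Rd"] != 0 and mem_wb["Rd"] == id_ex_rs:
        return 0b01
    return 0b00

# sub $4,$1,$5 in EX while add $1,$2,$5 sits in EX/MEM: forward the ALU result.
print(bin(forward_a(1, {"RegWrite": 1, "Rd": 1}, {"RegWrite": 0, "Rd": 0})))  # 0b10
```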
46
Forwarding with Load-use Data Hazards
  lw  $1,4($2)
  sub $4,$1,$5
  and $6,$1,$7
  xor $4,$1,$5
  or  $8,$1,$9
[Pipeline diagram: even with forwarding, lw's data is not available until the end of its MEM stage, after sub's EX stage has already started.]
47
Forwarding with Load-use Data Hazards
[Same diagram with sub and the following instructions delayed one cycle so the loaded value can be forwarded.]
- Will still need one stall cycle even with forwarding
48
Adding the Hazard Hardware
[Pipelined datapath diagram extended with a Forward Unit feeding the ALU input muxes, a Hazard Unit, the ALU control, and the Branch/PCSrc logic.]
49
Adding the Hazard Hardware
[Same diagram, with the Hazard Unit inputs shown: ID/EX.MemRead and ID/EX.RegisterRt, plus a mux that can force the control signals to 0.]
50
Stall Hardware
Along with the Hazard Unit, we have to implement the stall:
- Prevent the instructions in the IF and ID stages from progressing down the pipeline – done by preventing the PC register and the IF/ID pipeline register from changing
  - The Hazard Unit controls the writing of the PC (PC.write) and IF/ID (IF/ID.write) registers
- Insert a "bubble" between the lw instruction (in the EX stage) and the load-use instruction (in the ID stage), i.e., insert a noop in the execution stream
  - Set the control bits in the EX, MEM, and WB control fields of the ID/EX pipeline register to 0 (noop); the Hazard Unit controls the mux that chooses between the real control values and the 0's
- Let the lw instruction and the instructions ahead of it in the pipeline (before it in the code) proceed normally down the pipeline
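The detection condition and the three stall actions fit in a few lines; a sketch, with signal names assumed to mirror the figure:

```python
# Load-use hazard detection, evaluated while the consumer sits in ID.
def load_use_hazard(id_ex, if_id):
    return id_ex["MemRead"] and id_ex["Rt"] in (if_id["Rs"], if_id["Rt"])

def hazard_unit(id_ex, if_id):
    if load_use_hazard(id_ex, if_id):
        return {"PC.write": 0,         # freeze the PC
                "IF/ID.write": 0,      # refetch the same instruction
                "ControlSelect": 0}    # zero the ID/EX control bits (bubble)
    return {"PC.write": 1, "IF/ID.write": 1, "ControlSelect": 1}

# lw $1,4($2) in EX, sub $4,$1,$5 in ID -> stall one cycle.
print(hazard_unit({"MemRead": 1, "Rt": 1}, {"Rs": 1, "Rt": 5}))
```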
51
Many Other Pipeline Structures Are Possible
- What about the (slow) multiply operation?
  - Make the clock twice as slow, or…
  - let it take two cycles (since it doesn't use the DM stage)
- What if the data memory access is twice as slow as the instruction memory?
  - Make the clock twice as slow, or…
  - let data memory access take two cycles (and keep the same clock rate)
[Pipeline diagrams: one using a MUL stage, one with the data memory access split into DM1 and DM2.]
52
Control Hazards
When the flow of instruction addresses is not sequential (i.e., PC = PC + 4):
- Conditional branches (beq, bne)
- Unconditional branches (j, jal, jr)
- Exceptions
53
Branches Cause Control Hazards
[Pipeline diagram: beq followed by lw, Inst 3, and Inst 4 – the instructions after the branch are fetched before the branch outcome is known.]
- Dependencies backward in time cause hazards
54
Control Hazards
Possible "solutions":
- Stall (impacts performance)
- Move the branch decision point as early in the pipeline as possible, thereby reducing the number of stall cycles
- Delay the decision (requires compiler support)
- Predict and hope for the best!
Control hazards occur less frequently than data hazards, but there is nothing as effective against control hazards as forwarding is for data hazards.
55
One Way to "Fix" a Branch Control Hazard
[Pipeline diagram: the instructions fetched after beq are flushed once the branch resolves, and fetching restarts at beq target, followed by Inst 3.]
- Fix the branch hazard by waiting – flush – but it affects CPI
56
Another Way to "Fix" a Branch Control Hazard
[Pipeline diagram: with the branch decision made earlier, only the single instruction fetched after beq is flushed before beq target is fetched.]
- Fix the branch hazard by waiting – flush
- Move the branch decision hardware back to as early in the pipeline as possible – i.e., during the decode cycle
57
Jumps Incur One Stall
[Pipeline diagram: the instruction fetched after j is flushed and fetching resumes at j target.]
- Jumps are not decoded until ID, so one flush is needed
- Fix the jump hazard by waiting – flush
- Fortunately, jumps are very infrequent – only 3% of the SPECint instruction mix
58
Delayed Decision
- If the branch hardware has been moved to the ID stage, then we can eliminate all branch stalls with delayed branches, which are defined as always executing the next sequential instruction after the branch instruction – the branch takes effect after that next instruction
  - The MIPS compiler moves an instruction to immediately after the branch that is not affected by the branch (a safe instruction), thereby hiding the branch delay
- With deeper pipelines, the branch delay grows, requiring more than one delay slot
  - Delayed branches have lost popularity compared to more expensive but more flexible (dynamic) hardware branch prediction
  - Growth in available transistors has made hardware branch prediction relatively cheaper
59
Scheduling Branch Delay Slots
A. From before the branch:
     add $1,$2,$3
     if $2=0 then
       [delay slot]
   becomes:
     if $2=0 then
       add $1,$2,$3
B. From the branch target:
     sub $4,$5,$6
     …
     add $1,$2,$3
     if $1=0 then
       [delay slot]
   becomes:
     add $1,$2,$3
     if $1=0 then
       sub $4,$5,$6
C. From the fall-through:
     add $1,$2,$3
     if $1=0 then
       [delay slot]
     sub $4,$5,$6
   becomes:
     add $1,$2,$3
     if $1=0 then
       sub $4,$5,$6
- A is the best choice: it fills the delay slot and reduces IC
- In B and C, the sub instruction may need to be copied, increasing IC
- In B and C, it must be okay to execute sub when the branch fails
60
Yet Another Way to "Fix" a Control Hazard
[Pipeline diagram for the sequence:
   4  beq $1,$2,2
   8  sub $4,$1,$5
  16  and $6,$1,$7
  20  or  $8,$1,$9
 The sequentially fetched instruction is flushed when the branch turns out to be taken.]
- Predict branches are always not taken – and take corrective action when wrong (i.e., taken)
61
Static Branch Prediction
Resolve branch hazards by assuming a given outcome and proceeding without waiting to see the actual branch outcome.
1. Predict not taken – always predict branches will not be taken, continue to fetch from the sequential instruction stream; only when the branch is taken does the pipeline stall
- If taken, flush the instructions after the branch (earlier in the pipeline)
  - in the IF, ID, and EX stages if the branch logic is in MEM – three stalls
  - in the IF and ID stages if the branch logic is in EX – two stalls
  - in the IF stage if the branch logic is in ID – one stall
- Ensure that those flushed instructions haven't changed the machine state – automatic in the MIPS pipeline since machine-state-changing operations are at the tail end of the pipeline (MemWrite in MEM or RegWrite in WB)
- Restart the pipeline at the branch destination
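The CPI cost of predict-not-taken follows directly from those stall counts; a quick sketch where the branch frequency and taken rate are illustrative assumptions, not measured SPEC numbers:

```python
# CPI under predict-not-taken: every taken branch costs `stalls` flushed cycles.
base_cpi = 1.0
branch_freq = 0.17    # assumed fraction of instructions that are branches
taken_rate = 0.60     # assumed fraction of branches that are taken
stalls_by_resolve_stage = {"MEM": 3, "EX": 2, "ID": 1}

for stage, stalls in stalls_by_resolve_stage.items():
    cpi = base_cpi + branch_freq * taken_rate * stalls
    print(f"branch resolved in {stage}: CPI = {cpi:.3f}")
# Moving the branch decision earlier in the pipeline directly shrinks the penalty.
```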
62
Branching Structures
- Predict not taken works well for "top of the loop" branching structures
    Loop: beq $1,$2,Out
          1st loop instr
          …
          last loop instr
          j Loop
    Out:  fall out instr
  - But such loops have jumps at the bottom of the loop to return to the top of the loop – and incur the jump stall overhead
- Predict not taken doesn't work well for "bottom of the loop" branching structures
    Loop: 1st loop instr
          2nd loop instr
          …
          last loop instr
          bne $1,$2,Loop
          fall out instr
63
Static Branch Prediction, con't
Resolve branch hazards by assuming a given outcome and proceeding.
2. Predict taken – predict branches will always be taken
- Predict taken always incurs one stall cycle (if the branch destination hardware has been moved to the ID stage)
- Is there a way to "cache" the address of the branch target instruction?
As the branch penalty increases (for deeper pipelines), a simple static prediction scheme will hurt performance. With more hardware, it is possible to try to predict branch behavior dynamically during program execution.
3. Dynamic branch prediction – predict branches at run-time using run-time information
64
Dynamic Branch Prediction
- A branch prediction buffer (aka branch history table, BHT) in the IF stage, addressed by the lower bits of the PC, contains a bit (passed to the ID stage through the IF/ID pipeline register) that tells whether the branch was taken the last time it was executed
  - The prediction bit may predict incorrectly (it may be a wrong prediction for this branch this iteration, or it may be from a different branch with the same low-order PC bits), but this doesn't affect correctness, just performance
  - The branch decision occurs in the ID stage, after determining that the fetched instruction is a branch and checking the prediction bit
  - If the prediction is wrong, flush the incorrect instruction(s) in the pipeline, restart the pipeline with the right instruction, and invert the prediction bit
  - A 4096-bit BHT varies from 1% misprediction (nasa7, tomcatv) to 18% (eqntott)
65
Branch Target Buffer
- The BHT predicts when a branch is taken, but does not tell where it's taken to!
  - A branch target buffer (BTB) in the IF stage can cache the branch target address, but we also need to fetch the next sequential instruction; the prediction bit in IF/ID selects which "next" instruction will be loaded into IF/ID at the next clock edge
    - Would need a two-read-port instruction memory
  - Or the BTB can cache the branch-target instruction while the instruction memory is fetching the next sequential instruction
- If the prediction is correct, stalls can be avoided no matter which direction the branch goes
[Diagram: the PC indexes both the instruction memory and the BTB.]
66
1-bit Prediction Accuracy
- A 1-bit predictor will be incorrect twice for a loop branch that is taken nine times and then not taken: once when the stale (not-taken) bit mispredicts the first taken branch, and once on the loop exit
- For 10 times through the loop we have 80% prediction accuracy for a branch that is taken 90% of the time
- Assume predict_bit = 0 to start (indicating branch not taken) and loop control at the bottom of the loop code:
  1. The first time through the loop, the predictor mispredicts the branch since the branch is taken back to the top of the loop; invert the prediction bit (predict_bit = 1)
  2. As long as the branch is taken (looping), the prediction is correct
  3. Exiting the loop, the predictor again mispredicts the branch since this time the branch is not taken, falling out of the loop; invert the prediction bit (predict_bit = 0)
    Loop: 1st loop instr
          2nd loop instr
          …
          last loop instr
          bne $1,$2,Loop
          fall out instr
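The 80% figure can be reproduced with a few lines simulating the 1-bit predictor on this loop branch (taken nine times, then not taken, with the prediction bit starting at not taken as assumed above):

```python
# 1-bit predictor on a loop branch: taken 9 times, then falls out.
outcomes = [True] * 9 + [False]     # True = taken
predict = False                     # predict_bit = 0 (not taken) to start
correct = 0
for taken in outcomes:
    correct += (predict == taken)
    predict = taken                 # 1-bit scheme: remember the last outcome
print(f"accuracy: {correct}/{len(outcomes)}")   # 8/10 = 80%
```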
67
2-bit Predictors
- A 2-bit scheme can give 90% accuracy since a prediction must be wrong twice before the prediction bit is changed
[State diagram: two Predict Taken states and two Predict Not Taken states, with Taken / Not taken transitions between them; the same bottom-of-the-loop code as before.]
68
2-bit Predictors
- A 2-bit scheme can give 90% accuracy since a prediction must be wrong twice before the prediction bit is changed
[Annotated state diagram: right on the 1st iteration, right 9 times while looping, wrong only on the loop fall-out; the BHT also stores the initial FSM state.]
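And the 2-bit version, modeled as a saturating counter, shows the 90% figure on the same pattern (the 0-3 state encoding and the strongly-taken starting state are assumptions consistent with the annotated diagram):

```python
# 2-bit saturating counter on the same branch pattern: once warmed up, only the
# single loop fall-out per 10 iterations is mispredicted.
outcomes = [True] * 9 + [False]
state = 3                                      # 3 = strongly taken
correct = 0
for taken in outcomes:
    correct += ((state >= 2) == taken)         # states 2 and 3 predict taken
    state = min(state + 1, 3) if taken else max(state - 1, 0)
print(f"accuracy: {correct}/{len(outcomes)}")  # 9/10 = 90%
```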
69
Pipelining Summary
- All modern-day processors use pipelining
- Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload
- Potential speedup: a really fast clock cycle and the ability to complete one instruction every clock cycle (CPI = 1)
- Pipeline rate is limited by the slowest pipeline stage
  - Unbalanced pipe stages make for inefficiencies
  - The time to "fill" the pipeline and the time to "drain" it can limit speedup for deep pipelines and short code runs
- Must detect and resolve hazards
  - Stalling negatively affects CPI (makes CPI greater than the ideal of 1)