



Presentation on theme: "1 © 2004 Morgan Kaufmann Publishers Chapter Six. 2 © 2004 Morgan Kaufmann Publishers Pipelining: The laundry analogy." — Presentation transcript:

1 © 2004 Morgan Kaufmann Publishers. Chapter Six

2 Pipelining: The laundry analogy

3 Pipelining
Improve performance by increasing instruction throughput.
Ideal speedup is the number of stages in the pipeline. Do we achieve this?
Note: timing assumptions changed for this example.
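The speedup claim can be checked with a little arithmetic: without pipelining, each instruction occupies the whole datapath, so n instructions take n × stages stage-times; with pipelining, the first instruction fills the pipeline and then one completes per cycle, giving roughly stages + n − 1. A minimal sketch in Python (the instruction count and stage count are illustrative assumptions, not from the slides):

```python
# Hedged sketch: ideal vs. achieved speedup for a 5-stage pipeline,
# ignoring hazards and assuming equal-length stages.
def execution_cycles(num_instructions, num_stages, pipelined):
    if pipelined:
        # First instruction fills the pipeline, then one completes per cycle.
        return num_stages + (num_instructions - 1)
    return num_instructions * num_stages

n, stages = 1_000_000, 5
speedup = execution_cycles(n, stages, False) / execution_cycles(n, stages, True)
print(round(speedup, 3))  # approaches the stage count (5) for large n
```

The fill cycles are why the ideal speedup is only approached, never reached, even before hazards are considered.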

4 Pipelining
What makes it easy?
– all instructions are the same length
– just a few instruction formats
– memory operands appear only in loads and stores
What makes it hard?
– structural hazards: suppose we had only one memory
– control hazards: need to worry about branch instructions
– data hazards: an instruction depends on a previous instruction
We'll build a simple pipeline and look at these issues.
We'll talk about modern processors and what really makes it hard:
– exception handling
– trying to improve performance with out-of-order execution, etc.

5 Basic Idea
What do we need to add to actually split the datapath into stages?

6 Pipelined Datapath
Can you find a problem even if there are no dependencies?
What instructions can we execute to manifest the problem?

7 Corrected Datapath

8 Instruction Flow in a Pipelined Datapath: IF & ID Stages

9 Instruction Flow in a Pipelined Datapath: EX Stage

10 Instruction Flow in a Pipelined Datapath: MEM & WB Stages

11 Graphically Representing Pipelines
Can help with answering questions like:
– how many cycles does it take to execute this code?
– what is the ALU doing during cycle 4?
– use this representation to help understand datapaths

12 Pipeline Control

13 Pipeline Control
We have 5 stages. What needs to be controlled in each stage?
– Instruction Fetch and PC Increment
– Instruction Decode / Register Fetch
– Execution
– Memory Stage
– Write Back
How would control be handled in an automobile plant?
– a fancy control center telling everyone what to do?
– should we use a finite state machine?

14 Pipeline Control
Pass control signals along just like the data.
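The idea of carrying control along with data can be sketched as follows: all control signals are generated once in the ID stage, then each later stage peels off the signals it needs and passes the rest down the pipeline registers. This is a hedged Python sketch; the signal names follow common MIPS textbook conventions, and the exact per-stage grouping is an assumption, not taken from the slide's figure:

```python
# Hedged sketch: control signals ride the pipeline registers with the data.
from collections import namedtuple

Control = namedtuple("Control", ["RegDst", "ALUOp", "ALUSrc",      # used in EX
                                 "MemRead", "MemWrite", "Branch",  # used in MEM
                                 "RegWrite", "MemtoReg"])          # used in WB

def decode(opcode):
    # Generate all control signals once, in the ID stage.
    if opcode == "lw":
        return Control(0, "add", 1, 1, 0, 0, 1, 1)
    if opcode == "sw":
        return Control(0, "add", 1, 0, 1, 0, 0, 0)
    return Control(1, "rtype", 0, 0, 0, 0, 1, 0)   # R-format default

# Each later stage consumes its slice and forwards the remainder,
# just as the real pipeline registers would.
ctrl = decode("lw")
ex_stage  = (ctrl.RegDst, ctrl.ALUOp, ctrl.ALUSrc)
mem_stage = (ctrl.MemRead, ctrl.MemWrite, ctrl.Branch)
wb_stage  = (ctrl.RegWrite, ctrl.MemtoReg)
```

The design choice mirrors the slide: no stage needs its own decoder, because the decision made in ID travels with the instruction.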

15 Datapath with Control

16 Dependencies
Problem with starting the next instruction before the first is finished:
– dependencies that "go backward in time" are data hazards

17 Software Solution
Have the compiler guarantee no hazards by forcing the "consumer" instruction to wait (i.e., inserting "no-ops").
Where do we insert the "no-ops"?
sub $2, $1, $3
and $12, $2, $5
or  $13, $6, $2
add $14, $2, $2
sw  $15, 100($2)
Problem: this really slows us down!
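The placement question above can be answered mechanically: each consumer of $2 must sit at least three instructions behind the sub that produces it, assuming (as in the usual 5-stage datapath) that the register file writes in the first half of a cycle and reads in the second. A hedged Python sketch of that scheduling rule, using the slide's code (the minimum-distance constant is an assumption tied to that datapath):

```python
# Hedged sketch: insert no-ops so every consumer trails its producer by
# at least MIN_DISTANCE instructions.  Instruction = (dest, sources, text).
MIN_DISTANCE = 3  # assumes write-first-half / read-second-half register file

def insert_nops(program):
    out = []
    for dest, sources, text in program:
        # Scan backwards for the nearest producer of one of our sources.
        for i, (d, _, _) in enumerate(reversed(out)):
            if d is not None and d in sources and i + 1 < MIN_DISTANCE:
                out.extend([(None, (), "nop")] * (MIN_DISTANCE - (i + 1)))
                break
        out.append((dest, sources, text))
    return [text for _, _, text in out]

code = [("$2",  ("$1", "$3"),  "sub $2, $1, $3"),
        ("$12", ("$2", "$5"),  "and $12, $2, $5"),
        ("$13", ("$6", "$2"),  "or  $13, $6, $2"),
        ("$14", ("$2", "$2"),  "add $14, $2, $2"),
        (None,  ("$15", "$2"), "sw  $15, 100($2)")]

scheduled = insert_nops(code)
print("\n".join(scheduled))  # two nops land between the sub and the and
```

Only two no-ops are needed, right after the sub; once the and is far enough away, the or, add, and sw are automatically safe as well.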

18 Forwarding
Use temporary results, don't wait for them to be written:
– register file forwarding to handle a read and write to the same register
– ALU forwarding
What if this $2 were $13?

19 Forwarding
The main idea:
1. Detect the conditions for a dependency (e.g., does Rd of an earlier instruction equal Rs or Rt of the current instruction?)
2. If the condition is true, select the forwarding input of the ALU mux; otherwise select the normal mux input
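The detection step can be sketched concretely: compare the destination register held in the EX/MEM and MEM/WB pipeline registers against the source register in ID/EX, preferring the newer result when both match. A hedged Python sketch of the ALU's first-operand mux select (the dictionary field names are assumptions for illustration):

```python
# Hedged sketch of the forwarding conditions for the ALU's upper input.
def forward_a(ex_mem, mem_wb, id_ex_rs):
    # EX hazard: the instruction immediately ahead writes our source.
    if ex_mem["RegWrite"] and ex_mem["Rd"] != 0 and ex_mem["Rd"] == id_ex_rs:
        return "EX/MEM"   # take the ALU result from one instruction back
    # MEM hazard: the instruction two ahead writes our source.
    if mem_wb["RegWrite"] and mem_wb["Rd"] != 0 and mem_wb["Rd"] == id_ex_rs:
        return "MEM/WB"
    return "ID/EX"        # no hazard: use the register-file value

# Example: "sub $2, $1, $3" just ahead of "and $12, $2, $5"
sub_result = {"RegWrite": 1, "Rd": 2}
no_write   = {"RegWrite": 0, "Rd": 0}
print(forward_a(sub_result, no_write, id_ex_rs=2))  # → EX/MEM
```

Checking Rd != 0 matters because $0 is hardwired to zero and must never be forwarded; checking the EX/MEM condition first ensures the most recent value wins when both pipeline registers hold the same destination.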

20 Can't always forward
Load word can still cause a hazard:
– an instruction tries to read a register immediately after a load instruction that writes to the same register
Thus, we need a hazard detection unit to "stall" the instruction that uses the load's result.

21 Stalling
We can stall the pipeline by keeping an instruction in the same stage.

22 Hazard Detection Unit
Stall by letting an instruction that won't write anything (i.e., a "nop") go forward.
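The test the hazard detection unit applies is short: stall when the instruction in EX is a load (its MemRead control is set) and its target register matches either source of the instruction decoding in ID. A hedged Python sketch of that condition (argument names echo the pipeline-register fields, as an assumption):

```python
# Hedged sketch of the load-use hazard test performed in the ID stage.
def must_stall(id_ex_mem_read, id_ex_rt, if_id_rs, if_id_rt):
    # Stall iff a load in EX is about to produce a register
    # that the instruction in ID wants to read.
    return bool(id_ex_mem_read and id_ex_rt in (if_id_rs, if_id_rt))

# "lw $2, 20($1)" in EX while "and $4, $2, $5" sits in ID → one-cycle stall
print(must_stall(id_ex_mem_read=1, id_ex_rt=2, if_id_rs=2, if_id_rt=5))
```

When the condition fires, the PC and IF/ID register are frozen and the control signals in ID/EX are zeroed, which is exactly the "nop going forward" the slide describes.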

23 Branch Hazards
When we decide to branch, other instructions are in the pipeline!
We are predicting "branch not taken":
– need to add hardware for flushing the three instructions already in the pipeline if we are wrong
– mis-prediction penalty = 3 cycles

24 Flushing Instructions
Note: we've also moved the branch decision to the ID stage to reduce the branch penalty (from 3 cycles to 1).

25 Branches
If the branch is taken, we have a penalty of one cycle.
For our simple design, this is reasonable.
With deeper pipelines, the penalty increases, and static branch prediction drastically hurts performance.
Solution: dynamic branch prediction.
A 2-bit prediction scheme

26 Branch Prediction
Sophisticated techniques:
– A "branch target buffer" to help us look up the destination
– Correlating predictors that base prediction on global behavior and recently executed branches (e.g., prediction for a specific branch instruction based on what happened in previous branches)
– Tournament predictors that use different types of prediction strategies and keep track of which one is performing best
– A "branch delay slot" which the compiler tries to fill with a useful instruction (make the one-cycle delay part of the ISA)
Branch prediction is especially important because it enables other, more advanced pipelining techniques to be effective!
Modern processors predict correctly 95% of the time!

27 Improving Performance
Try to avoid stalls! E.g., reorder these instructions:
lw  $t0, 0($t1)
lw  $t2, 4($t1)
sw  $t2, 0($t1)
sw  $t0, 4($t1)
Dynamic pipeline scheduling:
– Hardware chooses which instructions to execute next
– Will execute instructions out of order (e.g., doesn't wait for a dependency to be resolved, but rather keeps going!)
– Speculates on branches and keeps the pipeline full (may need to roll back if a prediction is incorrect)
Trying to exploit instruction-level parallelism
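The reordering payoff can be made concrete by counting load-use stalls: the original sequence places sw $t2 directly after the lw that produces $t2, costing one stall cycle, while swapping the two stores removes it. A hedged Python sketch, assuming (as in the slides) a one-cycle stall whenever the instruction right after a lw reads the loaded register:

```python
# Hedged sketch: count one-cycle load-use stalls in a straight-line
# sequence.  Instruction = (op, dest_or_None, source_registers).
def count_stalls(program):
    stalls = 0
    for prev, curr in zip(program, program[1:]):
        op, dest, _ = prev
        if op == "lw" and dest in curr[2]:
            stalls += 1
    return stalls

original = [("lw", "$t0", ("$t1",)),
            ("lw", "$t2", ("$t1",)),
            ("sw", None,  ("$t2", "$t1")),
            ("sw", None,  ("$t0", "$t1"))]

# Swap the two stores: the stalling lw/sw pair is broken up.
reordered = [original[0], original[1], original[3], original[2]]
print(count_stalls(original), count_stalls(reordered))  # → 1 0
```

This is exactly what a compiler's static scheduler does; dynamic pipeline scheduling achieves the same effect in hardware, at run time.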

28 Advanced Pipelining
Increase the depth of the pipeline
Start more than one instruction each cycle (multiple issue)
Loop unrolling to expose more ILP (better scheduling)
"Superscalar" processors
– DEC Alpha 21264: 9-stage pipeline, 6-instruction issue
All modern processors are superscalar and issue multiple instructions, usually with some limitations (e.g., different "pipes")
VLIW: very long instruction word, static multiple issue (relies more on compiler technology)
This class has given you the background you need to learn more!

29 Chapter 6 Summary
Pipelining does not improve latency, but it does improve throughput.




