Pipelined Implementation


Pipelined Implementation

Outline: General Principles of Pipelining; PIPE Implementation. Suggested Reading: 4.4, 4.5

Problem of SEQ: Too slow. Too many tasks need to finish in one clock cycle; signals need a long time to propagate through all of the stages, so the clock must run slowly enough. It also does not make good use of hardware units: every unit is active for only part of the total clock cycle.

Pipeline Principles

Real-World Pipelines: Car Washes (sequential, parallel, pipelined). Idea: divide the process into independent stages; move objects through the stages in sequence; at any given time, multiple objects are being processed.

Computational Example: a system consisting of 300 ps of combinational logic followed by a 20 ps register. Delay = 320 ps, Throughput = 3.12 GOPS. The computation requires a total of 300 picoseconds, and an additional 20 picoseconds are needed to save the result in the register, so the clock cycle must be at least 320 ps. (Speaker note: the clock cycle is 320 ps.)
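To make the slide's numbers concrete, here is the arithmetic behind them:
\[
\text{Delay} = 300\,\mathrm{ps} + 20\,\mathrm{ps} = 320\,\mathrm{ps}, \qquad
\text{Throughput} = \frac{1\ \text{op}}{320\times 10^{-12}\ \mathrm{s}} \approx 3.12\times 10^{9}\ \text{ops/s} = 3.12\ \text{GOPS}
\]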

3-Way Pipelined Version: the combinational logic is divided into three blocks (A, B, C) of 100 ps each, separated by 20 ps registers. Delay = 360 ps, Throughput = 8.33 GOPS. A new operation can begin as soon as the previous one passes through stage A, i.e., every 120 ps; the overall latency increases to 360 ps from start to finish. (Speaker note: the clock cycle is 120 ps. Stage registers are needed because B uses A's result. Previously A and B executed the same instruction, so their contents belonged to one instruction; now, within one clock cycle, A and B work on two different instructions, and the value B needs is the one A produced in the previous cycle, so it must be kept in an inter-stage register as an intermediate result.)
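The same arithmetic for the pipelined version:
\[
T_{\text{cycle}} = 100\,\mathrm{ps} + 20\,\mathrm{ps} = 120\,\mathrm{ps}, \qquad
\text{Throughput} = \frac{1}{120\times 10^{-12}\ \mathrm{s}} \approx 8.33\ \text{GOPS}, \qquad
\text{Latency} = 3 \times 120\,\mathrm{ps} = 360\,\mathrm{ps}
\]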

Pipeline Diagrams. Unpipelined: cannot start a new operation until the previous one completes. 3-Way pipelined: up to 3 operations in process simultaneously. (Speaker note: this is a time-space diagram; each OP represents one instruction. In the unpipelined case one instruction must finish before the next can start; with pipelining, several instructions overlap.)

Operating a Pipeline: snapshots of the three-stage pipeline (stages A, B, C, each 100 ps of logic plus a 20 ps register) at different points in time show successive operations occupying successive stages, with a new operation entering stage A every 120 ps. (Speaker note: in the diagram, brown denotes the first instruction OP1, blue the second instruction OP2, and gray the third instruction OP3.)

Limitations: Nonuniform Delays. With stages of 50 ps, 150 ps, and 100 ps (each followed by a 20 ps register): Delay = 510 ps, Throughput = 5.88 GOPS. Throughput is limited by the slowest stage; the other stages sit idle for much of the time, and it is challenging to partition a system into balanced stages.
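The arithmetic behind these figures: the clock must accommodate the slowest stage, so
\[
T_{\text{cycle}} = 150\,\mathrm{ps} + 20\,\mathrm{ps} = 170\,\mathrm{ps}, \qquad
\text{Throughput} = \frac{1}{170\times 10^{-12}\ \mathrm{s}} \approx 5.88\ \text{GOPS}, \qquad
\text{Latency} = 3 \times 170\,\mathrm{ps} = 510\,\mathrm{ps}
\]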

Limitations: Register Overhead. Splitting the logic into six 50 ps stages with 20 ps registers gives Delay = 420 ps, Throughput = 14.29 GOPS. As we try to deepen the pipeline, the overhead of loading registers becomes more significant. Percentage of the clock cycle spent loading the register: 1-stage pipeline: 6.25%; 3-stage pipeline: 16.67%; 6-stage pipeline: 28.57%. The high speeds of modern processor designs are obtained through very deep pipelining.
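The percentages follow directly from the fixed 20 ps register delay; for the 6-stage case the 300 ps of logic is split into 50 ps blocks:
\[
\frac{20}{300+20} = 6.25\%, \qquad
\frac{20}{100+20} \approx 16.67\%, \qquad
\frac{20}{50+20} \approx 28.57\%
\]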

Data Dependencies. In this system, each operation depends on the result from the preceding one.

Data Dependencies in Processors
1 irmovq $50, %rax
2 addq %rax, %rbx
3 mrmovq 100(%rbx), %rdx
The result from one instruction is used as an operand of another: a read-after-write (RAW) dependency. This is very common in actual programs, so we must make sure our pipeline handles these properly: get correct results and minimize the performance impact.

Data Hazards. In the pipelined system, the result does not feed back around in time for the next operation; pipelining has changed the behavior of the system.

Naïve PIPE Implementation

SEQ Hardware Stages occur in sequence One operation in process at a time

SEQ+ Hardware: still a sequential implementation, but the PC stage is reordered to the beginning. PC stage: its task is to select the PC for the current instruction, based on results computed by the previous instruction. Processor state: the PC is no longer stored in a register, but it can be determined from other stored information.

Adding Pipeline Registers: pipeline registers are inserted between the Fetch, Decode, Execute, Memory, and Write-back stages of the SEQ+ datapath (instruction memory, PC increment, register file, CC, ALU, data memory). The values carried between stages include icode, ifun, rA, rB, valC, valP, srcA, srcB, dstA, dstB, valA, valB, aluA, aluB, Cnd, valE, valM, and newPC.

Pipeline Stages
Fetch: select the current PC, read the instruction, compute the incremented PC
Decode: read program registers
Execute: operate the ALU
Memory: read or write data memory
Write Back: update the register file

PIPE- Hardware: pipeline registers hold intermediate values from instruction execution. Forward (upward) paths: values are passed from one stage to the next and cannot jump past stages; e.g., valC passes through decode.

Feedback Paths. Predicted PC: a guess at the value of the next PC. Branch information: whether the jump was taken or not taken, plus the fall-through or target address. Return point: read from memory. Register updates: fed back to the register file write ports.

Signal Naming Conventions. S_Field: value of Field held in the stage-S pipeline register. s_Field: value of Field computed within stage S.
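As a concrete illustration, both conventions appear side by side in the forwarding logic later in this deck (the two lines below are taken from that logic): M_valE is the valE field held in the memory-stage pipeline register, while m_valM is the value read from data memory during the memory stage itself.

d_srcA == M_dstE : M_valE;   # Field held in pipeline register M
d_srcA == M_dstM : m_valM;   # Value computed within the memory stage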

Pipeline Demonstration (file: demo-basic.ys)
irmovq $1,%rax   # I1
irmovq $2,%rcx   # I2
irmovq $3,%rdx   # I3
irmovq $4,%rbx   # I4
halt             # I5
Each instruction flows through the F, D, E, M, W stages, one stage per cycle, with a new instruction entering every cycle; the five instructions complete over cycles 1 through 9. In cycle 5, I1 is in write-back, I2 in memory, I3 in execute, I4 in decode, and I5 in fetch.
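This matches the general timing of an ideal 5-stage pipeline: n instructions issued one per cycle complete in
\[
n + 5 - 1 = n + 4 \ \text{cycles} \quad (\text{here } 5 + 4 = 9),
\]
and in steady state one instruction finishes per cycle.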

Data Dependencies: No Nop (demo-h0.ys)
0x000: irmovq $10,%rdx
0x00a: irmovq $3,%rax
0x014: addq %rdx,%rax
0x016: halt
In cycle 4 the addq reads its operands in decode, but the register file still holds the old values: the irmovq of %rdx is only in the memory stage (M_valE = 10, M_dstE = %rdx, not yet written back) and the irmovq of %rax is still computing 0 + 3 = 3 in execute. Both valA and valB are stale. Error.

Data Dependencies: 1 Nop (demo-h1.ys)
0x000: irmovq $10,%rdx
0x00a: irmovq $3,%rax
0x014: nop
0x015: addq %rdx,%rax
0x017: halt
In cycle 5 the addq decodes while the irmovq of %rdx is only now in write-back (R[%rdx] gets 10 at the end of the cycle) and the irmovq of %rax is still in the memory stage (M_valE = 3), so the source reads still get stale values. Error.

Data Dependencies: 2 Nop's (demo-h2.ys)
0x000: irmovq $10,%rdx
0x00a: irmovq $3,%rax
0x014: nop
0x015: nop
0x016: addq %rdx,%rax
0x018: halt
In cycle 6 the addq decodes; %rdx is read correctly (10), but the irmovq of %rax is in write-back during the same cycle and R[%rax] is not updated until the end of the cycle, so the decode still reads the old %rax. Error.

Data Dependencies: 3 Nop's (demo-h3.ys)
0x000: irmovq $10,%rdx
0x00a: irmovq $3,%rax
0x014: nop
0x015: nop
0x016: nop
0x017: addq %rdx,%rax
0x019: halt
The write-back of %rax (R[%rax] gets 3) completes in cycle 6, so when the addq decodes in cycle 7 it reads valA = 10 and valB = 3 correctly. No error.

Branch Misprediction Example demo-j.ys 0x000: xorq %rax,%rax 0x002: jne t # Not taken 0x00b: irmovq $1, %rax # Fall through 0x015: nop 0x016: nop 0x017: nop 0x018: halt 0x019: t: irmovq $3, %rdx # Target (Should not execute) 0x023: irmovq $4, %rcx # Should not execute 0x02d: irmovq $5, %rdx # Should not execute Should only execute first 8 instructions

Branch Misprediction Trace (demo-j.ys)
0x000: xorq %rax,%rax
0x002: jne t                 # Not taken
0x019: t: irmovq $3, %rdx    # Target
0x023:    irmovq $4, %rcx    # Target+1
0x00b:    irmovq $1, %rax    # Fall through
In cycle 5 the jne reaches the memory stage and the branch is found to be not taken (M_Cnd = 0); M_valA holds the fall-through address, which is then used to refetch the fall-through instruction. By that point the pipeline has already incorrectly fetched and begun executing the two instructions at the branch target.

Return Example (demo-ret.ys): requires lots of nops to avoid data hazards.
0x000: irmovq Stack,%rsp  # Initialize stack pointer
0x00a: nop                # Avoid hazard on %rsp
0x00b: nop
0x00c: nop
0x00d: call p             # Procedure call
0x016: irmovq $5,%rsi     # Return point
0x020: halt
0x020: .pos 0x20
0x020: p: nop             # procedure
0x021: nop
0x022: nop
0x023: ret
0x024: irmovq $1,%rax     # Should not be executed
0x02e: irmovq $2,%rcx     # Should not be executed
0x038: irmovq $3,%rdx     # Should not be executed
0x042: irmovq $4,%rbx     # Should not be executed
0x100: .pos 0x100
0x100: Stack:             # Initial stack pointer

Incorrect Return Example Incorrectly execute 3 instructions following ret

Pipeline Summary. Concept: break instruction execution into 5 stages and run instructions through in pipelined mode. Limitations: the naïve pipeline can't handle dependencies between instructions when they follow too closely. Data dependencies: one instruction writes a register, a later one reads it. Control dependencies: an instruction sets the PC in a way that the pipeline did not predict correctly (mispredicted branch and return).

PIPE Implementation

Predicting the PC: start fetching a new instruction after the current one has completed its fetch stage. There is not enough time to reliably determine the next instruction, so we guess which instruction will follow and recover if the prediction was incorrect. (Speaker note: we must predict the PC because in some cases we cannot know where the next instruction is as soon as the current one finishes fetching. This happens for conditional jumps and for ret: a conditional jump must finish the execute stage before we know whether it is taken, and ret must finish the memory stage before we know the next instruction's address. In all other cases the next address is known at the end of fetch: for call and jmp it is valC, for other instructions it is valP. Since we want to issue one instruction per cycle, we must fetch before the next address is fully determined, so we guess it. The concrete implementation is discussed in Section 5.12 and is not covered here. Because the predicted PC depends only on the previous instruction, it only needs signals from the fetch stage.)


Our Prediction Strategy. Instructions that don't transfer control: predict the next PC to be valP; always reliable. Call and unconditional jumps: predict the next PC to be valC (the destination); always reliable. Conditional jumps: only correct if the branch is taken, which is typically right about 60% of the time. Return instruction: don't try to predict. (Speaker note: for a conditional jump we always predict that it will be taken. For ret there are too many possible targets, so we don't predict it, although in practice there are strategies that can handle it. When predicting, we must keep the address of the other path, because the prediction may be wrong and we need to recover and resume execution at that other entry point.)

Select PC and Predict PC (fetch-stage hardware): the Select PC block chooses f_PC using M_icode, M_Cnd, M_valA, W_icode, W_valM, and the F pipeline register's predPC; the Predict PC block computes the guess stored as F_predPC:

int F_predPC = [
    f_icode in { IJXX, ICALL } : f_valC;
    1 : f_valP;
];

Recovering from PC Misprediction. Mispredicted jump: we will see the branch flag once the instruction reaches the memory stage, and can get the fall-through PC from valA. Return instruction: we will get the return PC when ret reaches the write-back stage. (Speaker note: prediction involves two steps: predicting the branch, and recovering when the prediction is wrong.)

Select PC

int f_PC = [
    # Mispredicted branch. Fetch at incremented PC
    M_icode == IJXX && !M_Cnd : M_valA;
    # Completion of RET instruction
    W_icode == IRET : W_valM;
    # Default: use predicted value of PC
    1 : F_predPC;
];

(Speaker note: in the mispredicted case, because the default prediction is that the branch is taken, the instruction that should now execute is the one following the branch. That instruction's address was placed in valA and carried along to the memory stage, so it is recovered from M_valA. In practice, how new_F_predPC and f_PC are computed depends on the prediction scheme; the key point is that the prediction must ensure the address of the other exit is properly saved.)

Classes of Data Hazards: hazards can potentially occur when one instruction updates part of the program state that is read by a later instruction. Program state: Program registers: the hazards already identified. Condition codes: both written and read in the execute stage; no hazards can arise. Program counter: conflicts between updating and reading the PC cause control hazards. Memory: both written and read in the memory stage; without self-modifying code, no hazards. (Speaker note: when a data dependency disrupts the pipeline, that dependency is called a data hazard; likewise, control dependencies give rise to control hazards. Memory is accessed as data in the memory stage and as code in the fetch stage; if code can modify itself a hazard is possible, and if that is not allowed, no hazard occurs.)

Data Dependencies: 2 Nop's (demo-h2.ys, revisited)
0x000: irmovq $10,%rdx
0x00a: irmovq $3,%rax
0x014: nop
0x015: nop
0x016: addq %rdx,%rax
0x018: halt
In cycle 6 the addq decodes; %rdx is read correctly (10), but the irmovq of %rax is in write-back during the same cycle, so the decode still reads the old %rax. Error.

Stalling for Data Dependencies (demo-h2.ys, cycles 1-11)
0x000: irmovq $10,%rdx   F D E M W (cycles 1-5)
0x00a: irmovq $3,%rax    F D E M W (cycles 2-6)
0x014: nop               F D E M W (cycles 3-7)
0x015: nop               F D E M W (cycles 4-8)
bubble                       E M W (cycles 7-9)
0x016: addq %rdx,%rax    F D D E M W (fetched cycle 5, held in decode during cycles 6-7)
0x018: halt              F F D E M W (held in fetch during cycles 6-7)
If an instruction follows too closely after one that writes a register, slow it down: hold the instruction in decode and dynamically inject a nop into the execute stage.

Stall Condition. Source registers: srcA and srcB of the current instruction in the decode stage. Destination registers: the dstE and dstM fields of the instructions in the execute, memory, and write-back stages. Special case: don't stall for register ID 15 (0xF), which indicates the absence of a register operand (or a failed conditional move).

Detecting the Stall Condition (demo-h2.ys, same trace as above): in cycle 6, the instruction in the write-back stage has W_dstE = %rax (W_valE = 3), while the addq in decode has srcA = %rdx and srcB = %rax; the match on %rax is what triggers the stall.

Stalling x 3 (demo-h0.ys, cycles 1-11)
0x000: irmovq $10,%rdx   F D E M W (cycles 1-5)
0x00a: irmovq $3,%rax    F D E M W (cycles 2-6)
bubble                       E M W (cycles 5-7)
bubble                       E M W (cycles 6-8)
bubble                       E M W (cycles 7-9)
0x014: addq %rdx,%rax    F D D D D E M W (fetched cycle 3, held in decode during cycles 4-6)
0x016: halt              F F F F D E M W (held in fetch during cycles 4-6)
With no nops, the addq must be held in decode for three extra cycles: its srcA = %rdx, srcB = %rax match e_dstE = %rax in cycle 4, M_dstE = %rax in cycle 5, and W_dstE = %rax in cycle 6, so a bubble is injected into execute in each of those cycles.

What Happens When Stalling? (demo-h0.ys) From cycle 4 onward, the addq is held back in the decode stage while the halt that follows it stays in the fetch stage; in each stalled cycle a bubble is injected into the execute stage. The bubbles act like dynamically generated nops and move through the later stages.

Implementing Stalling: Pipeline Control. Combinational logic detects the stall condition and sets mode signals that determine how the pipeline registers should update. (Speaker note: the dstM and dstE fields of the E, M, and W stages are compared against srcA and srcB in the D stage, so all of these signals must feed into the pipeline control logic. E_bubble, D_stall, and F_stall are outputs of that control logic and govern the bubble and stall behavior.)

Implementing Stalling (hardware view): the pipe control logic takes inputs such as D_icode, E_icode, M_icode, E_dstM, d_srcA, d_srcB, and e_Cnd, and produces the control outputs F_stall, D_stall, D_bubble, E_bubble, M_bubble, W_stall, and set_cc.

Pipeline Register Modes. Normal (stall = 0, bubble = 0): on the rising clock, the register output becomes the new input y. Stall (stall = 1, bubble = 0): on the rising clock, the register keeps its old output x and ignores the input. Bubble (stall = 0, bubble = 1): on the rising clock, the register is loaded with a nop state, discarding the input. (Speaker note: the stall and bubble signals work by acting on this special kind of register; setting stall and bubble to 1 at the same time is not allowed.)

Data Forwarding. Naïve pipeline: a register isn't written until completion of the write-back stage, while source operands are read from the register file in the decode stage, so the value would need to be in the register file at the start of that stage. Observation: the value is already generated in the execute or memory stage. Trick: pass the value directly from the generating instruction to the decode stage; it only needs to be available by the end of decode.

Data Forwarding Example (demo-h2.ys, two nops): in cycle 6 the addq is in decode (srcA = %rdx, srcB = %rax) while the irmovq $3,%rax is in the write-back stage. The destination value sits in the W pipeline register (W_dstE = %rax, W_valE = 3), so it can simply be forwarded as valB for the decode stage; %rdx is read normally from the register file (valA = 10).

Bypass Paths. Decode stage: forwarding logic selects valA and valB, normally from the register file; with forwarding, valA or valB can instead come from a later pipeline stage. Forwarding sources: execute (valE); memory (valE, valM); write-back (valE, valM).

Data Forwarding Example #2 (demo-h0.ys, no nops): in cycle 4 the addq is in decode (srcA = %rdx, srcB = %rax). Register %rdx: its value was generated by the ALU during the previous cycle and now sits in the memory stage (M_dstE = %rdx, M_valE = 10), so it is forwarded from memory as valA. Register %rax: its value is just being generated by the ALU (E_dstE = %rax, e_valE = 0 + 3 = 3), so it is forwarded from execute as valB.

Forwarding Priority (demo-priority.ys)
0x000: irmovq $1, %rax
0x00a: irmovq $2, %rax
0x014: irmovq $3, %rax
0x01e: rrmovq %rax, %rdx
0x020: halt
In cycle 5 the rrmovq is in decode while the three irmovq instructions, all targeting %rax, are in write-back ($1), memory ($2), and execute ($3). With multiple forwarding choices, which one should have priority? To match the serial semantics, use the matching value from the earliest pipeline stage, since it comes from the most recently issued instruction; here that is the execute stage, so valA = 3.

Implementing Forwarding Add additional feedback paths from E, M, and W pipeline registers into decode stage Create logic blocks to select from multiple sources for valA and valB in decode stage

Implementing Forwarding

## What should be the A value?
int d_valA = [
    # Use incremented PC
    D_icode in { ICALL, IJXX } : D_valP;
    # Forward valE from execute
    d_srcA == e_dstE : e_valE;
    # Forward valM from memory
    d_srcA == M_dstM : m_valM;
    # Forward valE from memory
    d_srcA == M_dstE : M_valE;
    # Forward valM from write back
    d_srcA == W_dstM : W_valM;
    # Forward valE from write back
    d_srcA == W_dstE : W_valE;
    # Use value read from register file
    1 : d_rvalA;
];
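The slide shows only the valA selection. The valB selection block follows the same priority order; as a sketch, it mirrors d_valA but is keyed on d_srcB and omits the call/jump special case, since the incremented PC is only ever needed on the A side:

int d_valB = [
    # Forward valE from execute
    d_srcB == e_dstE : e_valE;
    # Forward valM from memory
    d_srcB == M_dstM : m_valM;
    # Forward valE from memory
    d_srcB == M_dstE : M_valE;
    # Forward valM from write back
    d_srcB == W_dstM : W_valM;
    # Forward valE from write back
    d_srcB == W_dstE : W_valE;
    # Use value read from register file
    1 : d_rvalB;
];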

Limitation of Forwarding: the load/use dependency. The value is needed by the end of the decode stage in cycle 7, but it is not read from memory until the memory stage of cycle 8.

Avoiding the Load/Use Hazard: stall the using instruction for one cycle; it can then pick up the loaded value by forwarding from the memory stage.

Detecting the Load/Use Hazard. Condition: Load/Use Hazard. Trigger: E_icode in { IMRMOVQ, IPOPQ } && E_dstM in { d_srcA, d_srcB }

Control for the Load/Use Hazard (demo-luh.ys)
0x000: irmovq $128,%rdx
0x00a: irmovq $3,%rcx
0x014: rmmovq %rcx, 0(%rdx)
0x01e: irmovq $10,%rbx
0x028: mrmovq 0(%rdx), %rax   # Load %rax
0x032: addq %rbx, %rax        # Use %rax
0x034: halt
Stall the instructions in the fetch and decode stages and inject a bubble into the execute stage. Control action for a Load/Use Hazard: F stall, D stall, E bubble, M normal, W normal.

Branch Misprediction Example demo-j.ys 0x000: xorq %rax,%rax 0x002: jne t # Not taken 0x00b: irmovq $1, %rax # Fall through 0x015: nop 0x016: nop 0x017: nop 0x018: halt 0x019: t: irmovq $3, %rdx # Target 0x023: irmovq $4, %rcx # Should not execute 0x02d: irmovq $5, %rdx # Should not execute Should only execute first 8 instructions

Handling Misprediction Predict branch as taken Fetch 2 instructions at target Cancel when mispredicted Detect branch not-taken in execute stage On following cycle, replace instructions in execute and decode by bubbles No side effects have occurred yet

Detecting a Mispredicted Branch. Condition: Mispredicted Branch. Trigger: E_icode == IJXX && !e_Cnd

Control for Misprediction. Mispredicted Branch: F normal, D bubble, E bubble, M normal, W normal.

Return Example (demo-retb.ys): previously, three additional instructions were incorrectly executed after the ret.
0x000: irmovq Stack,%rsp   # Initialize stack pointer
0x00a: call p              # Procedure call
0x013: irmovq $5,%rsi      # Return point
0x01d: halt
0x020: .pos 0x20
0x020: p: irmovq $-1,%rdi  # procedure
0x02a: ret
0x02b: irmovq $1,%rax      # Should not be executed
0x035: irmovq $2,%rcx      # Should not be executed
0x03f: irmovq $3,%rdx      # Should not be executed
0x049: irmovq $4,%rbx      # Should not be executed
0x100: .pos 0x100
0x100: Stack:              # Initial stack pointer

Correct Return Example (demo-retb.ys): as the ret passes through the pipeline, the fetch stage is stalled while ret is in the decode, execute, and memory stages, and a bubble is injected into the decode stage in each of those cycles. The stall is released when ret reaches the write-back stage, where W_valM supplies the return address (0x013); the return-point instruction irmovq $5,%rsi is then fetched, with the three bubbles flowing through the pipeline behind ret.

Detecting Return. Condition: Processing ret. Trigger: IRET in { D_icode, E_icode, M_icode }

Control for Return. Processing ret: F stall, D bubble, E normal, M normal, W normal. In the demo-retb trace, the ret proceeds F D E M W; while it is in decode, execute, and memory, the fetch stage keeps refetching the instruction after ret, but a bubble is injected into decode each cycle so those fetches have no effect; the return-point instruction (irmovq $5,%rsi) is fetched once ret reaches write-back.

Special Control Cases.
Detection:
Processing ret: IRET in { D_icode, E_icode, M_icode }
Load/Use Hazard: E_icode in { IMRMOVQ, IPOPQ } && E_dstM in { d_srcA, d_srcB }
Mispredicted Branch: E_icode == IJXX && !e_Cnd
Action (on next cycle):
Processing ret: F stall, D bubble, E normal, M normal, W normal
Load/Use Hazard: F stall, D stall, E bubble, M normal, W normal
Mispredicted Branch: F normal, D bubble, E bubble, M normal, W normal

Implementing Pipeline Control Combinational logic generates pipeline control signals Action occurs at start of following cycle

Initial Version of Pipeline Control

bool F_stall =
    # Conditions for a load/use hazard
    E_icode in { IMRMOVQ, IPOPQ } && E_dstM in { d_srcA, d_srcB } ||
    # Stalling at fetch while ret passes through pipeline
    IRET in { D_icode, E_icode, M_icode };

bool D_stall =
    # Conditions for a load/use hazard
    E_icode in { IMRMOVQ, IPOPQ } && E_dstM in { d_srcA, d_srcB };

bool D_bubble =
    # Mispredicted branch
    (E_icode == IJXX && !e_Cnd) ||
    # Stalling at fetch while ret passes through pipeline
    IRET in { D_icode, E_icode, M_icode };

bool E_bubble =
    # Mispredicted branch
    (E_icode == IJXX && !e_Cnd) ||
    # Load/use hazard
    E_icode in { IMRMOVQ, IPOPQ } && E_dstM in { d_srcA, d_srcB };

Control Combinations: special cases that can arise on the same clock cycle. Combination A: a not-taken branch with a ret instruction at the branch target. Combination B: an instruction that reads from memory into %rsp, followed by a ret instruction.

Control Combination A: the mispredicted JXX is in execute while the ret fetched from its target is in decode, so the "processing ret" and "mispredicted branch" conditions hold simultaneously. The combined actions are F stall, D bubble, E bubble, M normal, W normal, which handles the case correctly as a mispredicted branch. It does stall the F pipeline register, but that is harmless because the PC selection logic will be using M_valA anyhow.

Control Combination B: the load instruction that targets %rsp is in execute while the ret is in decode, so the "load/use hazard" and "processing ret" conditions hold simultaneously. Naively combining their actions would attempt to both bubble and stall pipeline register D, which the processor signals as a pipeline error.

Handling Control Combination B: the load/use hazard should get priority, giving the actions F stall, D stall, E bubble, M normal, W normal; the ret instruction is simply held in the decode stage for an additional cycle.

Corrected Pipeline Control Logic

bool D_bubble =
    # Mispredicted branch
    (E_icode == IJXX && !e_Cnd) ||
    # Stalling at fetch while ret passes through pipeline
    IRET in { D_icode, E_icode, M_icode }
    # but not condition for a load/use hazard
    && !(E_icode in { IMRMOVQ, IPOPQ } && E_dstM in { d_srcA, d_srcB });

(The load/use hazard gets priority; the ret instruction is held in the decode stage for an additional cycle.)

Pipeline Summary. Data hazards: most are handled by forwarding with no performance penalty; the load/use hazard requires a one-cycle stall. Control hazards: instructions are canceled when a mispredicted branch is detected (two clock cycles wasted); the fetch stage is stalled while ret passes through the pipeline (three clock cycles wasted). Control combinations: must be analyzed carefully; the first version of the control logic had a subtle bug that only arises with an unusual instruction combination.
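To tie the wasted cycles to overall performance, the penalties can be folded into a CPI estimate. As a rough sketch with assumed instruction frequencies (the frequencies below are illustrative assumptions, not figures from this deck): if 25% of instructions are loads and 20% of those trigger a load/use stall (1 bubble), 20% are conditional jumps mispredicted 40% of the time (2 bubbles), and 2% are returns (3 bubbles each), then
\[
\mathrm{CPI} \approx 1 + \underbrace{0.25\cdot 0.20\cdot 1}_{0.05} + \underbrace{0.20\cdot 0.40\cdot 2}_{0.16} + \underbrace{0.02\cdot 1.0\cdot 3}_{0.06} = 1.27,
\]
so under these assumptions the hazards cost roughly 27% over the ideal CPI of 1.0.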