– 1 – Chapter 4 Processor Architecture: Pipelined Implementation
Instructor: Dr. Hyunyoung Lee, Texas A&M University
Based on slides provided by Randal E. Bryant, CMU

– 2 – Overview
General Principles of Pipelining
- Goal: enhancing performance
- Difficulties: complex implementation, non-deterministic behavior, etc.
Creating a Pipelined Y86 Processor
- Rearranging SEQ
- Inserting pipeline registers
- Problems with data and control hazards
Data Hazards
- An instruction having register R as a source follows shortly after an instruction having register R as a destination
- Common condition; we don't want to slow down the pipeline
Control Hazards
- Mispredicted conditional branch: our design predicts all branches as taken, so the naïve pipeline executes two extra instructions
- Getting the return address for the ret instruction: the naïve pipeline executes three extra instructions

– 3 – Real-World Pipelines: Car Washes
Idea
- Divide the process into independent stages
- Move objects through the stages in sequence
- At any given time, multiple objects are being processed
[Diagrams: sequential, parallel, and pipelined car washes]

– 4 – Computational Example
System
- Computation requires a total of 300 picoseconds
- Additional 20 picoseconds to save the result in a register
- Must have a clock cycle of at least 320 ps
- Throughput = 1/(320 ps) ≈ 3.12 × 10^9 instructions per second = 3.12 GIPS
[Diagram: combinational logic (300 ps) followed by a register (20 ps); delay = 320 ps, throughput = 3.12 GIPS]
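A minimal Python sketch (not part of the original slides) that checks this arithmetic; the 300 ps and 20 ps delays are the figures quoted above.

    # Unpipelined system: one block of combinational logic plus one register.
    logic_ps = 300        # combinational logic delay (ps)
    reg_ps = 20           # register load overhead (ps)

    cycle_ps = logic_ps + reg_ps             # minimum clock period: 320 ps
    throughput_gips = 1e3 / cycle_ps         # 10^3 / (cycle in ps) gives GIPS

    print(f"cycle = {cycle_ps} ps, throughput = {throughput_gips:.3f} GIPS")
    # cycle = 320 ps, throughput = 3.125 GIPS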

– 5 – 3-Way Pipelined Version
System
- Divide the combinational logic into 3 blocks of 100 ps each
- Can begin a new operation as soon as the previous one passes through stage A
- Begin a new operation every 120 ps
- Throughput increases by a factor of 8.33/3.12 = 2.67, at the expense of increased latency* (360/320 = 1.125)
[Diagram: three stages, each with 100 ps of combinational logic and a 20 ps register; delay = 360 ps, throughput = 8.33 GIPS]
* Latency: total time required to perform a single instruction from beginning to end
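The speedup and latency figures follow directly from the stage delays. A short Python sketch (an illustration, not from the slides) of the comparison:

    # 3-way pipeline vs. the unpipelined system from the previous slide.
    REG_PS = 20
    unpipelined_cycle = 300 + REG_PS            # 320 ps
    pipelined_cycle = 100 + REG_PS              # 120 ps per stage
    pipelined_latency = 3 * pipelined_cycle     # 360 ps for one instruction

    speedup = unpipelined_cycle / pipelined_cycle          # throughput ratio
    latency_ratio = pipelined_latency / unpipelined_cycle  # latency penalty

    print(f"speedup = {speedup:.2f}x, latency ratio = {latency_ratio:.3f}x")
    # speedup = 2.67x, latency ratio = 1.125x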

– 6 – Pipeline Diagrams
Unpipelined
- Cannot start a new operation until the previous one completes
3-Way Pipelined
- Up to 3 operations in process simultaneously
[Diagrams: timelines for OP1-OP3, unpipelined vs. staggered through stages A, B, C]

– 7 – Operating a Pipeline
[Diagrams: snapshots of the 3-way pipeline at 239 ps, 241 ps, 300 ps, and 359 ps, showing OP1-OP3 advancing through stages A, B, and C (100 ps of logic plus a 20 ps register per stage)]

– 8 – Limitations: Nonuniform Delays
- Throughput limited by the slowest stage
- Other stages sit idle for much of the time
- Challenging to partition the system into balanced stages
[Diagram: stages with 50 ps, 150 ps, and 100 ps of logic, each followed by a 20 ps register; the clock is set by the slowest stage, so delay = 510 ps and throughput = 1/(170 ps) = 5.88 GOPS]
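A small sketch (not from the slides) of how the slowest stage sets the clock period, using the delays shown in the diagram:

    # Nonuniform stage delays: the clock period is the slowest stage plus
    # the register overhead, and every other stage idles for the difference.
    REG_PS = 20
    stage_logic_ps = [50, 150, 100]

    cycle = max(stage_logic_ps) + REG_PS          # 170 ps
    latency = len(stage_logic_ps) * cycle         # 510 ps
    throughput_gops = 1e3 / cycle                 # 5.88 GOPS

    print(cycle, latency, round(throughput_gops, 2))   # 170 510 5.88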

– 9 – Limitations: Register Overhead
- As we try to deepen the pipeline, the overhead of loading registers becomes more significant
- Percentage of clock cycle spent loading the register:
  - 1-stage pipeline: 20/320 = 6.25%
  - 3-stage pipeline: 20/120 = 16.67%
  - 6-stage pipeline: 20/70 = 28.57%
- The high speeds of modern processor designs are obtained through very deep pipelining
[Diagram: six stages, each with 50 ps of logic plus a 20 ps register; delay = 420 ps, throughput = 14.29 GOPS]
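The percentages come from dividing the 20 ps register delay by each clock period. A quick Python check (not from the slides), reusing the 300 ps total logic delay of the running example:

    # Register overhead as a fraction of the clock cycle for deeper pipelines.
    TOTAL_LOGIC_PS = 300
    REG_PS = 20

    for n_stages in (1, 3, 6):
        cycle = TOTAL_LOGIC_PS / n_stages + REG_PS
        print(f"{n_stages}-stage: cycle = {cycle:.0f} ps, "
              f"overhead = {REG_PS / cycle:.2%}")
    # 1-stage: cycle = 320 ps, overhead = 6.25%
    # 3-stage: cycle = 120 ps, overhead = 16.67%
    # 6-stage: cycle = 70 ps, overhead = 28.57%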

– 10 – Data Hazards and Control Hazards
Make the pipelined processor work!
Data Hazards
- An instruction having register R as a source follows shortly after an instruction having register R as a destination
- Common condition; we don't want to slow down the pipeline
Control Hazards
- Mispredicted conditional branch: our design predicts all branches as taken, so the naïve pipeline executes two extra instructions
- Getting the return address for the ret instruction: the naïve pipeline executes three extra instructions

– 11 – Data Dependencies
System
- Each operation depends on the result from the preceding one
[Diagram: unpipelined system with a feedback path; OP1-OP3 execute strictly in sequence]

– 12 – Data Hazards
- Result does not feed back around in time for the next operation
- Pipelining has changed the behavior of the system
[Diagram: 3-stage pipeline with a feedback path; OP1-OP4 overlap, so an operation begins before its predecessor's result is available]

– 13 – Data Dependencies in Processors
- Result from one instruction used as an operand for another: read-after-write (RAW) dependency
- Very common in actual programs
- Must make sure our pipeline handles these properly
  - Get correct results
  - Minimize performance impact

    irmovl $50, %eax
    addl   %eax, %ebx
    mrmovl 100(%ebx), %edx

– 14 – SEQ Hardware
- Stages occur in sequence; one operation in process at a time
Key (for the hardware diagram)
- Blue boxes: predesigned hardware blocks (e.g., memories, ALU)
- Gray boxes: control logic, described in HCL
- White ovals: labels for signals
- Thick lines: 32-bit word values
- Thin lines: 4-8 bit values
- Dotted lines: 1-bit values

– 15 – SEQ+ Hardware
- Still a sequential implementation
- Reorder the PC stage to put it at the beginning
PC Stage
- Task is to select the PC for the current instruction
- Based on results computed by the previous instruction
Processor State
- PC is no longer stored in a register
- But the PC can be determined from other stored information

– 16 – Adding Pipeline Registers

– 17 – Pipeline Stages
Fetch
- Select current PC
- Read instruction
- Compute incremented PC
Decode
- Read program registers
Execute
- Operate ALU
Memory
- Read or write data memory
Write Back
- Update register file

– 18 – PIPE- Hardware
- Pipeline registers hold intermediate values from instruction execution
Forward (Upward) Paths
- Values passed from one stage to the next
- Cannot jump past stages (e.g., valC passes through decode)

– 19 – Feedback Paths
- Predicted PC: guess the value of the next PC
- Branch information: jump taken/not-taken; fall-through or target address
- Return point: read from memory
- Register updates: to the register file write ports

– 20 – Predicting the PC
- Start fetching a new instruction after the current one has completed the fetch stage
- Not enough time to reliably determine the next instruction
- Guess which instruction will follow
- Recover if the prediction was incorrect

– 21 – Our Prediction Strategy
Instructions that don't transfer control
- Predict next PC to be valP
- Always reliable
Call and unconditional jumps
- Predict next PC to be valC (destination)
- Always reliable
Conditional jumps
- Predict next PC to be valC (destination)
- Only correct if the branch is taken
- Typically right about 60% of the time
Return instruction
- Don't try to predict
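The fetch stage implements this strategy when it selects the predicted PC. A minimal Python sketch of the rule, using made-up icode names (the book expresses this selection in HCL, not Python):

    # Predict the next PC from the instruction just fetched.
    CALL, JXX, RET = "call", "jXX", "ret"   # hypothetical icode names

    def predict_pc(icode, val_c, val_p):
        if icode in (CALL, JXX):
            return val_c   # target: always right for call/jmp, right for a
                           # conditional jump only when it is taken
        return val_p       # fall-through PC; ret's target is not predicted,
                           # the pipeline recovers later instead

    # A jump encoded in 5 bytes at address 0x000 with target 0x011:
    print(hex(predict_pc(JXX, val_c=0x011, val_p=0x005)))   # 0x11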

– 22 – Recovering from PC Misprediction
Mispredicted Jump
- Will see the branch flag once the instruction reaches the memory stage
- Can get the fall-through PC from valA
Return Instruction
- Will get the return PC when ret reaches the write-back stage

– 23 – Pipeline Demonstration
File: demo-basic.ys

    irmovl $1,%eax    # I1   F D E M W
    irmovl $2,%ecx    # I2     F D E M W
    irmovl $3,%edx    # I3       F D E M W
    irmovl $4,%ebx    # I4         F D E M W
    halt              # I5           F D E M W

Cycle 5: I1 in write-back (W), I2 in memory (M), I3 in execute (E), I4 in decode (D), I5 in fetch (F)
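With no hazards, each instruction simply enters the pipeline one cycle behind its predecessor. A small Python sketch (illustrative only, not from the slides) that prints a staggered diagram like the one above:

    # Print a five-stage pipeline diagram for straight-line code.
    STAGES = "FDEMW"

    def pipeline_diagram(instructions):
        for i, insn in enumerate(instructions):
            timeline = " " * (2 * i) + " ".join(STAGES)
            print(f"{insn:<18}# I{i + 1}  {timeline}")

    pipeline_diagram(["irmovl $1,%eax", "irmovl $2,%ecx", "irmovl $3,%edx",
                      "irmovl $4,%ebx", "halt"])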

– 24 – Data Dependencies: 3 Nop’s

– 25 – Data Dependencies: 2 Nop’s

– 26 – Data Dependencies: 1 Nop

– 27 – Data Dependencies: No Nop
# demo-h0.ys

    0x000: irmovl $10,%edx    F D E M W
    0x006: irmovl $3,%eax       F D E M W
    0x00c: addl %edx,%eax         F D E M W
    0x00e: halt                     F D E M W

Cycle 4, error: addl reads valA ← R[%edx] = 0 and valB ← R[%eax] = 0 in decode, while the new values are still in flight (M_valE = 10 with M_dstE = %edx in the memory stage, e_valE = 3 with E_dstE = %eax in the execute stage).

– 28 – Stalling for Data Dependencies
- If an instruction follows too closely after one that writes a register, slow it down
- Hold the instruction back in the decode stage
- Dynamically inject a nop into the execute stage
[Pipeline trace for demo-h2.ys: irmovl $10,%edx; irmovl $3,%eax; nop; nop; addl %edx,%eax; halt. The addl is held in decode for one extra cycle (D D E M W) while a bubble takes its place in execute, and the halt behind it repeats its fetch (F F D E M W).]

– 29 – Stall Condition
Source Registers
- srcA and srcB of the current instruction in the decode stage
Destination Registers
- dstE and dstM fields
- Instructions in the execute, memory, and write-back stages
Special Cases
- Don't stall for register ID 15 (0xF): indicates the absence of a register operand
- Don't stall for a failed conditional move
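In this stall-only version of the pipeline (before forwarding is added), the decode stage compares its source registers against every destination still in flight. A minimal Python sketch of that test, assuming the register-ID convention described above:

    # Stall test: does the instruction in decode read a register that an
    # instruction in execute, memory, or write-back will still write?
    RNONE = 0xF   # register ID 15: no register operand

    def needs_stall(d_srcA, d_srcB, inflight_dsts):
        """inflight_dsts: dstE/dstM register IDs for the instructions in
        the execute, memory, and write-back stages."""
        sources = {r for r in (d_srcA, d_srcB) if r != RNONE}
        return any(dst in sources for dst in inflight_dsts if dst != RNONE)

    # Register IDs: %eax = 0, %edx = 2.  addl %edx,%eax is in decode while
    # irmovl $3,%eax is still in execute, so the pipeline must stall.
    print(needs_stall(d_srcA=2, d_srcB=0, inflight_dsts=[0, RNONE, RNONE]))  # True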

– 30 – Detecting Stall Condition
[Pipeline trace for demo-h2.ys, cycle 6: the irmovl $3,%eax in write-back has W_dstE = %eax and W_valE = 3, while the addl in decode has srcA = %edx and srcB = %eax; the match on %eax triggers the stall.]

– 31 – Stalling X3
[Pipeline trace for demo-h0.ys: with no nops between irmovl $3,%eax and addl %edx,%eax, the addl is held in decode for three extra cycles while three bubbles are injected into execute. The stall condition matches E_dstE = %eax in cycle 4, M_dstE = %eax in cycle 5, and W_dstE = %eax in cycle 6.]

– 32 – What Happens When Stalling?
- The stalling instruction is held back in the decode stage
- The following instruction stays in the fetch stage
- Bubbles are injected into the execute stage: like dynamically generated nops, they move through the later stages
[Diagram: cycles 4-8 of demo-h0.ys, showing addl %edx,%eax held in decode while bubbles advance through execute, memory, and write-back, and halt held in fetch]

– 33 – Implementing Stalling
Pipeline Control
- Combinational logic detects the stall condition
- Sets mode signals for how the pipeline registers should update
[Diagram: pipe control logic reading D_icode, E_icode, M_icode, E_dstM, d_srcA, d_srcB, e_Cnd, and the stat signals, and producing F_stall, D_stall, D_bubble, E_bubble, M_bubble, W_stall, and set_cc]

– 34 – Pipeline Register Modes
- Normal (stall = 0, bubble = 0): on the rising clock, output ← input (x is replaced by y)
- Stall (stall = 1, bubble = 0): on the rising clock, the output keeps its current value x; the input is ignored
- Bubble (stall = 0, bubble = 1): on the rising clock, output ← nop state
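A small Python model (an illustration, not the book's hardware description) of a pipeline register that obeys these three modes:

    # Pipeline register with normal / stall / bubble update modes.
    NOP_STATE = {"icode": "nop"}   # hypothetical reset value for a bubble

    class PipeReg:
        def __init__(self, initial):
            self.output = initial

        def rising_clock(self, new_input, stall=False, bubble=False):
            assert not (stall and bubble), "stall and bubble are exclusive"
            if bubble:
                self.output = dict(NOP_STATE)   # inject a nop
            elif not stall:
                self.output = new_input         # normal: load the input
            # stall: keep the current output and ignore the input
            return self.output

    reg = PipeReg({"icode": "irmovl"})
    print(reg.rising_clock({"icode": "addl"}, stall=True))    # irmovl kept
    print(reg.rising_clock({"icode": "addl"}, bubble=True))   # nop injected
    print(reg.rising_clock({"icode": "addl"}))                # addl loaded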

– 35 – Data Forwarding
Naïve Pipeline
- Register isn't written until completion of the write-back stage
- Source operands are read from the register file in the decode stage
- Value needs to be in the register file at the start of the stage
Observation
- Value is generated in the execute or memory stage
Trick
- Pass the value directly from the generating instruction to the decode stage
- Needs to be available at the end of the decode stage

– 36 – Data Forwarding Example
- irmovl in the write-back stage
- Destination value in the W pipeline register
- Forward as valB for the decode stage

– 37 – Bypass Paths
Decode Stage
- Forwarding logic selects valA and valB
- Normally from the register file
- Forwarding: get valA or valB from a later pipeline stage
Forwarding Sources
- Execute: valE
- Memory: valE, valM
- Write back: valE, valM

– 38 – Data Forwarding Example #2
Register %edx
- Generated by the ALU during the previous cycle
- Forward from the memory stage as valA
Register %eax
- Value just generated by the ALU
- Forward from the execute stage as valB

– 39 – Multiple Forwarding Choices
Which one should have priority?
- Match the serial semantics of the program
- Use the matching value from the earliest pipeline stage (the most recently issued instruction)

# demo-priority.ys
    0x000: irmovl $1, %eax
    0x006: irmovl $2, %eax
    0x00c: irmovl $3, %eax
    0x012: rrmovl %eax, %edx
    0x014: halt

In cycle 5, rrmovl is in decode while the three irmovl instructions are in execute (%eax ← 3), memory (%eax ← 2), and write-back (%eax ← 1); forwarding priority selects the execute-stage value, 3.
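A minimal Python sketch of this priority rule (illustrative only; the book's Sel+Fwd logic is written in HCL), assuming a candidate list ordered from the execute stage back to write-back:

    # Forward the value for a source register, preferring earlier stages.
    RNONE = 0xF

    def forward(src, reg_file, candidates):
        """candidates: (dst_register, value) pairs ordered execute, memory,
        write-back, i.e. newest instruction first."""
        if src == RNONE:
            return None
        for dst, value in candidates:
            if dst == src:
                return value            # earliest stage wins
        return reg_file[src]            # no match: read the register file

    # %eax (ID 0): execute holds 3, memory holds 2, write-back holds 1.
    print(forward(0, {0: 0}, [(0, 3), (0, 2), (0, 1)]))   # 3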

– 40 – Limitation of Forwarding
Load/use dependency
- Value is needed by the end of the decode stage in cycle 7
- Value is not read from memory until the memory stage of cycle 8

– 41 – Avoiding Load/Use Hazard
- Stall the using instruction for one cycle
- Can then pick up the loaded value by forwarding from the memory stage

– 42 – Detecting Load/Use Hazard

    Condition          Trigger
    Load/Use Hazard    E_icode in { IMRMOVL, IPOPL } && E_dstM in { d_srcA, d_srcB }

– 43 – Control for Load/Use Hazard
- Stall instructions in the fetch and decode stages
- Inject a bubble into the execute stage

    Condition          F      D      E       M       W
    Load/Use Hazard    stall  stall  bubble  normal  normal
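A Python sketch (illustrative; the actual control logic is specified in HCL) that combines the hazard test from the previous slide with the control actions in the table above:

    # Load/use hazard: detection plus per-register pipeline control actions.
    IMRMOVL, IPOPL = "mrmovl", "popl"   # hypothetical icode names
    RNONE = 0xF

    def load_use_hazard(E_icode, E_dstM, d_srcA, d_srcB):
        return E_icode in (IMRMOVL, IPOPL) and E_dstM in (d_srcA, d_srcB)

    def pipeline_control(E_icode, E_dstM, d_srcA, d_srcB):
        if load_use_hazard(E_icode, E_dstM, d_srcA, d_srcB):
            return {"F": "stall", "D": "stall", "E": "bubble",
                    "M": "normal", "W": "normal"}
        return dict.fromkeys("FDEMW", "normal")

    # mrmovl ...,%eax in execute while an instruction in decode reads %eax:
    print(pipeline_control(IMRMOVL, 0, d_srcA=0, d_srcB=RNONE))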

– 44 – Branch Misprediction Example
Should only execute the first 7 instructions

# demo-j.ys
    0x000: xorl %eax,%eax
    0x002: jne t               # Not taken
    0x007: irmovl $1, %eax     # Fall through
    0x00d: nop
    0x00e: nop
    0x00f: nop
    0x010: halt
    0x011: t: irmovl $3, %edx  # Target (should not execute)
    0x017: irmovl $4, %ecx     # Should not execute
    0x01d: irmovl $5, %edx     # Should not execute

– 45 – Branch Misprediction Trace
- Incorrectly execute two instructions at the branch target
- They are cancelled (bubbled) before they can modify the register file or memory

– 46 – Return Example
Requires lots of nops to avoid data hazards

# demo-ret.ys
    0x000: irmovl Stack,%esp   # Initialize stack pointer
    0x006: nop                 # Avoid hazard on %esp
    0x007: nop
    0x008: nop
    0x009: call p              # Procedure call
    0x00e: irmovl $5,%esi      # Return point
    0x014: halt
    0x020: .pos 0x20
    0x020: p: nop              # Procedure
    0x021: nop
    0x022: nop
    0x023: ret
    0x024: irmovl $1,%eax      # Should not be executed
    0x02a: irmovl $2,%ecx      # Should not be executed
    0x030: irmovl $3,%edx      # Should not be executed
    0x036: irmovl $4,%ebx      # Should not be executed
    0x100: .pos 0x100
    0x100: Stack:              # Stack: stack pointer

– 47 – Incorrect Return Example Incorrectly execute 3 instructions following ret

– 48 – Pipeline Summary (1)
Concept
- Break instruction execution into 5 stages
- Run instructions through in pipelined mode
Limitations
- Can't handle dependencies between instructions when instructions follow too closely
- Data dependencies: one instruction writes a register, a later one reads it
- Control dependencies: an instruction sets the PC in a way that the pipeline did not predict correctly (mispredicted branch and return)

– 49 – Pipeline Summary (2)
Fixing the Pipeline
Data Hazards
- Most handled by forwarding: no performance penalty
- Load/use hazard requires a one-cycle stall
Control Hazards
- Cancel instructions when a mispredicted branch is detected: two clock cycles wasted
- Stall the fetch stage while ret passes through the pipeline: three clock cycles wasted
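These penalties can be folded into an estimate of the average cycles per instruction. A short Python sketch in the spirit of the CS:APP performance analysis (the workload frequencies below are hypothetical, not from these slides):

    # CPI estimate: 1 base cycle plus penalty bubbles per instruction.
    def average_cpi(load_use_freq, mispredict_freq, ret_freq):
        # 1 bubble per load/use stall, 2 per mispredicted branch, 3 per ret
        return 1.0 + 1 * load_use_freq + 2 * mispredict_freq + 3 * ret_freq

    # e.g. 25% loads of which 20% cause a stall, 20% jumps mispredicted 40%
    # of the time, 2% returns (all hypothetical frequencies)
    print(round(average_cpi(0.25 * 0.20, 0.20 * 0.40, 0.02), 2))   # 1.27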