COMPSYS 304 Computer Architecture
Speculation & Branching
Morning visitors - Paradise Bay, Bay of Islands

Speculation - High Tech Gambling?
Data Prefetch
Cache instruction dcbt : data cache block touch
  Attempts to bring data into cache so that it will be "close" when needed
  Allows the SIU (system interface unit) to use idle bus bandwidth: if there's no spare bandwidth, this read can be given low priority
  Speculative because a branch may occur before the data is used: we speculate that this data may be needed
dcbt is a PowerPC mnemonic; similar opcodes are found in other architectures: SPARC v9, MIPS, …

Speculation - General
Some functional units are almost always idle
  Make them do some (possibly useful) work rather than sit idle
  If the speculation was incorrect, the results are simply abandoned
  No loss in efficiency; a chance of a gain
Researchers are actively looking at software prefetch schemes (see the sketch below)
  Fetch data well before it's needed
  Reduce latency when it's actually needed
Speculative operations have low priority and use idle resources
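As a concrete illustration of a software prefetch scheme (not from the original slides): GCC and Clang expose a portable prefetch hint, __builtin_prefetch, which compiles to an instruction like dcbt where the target has one. The prefetch distance of 16 elements is an arbitrary tuning assumption.

#include <stddef.h>

/* Sum an array, hinting that data a few iterations ahead should be
   fetched into cache. The hint is advice only: if the bus is busy,
   the hardware is free to delay or drop it. */
#define PREFETCH_AHEAD 16   /* tuning assumption, workload dependent */

long sum(const long *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            /* args: address, 0 = read, 1 = low temporal locality */
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], 0, 1);
        total += a[i];
    }
    return total;
}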

Branching - Expensive
2-3 cycles lost in the pipeline
  All instructions following the branch are 'flushed'
  Bandwidth is wasted fetching unused instructions
  Stall while the branch target is fetched
We can speculate about the target of a branch
Terminology
  Branch Target : address to which the branch jumps
  Branch Taken : control transfers to a non-sequential address (the target)
  Branch Not Taken : the next sequential instruction is executed
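To put rough numbers on this (illustrative figures, not measurements): if 20% of instructions are branches and each one costs 2 lost cycles, a pipeline that would otherwise sustain a CPI of 1.0 degrades to 1.0 + 0.2 x 2 = 1.4, a 40% slowdown from branches alone. This is why branch speculation is worth the hardware it costs.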

Branching - Prediction
Branches can be
  unconditional: the branch is always taken
    call subroutine
    return from subroutine
  conditional: the branch depends on the state of the computation, eg has the loop terminated yet?
Unconditional branches are simple
  New instructions are fetched as soon as the branch is recognized, as early in the pipeline as possible
  Branch units are often placed with the fetch & decode stages

Branching - Branch Unit
[Figure: PowerPC 603 logical layout]

Branching - Speculation
We have the following code:
  if ( cond ) s1; else s2;
On a superscalar machine with multiple functional units, start executing both branches ( s1 and s2 )
  Keeps idle functional units busy!
  One path is speculative and will be abandoned
  The processor will eventually calculate the branch condition and select which result should be retained (written back)
MIPS R10000: up to 4 speculative branches at once

Branching - Speculation
MIPS R10000: up to 4 speculative branches at once
  Instructions are "tagged" with a 4-bit mask indicating to which branch instruction they belong
  As soon as the condition is determined, mis-predicted instructions are aborted
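A software analogue of "execute both, keep one" (a sketch, not how the hardware is actually wired): compute both arms unconditionally and select by the condition afterwards, much as a compiler does when it emits a conditional move instead of a branch.

/* Both "arms" run; the condition, evaluated last, picks the survivor.
   In a superscalar CPU the two arms would occupy different functional
   units during the same cycles. */
int select_abs(int x)
{
    int s1 = -x;            /* "taken" arm                         */
    int s2 =  x;            /* "not taken" arm                     */
    int cond = (x < 0);     /* branch condition                    */
    return cond ? s1 : s2;  /* retain one result, squash the other */
}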

Branching - Prediction
We have a sequence of instructions:

     add        ; some mixture of arithmetic, load,
     lw         ; store, etc, instructions
     sub
     brne L1    ; branch on some condition
L2:  or         ; some more arithmetic, load,
     st         ; store, etc, instructions

If you were asked to guess which branch outcome should be preferred, which would you choose?
  The next sequential instruction ( L2 )?
  The branch target ( L1 )?

Branching - Prediction
Studies show that backward branches are taken most of the time!
Because of loops:

L1:  add        ; any mix of arith,
     lw         ; load, store, etc,
     sub        ; instructions
     brne L1    ; branch back to loop start
L2:  or         ; some more arith,
     st         ; memory, etc, instructions

Branching - Prediction Rule
A simple prediction rule:
  Take backward branches
works amazingly well!
For a loop with n iterations, the loop-closing branch is taken n-1 times and falls through only once, so this rule is wrong in only 1/n of cases!
A system working on this rule alone would detect the backward branch and start fetching from the branch target rather than the next sequential instruction
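The 1/n claim is easy to check with a sketch (hypothetical code, just to make it concrete): the loop-closing branch below is taken n-1 times and falls through once, so "backward means taken" mispredicts exactly once.

#include <stdio.h>

int main(void)
{
    int n = 100, mispredicts = 0;
    for (int i = 0; i < n; i++) {
        int taken = (i < n - 1);  /* does the branch go back to the top? */
        int predicted_taken = 1;  /* rule: backward branch => taken      */
        if (taken != predicted_taken)
            mispredicts++;
    }
    printf("wrong %d time(s) in %d: rate 1/%d\n", mispredicts, n, n);
    return 0;
}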

Branching - Improving the prediction
Static prediction systems
  The compiler can mark branches as likely to be taken or not
  The instruction fetch unit will use the marking as advice on which instruction to fetch
  The compiler is often able to give the right advice
    Loops are easily detected
    Other patterns in conditions can be recognized
      Checking for EOF when reading a file
      Error checking
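In C, this static advice can also come from the programmer. GCC and Clang provide __builtin_expect; the likely/unlikely macros below are local definitions in the style the Linux kernel uses (a sketch; the exact effect depends on the target's branch-hint support):

#include <stddef.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Error checking: the error path is marked unlikely, so the compiler
   lays out the common case as straight-line fall-through code. */
int process(const int *buf, int n)
{
    if (unlikely(buf == NULL))
        return -1;
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}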

Branching - Improving the prediction
Dynamic prediction systems
  Program history determines the most likely branch
  Branch Target Buffers - another cache!

Branching - Branch Target Buffer
Instruction Addr[11:3] selects the BTB entry
  The tag determines a "hit"
  Status bits select taken / not taken
Pentium 4: >91% prediction accuracy with a 4K-entry BHT (Branch History Table)
G4e: 2K entries

Branching - Branch Target Buffer
BTB - just another cache
  Works on the temporal locality principle: if this branch is taken (not taken) now, it's likely to be taken (not taken) next time
  Replace on conflicts (newest is best)
  Any cache organization could be used: direct mapped, associative, set-associative
  No write-back needed: flushed entries are simply restored (re-learned) when the branch executes again
Major difference from other caches: the status bits …
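A minimal sketch of a direct-mapped BTB lookup, assuming the 512 entries implied by the Addr[11:3] index on the earlier slide; the field widths and entry count are illustrative, not taken from any particular processor:

#include <stdint.h>
#include <stdbool.h>

#define BTB_ENTRIES 512          /* Addr[11:3] = 9 index bits */

typedef struct {
    bool     valid;
    uint32_t tag;                /* upper PC bits, detects conflicts */
    uint32_t target;             /* predicted branch target address  */
    bool     pred_taken;         /* current taken/not-taken guess    */
    uint8_t  count;              /* status bits (see next slides)    */
} BTBEntry;

static BTBEntry btb[BTB_ENTRIES];

/* Returns true on a hit; on a miss the fetch unit just uses PC+4. */
bool btb_lookup(uint32_t pc, uint32_t *target, bool *predict_taken)
{
    uint32_t index = (pc >> 3) & (BTB_ENTRIES - 1);   /* Addr[11:3] */
    BTBEntry *e = &btb[index];
    if (!e->valid || e->tag != (pc >> 12))
        return false;
    *target        = e->target;
    *predict_taken = e->pred_taken;
    return true;
}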

Branching - Branch Target Buffer
Status bits
  Provide hysteresis in behaviour: without hysteresis, any change in behaviour would cause the prediction to update immediately
Example:
  if ( cond ) s1; else s2;
If the program takes branch s1 a few times, the BTB will predict that s1 is more likely than s2
If s2 is then taken, usual cache behaviour suggests that the prediction should be updated to s2, but program branching behaviour is a little different …

Branching - Branch Target Buffer
Status bits
Common branch behaviour looks like this
  List of taken branches: s1 s1 s1 s1 s1 s2 s1 s1 s1 s2 s1 …
  Usually s1 is executed, occasionally s2
    eg s2 handles errors, or s2 follows a loop
'Standard' cache update policies (which assume the most recent entry will be used next) would update the prediction from s1 to s2 immediately
This would cause many mis-predictions

Branching - Branch Target Buffer
Status bits
However, if the BTB waits until it has seen s2 a number of times before changing the prediction, the previous stream is predicted well
So the status bits (say 2 bits) are a count of the number of correct predictions
  A correct prediction increments the count (saturating at 2, ie it counts to a maximum of 2)
  A mis-prediction decrements the count
  A mis-prediction when count=0 updates the prediction
This accommodates an occasional break from a pattern (eg s1 is usually taken) without disturbing the best prediction (take s1 )
It also handles situations where behaviour changes from time to time
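A sketch of that update rule in C (the field names are my own; the slide does not fix an encoding):

#include <stdbool.h>

/* Status bits = a saturating count of correct predictions.
   The prediction only flips on a mis-prediction that arrives
   when the count has already been worn down to 0. */
typedef struct {
    bool pred_taken;   /* current prediction           */
    int  count;        /* confidence, saturating at 2  */
} Status;

void update(Status *s, bool actually_taken)
{
    if (actually_taken == s->pred_taken) {
        if (s->count < 2)
            s->count++;                      /* correct: gain confidence    */
    } else if (s->count > 0) {
        s->count--;                          /* tolerate the odd exception  */
    } else {
        s->pred_taken = actually_taken;      /* finally change the guess    */
    }
}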

Branching - Branch Target Buffer
Status bits - count correct predictions
Handles situations where behaviour changes from time to time
  Programs which move from one 'region' to another
  eg Image processing code looking for an orange object: it processes background (non-orange) pixels, finds the orange thing, counts orange pixels for a while, then reverts to background

// search for orange object in row of pixels
for ( j = 0; j < width; j++ ) {
    if ( pixel[j].colour != orange )   // s1
        bg_cnt++;
    else {                             // s2
        o_cnt++;
        if ( o_cnt > obj_width ) …     // found it!
    }
}

Branching - Branch Target Buffer
Status bits - count correct predictions
For the image processing example above, moving from background (BG) to the orange region (OR) and back, the list of taken branches looks like this:

Taken branch:  s1  s1  s1  s2  s2  s2  …  s2  s1  s1  s1  s1
Region:        BG  BG  BG  OR  OR  OR  …  OR  BG  BG  BG  BG
Prediction:    s1  s1  s1  s1  s1  s2  …  s2  s2  s2  s1  s1
Correct:       ✓   ✓   ✓   ✗   ✗   ✓   …  ✓   ✗   ✗   ✓   ✓

Branching - Branch Target Buffer
Status bits - count correct predictions
A reasonable compromise behaviour for most situations
  Tolerates an occasional 'error' branch well
  Changes to a new behaviour with a small delay
Typically about 90% correct predictions with a BTB of 2K - 4K entries

Speculation & Branching - Summary
Data speculation
  Try to bring data 'closer' to the CPU (ie into cache) before it's needed
  Reduces memory access latency
  Techniques
    Special 'touch' instructions - advice to the processor: fetch if resources are available
    Software - eg a dummy reference (see the sketch below)
Instruction (Branch) speculation ..
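For completeness, a sketch of the "dummy reference" trick (illustrative only; prefetch intrinsics like __builtin_prefetch express the same intent more directly): reading a location early and discarding the value pulls its cache line in before the real use. The volatile qualifier stops the compiler deleting the apparently useless load.

void touch(const void *p)
{
    /* Dummy read: the value is thrown away; the cache line fill
       is the whole point. */
    volatile const char *c = (volatile const char *)p;
    (void)*c;
}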

Speculation & Branching - Summary
Branches are expensive!!
Instruction (Branch) speculation
  Execute both paths of a conditional branch
  'Squash' (abandon) the results from the wrong path when the branch condition is eventually evaluated
  The compiler can also mark the most probable branch
Branch prediction
  Simplest rule: take backward branches
  Branch Target Buffer
    A cache containing the most recent branch targets
    A 'standard' cache, except for the status bits, which
      introduce hysteresis into the behaviour
      only update the prediction when it's definitely the right choice

Superscalar - Summary
Superscalar machines have multiple functional units (FUs)
  eg 2 x integer ALU, 1 x FPU, 1 x branch, 1 x load/store
Requires a complex IFU (instruction fetch unit)
  Able to issue multiple instructions/cycle (typically 4)
  Able to detect hazards (unavailability of operands)
  Able to re-order instruction issue
  Aims to keep all the FUs busy
Typically, 6-way superscalars can achieve an instruction-level parallelism of 2-3