EECS 583 – Lecture 15
Machine Information, Scheduling a Basic Block
University of Michigan
March 5, 2003

- 1 - Machine Information
v Each step of code generation requires knowledge of the machine
  » Hard code it? – used to be common practice
  » If you want retargetability, you cannot hard code it
v What does the code generator need to know about the target processor?
  » Structural information? No
  » For each opcode
    - What registers can be accessed as each of its operands
    - Other operand encoding limitations
  » Operation latencies
    - When it reads its inputs and writes its outputs
  » Resources utilized
    - Which ones, and when

- 2 - Machine Description (mdes)
v The Elcor mdes supports a very general class of EPIC processors
  » Probably more general than you need
  » Weakness – it does not support ISA changes the way GCC does
v Terminology
  » Generic opcode
    - Virtual opcode; the machine may support k versions of it
    - Ex: ADD_W
  » Architecture opcode (also called unit-specific opcode or sched opcode)
    - A specific assembly operation of the processor
    - Ex: ADD_W.0 = add on function unit 0
v Each unit-specific opcode has 3 properties
  » IO format
  » Latency
  » Resource usage

- 3 - IO Format
v Registers, register files
  » Number, width, static or rotating
  » Read-only (hardwired 0) or read-write
v Operation
  » Number of sources/dests
  » Predicated or not
  » For each source/dest/pred
    - What register file(s) can be read/written
    - Are literals allowed, and if so, how big?
v Multicluster machine example:
  ADD_W.0    gpr1, gpr1 : gpr1
  ADD_W_L.0  gpr1, lit6 : gpr1
  ADD_W.1    gpr2, gpr2 : gpr2
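To make the IO format concrete, here is a minimal sketch in Python of how this information might be represented inside a compiler. The class and field names are illustrative assumptions; this is not the actual Elcor mdes syntax.

from dataclasses import dataclass

@dataclass(frozen=True)
class RegFile:
    name: str            # e.g. "gpr1"
    num_regs: int
    width: int
    rotating: bool = False

@dataclass(frozen=True)
class IOFormat:
    sources: tuple       # allowed register file(s) or literal size per source
    dests: tuple         # allowed register file(s) per destination
    predicated: bool = False

gpr1 = RegFile("gpr1", num_regs=64, width=32)
gpr2 = RegFile("gpr2", num_regs=64, width=32)

# The unit-specific opcodes from the multicluster example above
io_formats = {
    "ADD_W.0":   IOFormat(sources=(gpr1, gpr1), dests=(gpr1,)),
    "ADD_W_L.0": IOFormat(sources=(gpr1, "lit6"), dests=(gpr1,)),  # 6-bit literal
    "ADD_W.1":   IOFormat(sources=(gpr2, gpr2), dests=(gpr2,)),
}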

- 4 - Latency Information
v Multiply takes 3 cycles
  » No, it is not that simple!!!
v Differential input/output latencies
  » Earliest read latency for each source operand
  » Latest read latency for each source operand
  » Earliest write latency for each destination operand
  » Latest write latency for each destination operand
v Why all this? Unexpected events may make operands arrive late or be produced early
v Compound op: part may finish early or start late
v Instruction re-execution by
  » Exception handlers
  » Interrupt handlers
v Ex: mpyadd(d1, d2, s1, s2, s3)
  » d1 = s1 * s2, d2 = d1 + s3
  » Earliest/latest (E/L) latencies:
    - s1: 0/2, s2: 0/2, s3: 2/2
    - d1: 2/3, d2: 2/4
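The earliest/latest pairs translate naturally into a small per-operand table. A sketch using the mpyadd example; the OperandLatency class is a hypothetical representation, but the E/L values are the ones given above.

from dataclasses import dataclass

@dataclass(frozen=True)
class OperandLatency:
    earliest: int   # earliest cycle the operand may be read/written
    latest: int     # latest cycle the operand may be read/written

mpyadd_latency = {
    "s1": OperandLatency(0, 2),   # multiply inputs may be sampled in cycles 0..2
    "s2": OperandLatency(0, 2),
    "s3": OperandLatency(2, 2),   # the dependent add consumes s3 at cycle 2
    "d1": OperandLatency(2, 3),   # multiply result
    "d2": OperandLatency(2, 4),   # add result
}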

- 5 - Memory Serialization Latency
v Ensures the proper ordering of dependent memory operations
v Not the memory latency
  » Rather, the point in the memory pipeline where 2 ops are guaranteed to be processed in sequential order
v On a page fault the memory op is re-executed, so we need
  » Earliest memory serialization latency
  » Latest memory serialization latency
v Remember
  » The compiler will use this: any 2 memory ops that cannot be proven independent must be separated by the memory serialization latency

- 6 - Branch Latency
v The time, relative to the initiation of a branch, at which the target of the branch is initiated
v What about branch prediction?
  » It can reduce branch latency
  » But it may not make it 1
v We will assume branch latency is 1 for this class (i.e., no delay slots!)
v Example:
  0: branch
  1: xxx
  2: yyy
  3: target
  » branch latency = k (here 3), delay slots = k – 1 (here 2)
  » Note: xxx and yyy are MultiOps

- 7 - Resources
v A machine resource is any aspect of the target processor for which over-subscription is possible if it is not explicitly managed by the compiler
  » The scheduler must pick conflict-free combinations
v 3 kinds of machine resources
  » Hardware resources are hardware entities that are occupied or used during the execution of an opcode
    - Integer ALUs, pipeline stages, register ports, busses, etc.
  » Abstract resources are conceptual entities used to model operation conflicts or sharing constraints that do not directly correspond to any hardware resource
    - Ex: sharing an instruction field
  » Counted resources are identical resources of which some number k is required to do something
    - Ex: any 2 input busses

- 8 - Reservation Tables
v For each opcode, the resources used at each cycle relative to its initiation time are specified in the form of a table
v Res1 and Res2 are abstract resources used to model issue constraints

Integer add:
              time 0  time 1
  Res1          X
  Res2
  ALU           X
  MPY
  Resultbus     X

Non-pipelined multiply:
              time 0  time 1
  Res1
  Res2          X
  ALU
  MPY           X       X
  Resultbus             X

Load (uses the ALU for address calculation, so a load cannot issue with an add or a multiply):
              time 0  time 1
  Res1          X
  Res2          X
  ALU           X
  MPY
  Resultbus             X
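A reservation table can be encoded as a set of (relative_cycle, resource) pairs. The sketch below, in Python, encodes the three tables above under that convention; the encoding is one convenient choice, not the actual mdes representation, and the X placement mirrors the reconstruction shown here.

RES_TABLES = {
    # Integer add: one issue slot, the ALU, and the result bus in its cycle
    "add":  {(0, "Res1"), (0, "ALU"), (0, "Resultbus")},
    # Non-pipelined multiply: holds the multiplier for both cycles
    "mpy":  {(0, "Res2"), (0, "MPY"), (1, "MPY"), (1, "Resultbus")},
    # Load: needs the ALU for address calculation and both abstract issue
    # resources, so it cannot issue alongside an add or a multiply
    "load": {(0, "Res1"), (0, "Res2"), (0, "ALU"), (1, "Resultbus")},
}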

- 9 - Now, Let's Get Back to Scheduling…
v Scheduling constraints
  » What limits the operations that can be concurrently executed or reordered?
  » Processor resources – modeled by the mdes
  » Dependences between operations
    - Data, memory, control
v Processor resources
  » Manage them using a resource usage map (RU_map)
    - Records when each resource will be used by already-scheduled ops
  » To consider an operation at time t
    - Check that each resource in its reservation table is free
  » To schedule an operation at time t
    - Update the RU_map by marking the resources used by the op as busy
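A minimal sketch of the check-and-reserve protocol just described, assuming the RU_map is a set of (absolute_cycle, resource) pairs and reservation tables are sets of (relative_cycle, resource) pairs as in the previous sketch. All names are illustrative.

# Reservation tables from the sketch above
ADD  = {(0, "Res1"), (0, "ALU"), (0, "Resultbus")}
LOAD = {(0, "Res1"), (0, "Res2"), (0, "ALU"), (1, "Resultbus")}

def can_schedule(ru_map, res_table, t):
    """True if every resource the op needs is free, relative to issue time t."""
    return all((t + dt, res) not in ru_map for dt, res in res_table)

def reserve(ru_map, res_table, t):
    """Mark the op's resources busy in the RU_map, relative to issue time t."""
    for dt, res in res_table:
        ru_map.add((t + dt, res))

ru_map = set()
reserve(ru_map, LOAD, 0)                   # schedule a load at cycle 0
assert not can_schedule(ru_map, ADD, 0)    # load holds Res1 and the ALU at cycle 0
assert not can_schedule(ru_map, ADD, 1)    # result busses would collide at cycle 1
assert can_schedule(ru_map, ADD, 2)        # cycle 2 is conflict-free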

- 10 - Data Dependences
v If 2 operations access the same register, they are dependent
v However, only keep dependences to the most recent producer/consumer; other edges are redundant
v Types of data dependences:
  » Flow:    r1 = r2 + r3   followed by   r4 = r1 * 6
  » Output:  r1 = r2 + r3   followed by   r1 = r4 * 6
  » Anti:    r1 = r2 + r3   followed by   r2 = r5 * 6
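The "most recent producer/consumer" rule can be implemented in a single pass over the block. A sketch that tracks, per register, the last writer and the readers since that write; encoding each op as a (dests, srcs) pair is an assumption made for illustration.

def register_deps(ops):
    """ops: list of (dests, srcs) register-name lists, in block order."""
    edges = []        # (pred_index, succ_index, kind)
    last_write = {}   # reg -> index of the most recent writer
    last_reads = {}   # reg -> indices of readers since that write
    for i, (dests, srcs) in enumerate(ops):
        for r in srcs:                       # flow: last writer -> this reader
            if r in last_write:
                edges.append((last_write[r], i, "flow"))
        for r in dests:
            if r in last_write:              # output: last writer -> this writer
                edges.append((last_write[r], i, "output"))
            for j in last_reads.get(r, []):  # anti: recent readers -> this writer
                edges.append((j, i, "anti"))
        for r in srcs:
            last_reads.setdefault(r, []).append(i)
        for r in dests:                      # a new write resets the reader list
            last_write[r] = i
            last_reads[r] = []
    return edges

# The three two-op examples above:
print(register_deps([(["r1"], ["r2", "r3"]), (["r4"], ["r1"])]))  # flow
print(register_deps([(["r1"], ["r2", "r3"]), (["r1"], ["r4"])]))  # output
print(register_deps([(["r1"], ["r2", "r3"]), (["r2"], ["r5"])]))  # anti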

- 11 - More Dependences
v Memory dependences
  » Similar to register dependences, but through memory
  » Memory dependences may be certain or maybe
  » Mem-flow:    store(r1, r2)   followed by   r3 = load(r1)
  » Mem-output:  store(r1, r2)   followed by   store(r1, r3)
  » Mem-anti:    r2 = load(r1)   followed by   store(r1, r3)
v Control dependences
  » We discussed these earlier
  » A branch determines whether an operation is executed or not
  » An operation must execute after/before a branch
  » Note: control flow (C0) is not a dependence
  » Control (C1) example: if (r1 != 0) then r2 = load(r1)

- 12 - Dependence Graph
v Represent dependences between operations in a block via a DAG
  » Nodes = operations
  » Edges = dependences
v A single-pass traversal is all that is required to insert the dependences
v Example:
  1: r1 = load(r2)
  2: r2 = r1 + r4
  3: store(r4, r2)
  4: p1 = cmpp(r2 < 0)
  5: branch if p1 to BB3
  6: store(r1, r2)
  BB3:

- 13 - Dependence Edge Latencies
v Edge latency = the minimum number of cycles necessary between initiation of the predecessor and initiation of the successor in order to satisfy the dependence
v Register flow dependence, a → b
  » Latest_write(a) – Earliest_read(b)
v Register anti dependence, a → b
  » Latest_read(a) – Earliest_write(b) + 1
v Register output dependence, a → b
  » Latest_write(a) – Earliest_write(b) + 1
v Negative latency
  » Possible; it means the successor can start before the predecessor
  » We will only deal with latency >= 0, so MAX any latency with 0

- 14 - Dependence Edge Latencies (2)
v Memory dependences, a → b (all types: flow, anti, output)
  » latency = latest_serialization_latency(a) – earliest_serialization_latency(b) + 1
  » Prioritized memory operations
    - The hardware orders memory ops by their order within a MultiOp
    - Latency can be 0 with this support
v Control dependences
  » branch → b
    - Op b cannot issue until the prior branch has completed
    - latency = branch_latency
  » a → branch
    - Op a must be issued before the branch completes
    - latency = 1 – branch_latency (can be negative)
    - Conservatively: latency = MAX(0, 1 – branch_latency)
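The latency rules from this slide and the previous one translate directly into small helper functions. A sketch; the dictionary keys (latest_write, earliest_read, and so on) are illustrative names for the per-operand E/L values of each op.

def flow_latency(a, b):      # a writes a register that b reads
    return max(0, a["latest_write"] - b["earliest_read"])

def anti_latency(a, b):      # a reads a register that b overwrites
    return max(0, a["latest_read"] - b["earliest_write"] + 1)

def output_latency(a, b):    # a and b write the same register
    return max(0, a["latest_write"] - b["earliest_write"] + 1)

def mem_latency(a, b):       # memory flow/anti/output dependences
    return max(0, a["latest_ser"] - b["earliest_ser"] + 1)

def branch_to_op_latency(branch_latency):   # branch -> b
    return branch_latency

def op_to_branch_latency(branch_latency):   # a -> branch, clamped at 0
    return max(0, 1 - branch_latency)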

- 15 - Class Problem
v Code:
  r1 = load(r2)
  r2 = r2 + 1
  store(r8, r2)
  r3 = load(r2)
  r4 = r1 * r3
  r5 = r5 + r4
  r2 = r6 + 4
  store(r2, r5)
v Machine model – min/max (earliest/latest) read/write latencies:
           src    dst    sync
  add:     0/1    1/1    –
  mpy:     0/2    2/3    –
  load:    0/0    2/2    1/1
  store:   0/0    –      1/1
v 1. Draw the dependence graph
v 2. Label the edges with type and latency

- 16 - Dependence Graph Properties – Estart
v Estart = earliest start time (as soon as possible – ASAP)
  » Determines the schedule length with infinite resources (dependence height)
  » Estart = 0 if the node has no predecessors
  » Estart = MAX(Estart(pred) + latency) over all predecessor nodes

- 17 - Lstart
v Lstart = latest start time (as late as possible – ALAP)
  » The latest time a node can be scheduled such that the schedule length is not increased beyond the infinite-resource schedule length
  » Lstart = Estart if the node has no successors
  » Lstart = MIN(Lstart(succ) – latency) over all successor nodes

- 18 - Slack
v Slack = a measure of scheduling freedom
  » Slack = Lstart – Estart for each node
  » Larger slack means more mobility
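Estart, Lstart, and slack fall out of one forward and one backward sweep over the DAG. A sketch, assuming nodes are numbered in topological order (true for straight-line code numbered in block order) and edges are (pred, succ, latency) triples:

def estart_lstart_slack(n, edges):
    """n nodes (ids 0..n-1, topologically ordered); edges: (pred, succ, latency)."""
    estart = [0] * n
    for p, s, lat in sorted(edges, key=lambda e: e[1]):    # forward pass (ASAP)
        estart[s] = max(estart[s], estart[p] + lat)
    has_succ = {p for p, _, _ in edges}
    # Lstart = Estart for nodes with no successors, per the slide
    lstart = [estart[i] if i not in has_succ else float("inf") for i in range(n)]
    for p, s, lat in sorted(edges, key=lambda e: -e[0]):   # backward pass (ALAP)
        lstart[p] = min(lstart[p], lstart[s] - lat)
    slack = [l - e for e, l in zip(estart, lstart)]
    return estart, lstart, slack

# Diamond DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3, with the slower path through node 1
e, l, s = estart_lstart_slack(4, [(0, 1, 2), (0, 2, 1), (1, 3, 2), (2, 3, 1)])
# e = [0, 2, 1, 4], l = [0, 2, 3, 4], s = [0, 0, 2, 0]: only node 2 has mobility

The critical operations of the next slide are then exactly the nodes with slack == 0 (here nodes 0, 1, and 3).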

- 19 - Critical Path
v Critical operations = operations with slack = 0
  » No mobility; they cannot be delayed without extending the schedule length of the block
v Critical path = a sequence of critical operations from a node with no predecessors to the exit node
  » There can be multiple critical paths

- 20 - Class Problem
v Fill in, for each node: Node / Estart / Lstart / Slack
v Critical path(s) =
[Dependence graph for this problem not recoverable from the transcript.]

- 21 - Operation Priority
v Priority – we need a mechanism to decide which ops to schedule first (when there are multiple choices)
v Common priority functions
  » Height – distance from the exit node
    - Gives priority to the amount of work left to do
  » Slackness – inversely proportional to slack
    - Gives priority to ops on the critical path
  » Register use – priority to nodes with more source operands and fewer destination operands
    - Reduces the number of live registers
  » Uncover – high priority to nodes with many children
    - Frees up more nodes
  » Original order – when all else fails

- 22 - Height-Based Priority
v Height-based priority is the most common
  » priority(op) = MaxLstart – Lstart(op)
[Example dependence graph with (Estart, Lstart) labels and the resulting op/priority table not recoverable from the transcript.]
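Given Lstart values (for example, from the Estart/Lstart sketch earlier), the height-based priority is a one-liner; a sketch:

def height_priority(lstart):
    """priority(op) = MaxLstart - Lstart(op), per the formula above."""
    max_l = max(lstart)
    return [max_l - ls for ls in lstart]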

- 23 - List Scheduling (Cycle Scheduler)
v Build the dependence graph and calculate priorities
v Add all ops to the UNSCHEDULED set
v time = -1
v while (UNSCHEDULED is not empty)
  » time++
  » READY = UNSCHEDULED ops whose incoming dependences have been satisfied
  » Sort READY using the priority function
  » For each op in READY (highest to lowest priority)
    - Can op be scheduled at the current time? (are its resources free?)
      - Yes: schedule it, op.issue_time = time
        - Mark its resources busy in the RU_map relative to the issue time
        - Remove op from the UNSCHEDULED/READY sets
      - No: continue
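Putting the pieces together, here is a compact Python sketch of the whole loop, reusing the (cycle, resource) RU_map convention from the earlier sketches. The op encoding and every name are illustrative assumptions, not the actual implementation.

def list_schedule(ops, edges, priority, res_tables):
    """ops: opcode name per op; edges: (pred, succ, latency);
    priority: one number per op; res_tables: opcode -> {(rel_cycle, resource)}."""
    preds = {i: [] for i in range(len(ops))}
    for p, s, lat in edges:
        preds[s].append((p, lat))
    issue = {}                      # op index -> issue time
    ru_map = set()                  # (absolute_cycle, resource) pairs in use
    unscheduled = set(range(len(ops)))
    time = -1
    while unscheduled:
        time += 1
        # READY = unscheduled ops whose incoming dependences are satisfied
        ready = [i for i in unscheduled
                 if all(p in issue and issue[p] + lat <= time
                        for p, lat in preds[i])]
        for i in sorted(ready, key=lambda op: -priority[op]):
            table = res_tables[ops[i]]
            if all((time + dt, r) not in ru_map for dt, r in table):
                issue[i] = time                        # schedule at this cycle
                ru_map |= {(time + dt, r) for dt, r in table}
                unscheduled.remove(i)
    return issue

An issue-width limit need not be special-cased in the loop: it can be modeled with abstract per-cycle resources, exactly like Res1/Res2 in the reservation-table sketch.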

- 24 - Cycle Scheduling Example
v Machine: 2 issue slots, 1 memory port, 1 ALU
  » Memory port = 2 cycles, non-pipelined
  » ALU = 1 cycle
[Dependence graph with (Estart, Lstart) labels, the op/priority table, and the RU_map (time vs. ALU/MEM) not recoverable from the transcript.]

- 25 - Cycle Scheduling Example (2)
[Continues the example with the RU_map (time vs. ALU/MEM) and a schedule table of time / READY set / placed ops; contents not recoverable from the transcript.]

- 26 - Cycle Scheduling Example (3)
[Final schedule table (time / READY / placed). Partially recoverable: successive READY sets {1,2,7}, {3,4,7}, {5,7,8}, with ops 1, 3, 5, then 6 placed first at each step; the remaining entries are lost in the transcript.]

- 27 - Class Problem
v Machine: 2 issue slots, 1 memory port, 1 ALU
  » Memory port = 2 cycles, pipelined
  » ALU = 1 cycle
v 1. Calculate the height-based priorities
v 2. Schedule using the cycle scheduler
[Dependence graph with (Estart, Lstart) labels for this problem not recoverable from the transcript.]