Computer Architecture Instruction-Level Parallel Processors

Presentation transcript:

Computer Architecture Instruction-Level Parallel Processors Ola Flygt Växjö University http://w3.msi.vxu.se/users/ofl/ Ola.Flygt@msi.vxu.se +46 470 70 86 49

Outline {Objective: executing two or more instructions in parallel} Evolution and overview of ILP-processors Dependencies between instructions Instruction scheduling Preserving sequential consistency The speed-up potential of ILP-processing CH04

Improve CPU performance by: increasing clock rates (running the CPU at ever higher frequencies), or increasing the number of instructions executed in parallel (executing 6-10 instructions at the same time). What is the limit for each of these?

How do we increase the number of instructions executed in parallel?

Time and Space parallelism

Pipeline (assembly line)

Result of pipeline (e.g.)

VLIW (very long instruction word, 1024 bits!)

Superscalar (sequential stream of instructions)

From Sequential instructions to parallel execution Dependencies between instructions Instruction scheduling Preserving sequential consistency

Dependencies between instructions Instructions often depend on each other in such a way that a particular instruction cannot be executed until a preceding instruction, or even two or three preceding instructions, have been executed. 1. Data dependencies 2. Control dependencies 3. Resource dependencies

Data dependencies Read after Write (RAW) Write after Read (WAR) Write after Write (WAW) Recurrences

Data dependencies in straight-line code (RAW) RAW dependencies: i1: load r1, a; i2: add r2, r1, r1. Also called flow dependencies or true dependencies; they cannot be eliminated.

Data dependencies in straight-line code (WAR) WAR dependencies: i1: mul r1, r2, r3; i2: add r2, r4, r5. Also called anti-dependencies; they are false dependencies and can be eliminated through register renaming (i2: add r6, r4, r5), by the compiler or by the ILP-processor.

Data dependencies in straight-line code (WAW) WAW dependencies: i1: mul r1, r2, r3; i2: add r1, r4, r5. Also called output dependencies; they are false dependencies and can be eliminated through register renaming (i2: add r6, r4, r5), by the compiler or by the ILP-processor.
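
To make register renaming concrete, here is a minimal C sketch of the idea (the three-operand instruction encoding, the register count and the pool of fresh "physical" names are illustrative assumptions, not taken from the slides): every write to an architectural register gets a fresh name, and sources are remapped to the most recent name of their register, so the false WAR/WAW dependencies disappear while the true RAW dependencies are preserved.

    /* Minimal register-renaming sketch (illustrative only).
     * Each architectural destination write gets a fresh name, so WAR and
     * WAW (false) dependencies disappear; RAW (true) dependencies survive
     * because sources are remapped to the most recent producer. */
    #include <stdio.h>

    #define NUM_ARCH_REGS 8

    typedef struct { int dest, src1, src2; } Instr;

    int main(void) {
        /* i1: mul r1, r2, r3   i2: add r1, r4, r5  (WAW on r1) */
        Instr prog[] = { {1, 2, 3}, {1, 4, 5} };
        int n = 2;

        int map[NUM_ARCH_REGS];            /* architectural reg -> current name */
        for (int r = 0; r < NUM_ARCH_REGS; r++) map[r] = r;
        int next_name = NUM_ARCH_REGS;     /* fresh names start here */

        for (int i = 0; i < n; i++) {
            int s1 = map[prog[i].src1];    /* read the current source names first */
            int s2 = map[prog[i].src2];
            map[prog[i].dest] = next_name++;   /* a new name for every write */
            printf("i%d: op p%d, p%d, p%d\n", i + 1, map[prog[i].dest], s1, s2);
        }
        return 0;
    }

Running it prints "i1: op p8, p2, p3" and "i2: op p9, p4, p5"; the two writes now target different names, so the WAW dependence is gone.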

Data dependencies in loops (recurrences) for (int i=2; i<10; i++) { x[i] = a*x[i-1] + b; } The iterations cannot be executed in parallel: each iteration reads the value the previous iteration wrote.
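
A small C sketch may help contrast this recurrence with a loop that has no loop-carried dependence (the constants a, b and the initial value x[1] are arbitrary, chosen only for illustration):

    /* The first loop carries a RAW dependence from one iteration to the
     * next; the second has independent iterations that a compiler or
     * ILP processor may overlap freely.  Constants are illustrative. */
    #include <stdio.h>

    int main(void) {
        double a = 2.0, b = 1.0, x[10], y[10];
        x[1] = 1.0;

        /* Loop-carried dependence: iteration i reads x[i-1], written by
         * the previous iteration, so the iterations must run in order. */
        for (int i = 2; i < 10; i++)
            x[i] = a * x[i - 1] + b;

        /* No loop-carried dependence: every iteration is independent. */
        for (int i = 2; i < 10; i++)
            y[i] = a * i + b;

        printf("%f %f\n", x[9], y[9]);
        return 0;
    }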

Data dependency graphs i1: load r1, a; i2: load r2, b; i3: add r3, r1, r2; i4: mul r1, r2, r4; i5: div r1, r2, r4. In the graph the nodes are the instructions; the edges i1→i3 and i2→i3 are RAW (t, true), i3→i4 is WAR (a, anti), and i4→i5 is WAW (o, output).
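
As a hedged illustration of how such a graph could be built mechanically, the sketch below classifies the dependence between every pair of the five instructions (the three-operand encoding and the use of -1 for memory operands are assumptions); because it checks all pairs, it reports a few more edges than the simplified graph on the slide.

    /* Classify RAW/WAR/WAW edges between the slide's five instructions.
     * -1 marks "no register operand" (the loads read from memory). */
    #include <stdio.h>

    typedef struct { const char *name; int dest, src1, src2; } Instr;

    static int reads(Instr x, int r) { return r >= 0 && (x.src1 == r || x.src2 == r); }

    int main(void) {
        Instr p[] = {
            {"i1: load r1, a",     1, -1, -1},
            {"i2: load r2, b",     2, -1, -1},
            {"i3: add r3, r1, r2", 3,  1,  2},
            {"i4: mul r1, r2, r4", 1,  2,  4},
            {"i5: div r1, r2, r4", 1,  2,  4},
        };
        int n = 5;
        for (int j = 1; j < n; j++)          /* later instruction   */
            for (int i = 0; i < j; i++) {    /* earlier instruction */
                if (reads(p[j], p[i].dest))
                    printf("%s -> %s : RAW (t)\n", p[i].name, p[j].name);
                if (reads(p[i], p[j].dest))
                    printf("%s -> %s : WAR (a)\n", p[i].name, p[j].name);
                if (p[i].dest == p[j].dest)
                    printf("%s -> %s : WAW (o)\n", p[i].name, p[j].name);
            }
        return 0;
    }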

Control dependencies mul r1, r2, r3; jz zproc; ... zproc: load r1, x. The actual path of execution depends on the outcome of the multiplication; the branch therefore imposes dependencies on the logically subsequent instructions.
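
A tiny C analogue of the snippet (with invented register values) shows why the load at zproc is control-dependent on the multiplication: whether it executes at all is known only once the result of the mul is known.

    /* Control dependence: the load runs only if the product is zero,
     * so it cannot safely execute before the branch outcome is known.
     * The register values are invented for illustration. */
    #include <stdio.h>

    int main(void) {
        int r2 = 3, r3 = 0, x = 42;
        int r1 = r2 * r3;          /* mul r1, r2, r3 */
        if (r1 == 0)               /* jz zproc       */
            r1 = x;                /* zproc: load r1, x */
        printf("r1 = %d\n", r1);
        return 0;
    }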

Control Dependency Graph

Branches: frequency and branch distance. Expected frequency of (all) branches: general-purpose (non-scientific) programs: 20-30%; scientific programs: 5-10%. Expected frequency of conditional branches: general-purpose programs: 20%. Expected branch distance (between two branch instructions): in general-purpose programs every 3rd-5th instruction, on average, is a conditional branch; in scientific programs every 10th-20th instruction.

Impact of Branch on instruction issue

Resource dependencies An instruction is resource-dependent on a previously issued instruction if it requires a hardware resource which is still being used by that previously issued instruction, e.g. div r1, r2, r3; div r4, r2, r5 (both need the division unit).
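
A minimal sketch of the resulting issue stall, assuming a single non-pipelined divider with an invented 8-cycle latency (both figures are assumptions for illustration): the second div cannot issue until the first one releases the unit.

    /* Resource (structural) dependence: two divides compete for one
     * non-pipelined divider; the 8-cycle latency is an assumption. */
    #include <stdio.h>

    #define DIV_LATENCY 8

    int main(void) {
        const char *prog[] = { "div r1, r2, r3", "div r4, r2, r5" };
        int divider_busy_until = 0;   /* cycle at which the divider frees up */
        int cycle = 0;

        for (int i = 0; i < 2; i++) {
            if (cycle < divider_busy_until)
                cycle = divider_busy_until;      /* stall the issue stage */
            printf("cycle %2d: issue %s\n", cycle, prog[i]);
            divider_busy_until = cycle + DIV_LATENCY;
            cycle++;                              /* next issue slot */
        }
        return 0;
    }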

Instruction scheduling: scheduling or arranging two or more instructions to be executed in parallel. It needs to detect code dependencies (detection) and to remove false dependencies (resolution). It is a means to extract parallelism: instruction-level parallelism, which is implicit, is made explicit. Two basic approaches: static, done by the compiler, and dynamic, done by the processor.
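
As a rough illustration of the static approach, the sketch below performs a very simplified list scheduling of the five-instruction example used earlier: each cycle it issues every instruction whose predecessors in the dependency graph have already completed. Unit latency, unlimited functional units and treating all dependence types alike are simplifying assumptions.

    /* Simplified list scheduling: issue every instruction whose graph
     * predecessors have completed (unit latency, unlimited units). */
    #include <stdio.h>

    #define N 5

    int main(void) {
        int dep[N][N] = {0};         /* dep[i][j] = 1: j depends on i */
        dep[0][2] = dep[1][2] = 1;   /* i3 needs i1 and i2 (RAW) */
        dep[2][3] = 1;               /* i4 after i3 (WAR)        */
        dep[3][4] = 1;               /* i5 after i4 (WAW)        */

        int done_cycle[N];
        for (int i = 0; i < N; i++) done_cycle[i] = -1;   /* not issued yet */

        for (int cycle = 0, left = N; left > 0; cycle++) {
            for (int j = 0; j < N; j++) {
                if (done_cycle[j] >= 0) continue;
                int ready = 1;
                for (int i = 0; i < N; i++)
                    if (dep[i][j] && (done_cycle[i] < 0 || done_cycle[i] >= cycle))
                        ready = 0;   /* a predecessor has not completed yet */
                if (ready) {
                    printf("cycle %d: issue i%d\n", cycle, j + 1);
                    done_cycle[j] = cycle;
                    left--;
                }
            }
        }
        return 0;
    }

It schedules i1 and i2 in cycle 0, i3 in cycle 1, i4 in cycle 2 and i5 in cycle 3, mirroring the dependency graph above.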

Instruction Scheduling:

ILP-instruction scheduling

Preserving sequential consistency Care must be taken to maintain the logical integrity of program execution: parallel execution must mimic sequential execution as far as the logical integrity of program execution is concerned. E.g. div r1, r2, r3; add r5, r6, r7; jz somewhere; (the jump instruction should depend on the add, even though the div takes longer to execute).
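
One common way to preserve this is to let instructions finish execution out of order but commit (retire) strictly in program order. The sketch below illustrates that idea for the slide's three instructions; the latencies are invented, and in-order retirement is only one possible mechanism, not necessarily the one the slides describe.

    /* In-order retirement: results may be produced out of order, but
     * instructions commit in program order, so the add and the jz are
     * processed after the div retires.  Latencies are illustrative. */
    #include <stdio.h>

    int main(void) {
        const char *name[] = { "div r1, r2, r3", "add r5, r6, r7", "jz somewhere" };
        int finish_cycle[] = { 10, 1, 2 };   /* execution completes out of order */
        int n = 3, head = 0;

        for (int cycle = 0; head < n; cycle++) {
            /* Retire only from the head of the instruction window. */
            while (head < n && finish_cycle[head] <= cycle) {
                printf("cycle %2d: commit %s\n", cycle, name[head]);
                head++;
            }
        }
        return 0;
    }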

Concept sequential consistency

The speed-up potential of ILP-processing Parallel instruction execution may be restricted by data, control and resource dependencies. Potential speed-up when parallel instruction execution is restricted by true data and control dependencies: general-purpose programs: about 2; scientific programs: about 2-4. Why are the speed-up figures so low? Because parallelism is extracted only within a basic block, which is a low-efficiency way of extracting it.

A Basic Block is a straight-line code sequence that can only be entered at the beginning and left at its end. Example: i1: calc: add r3, r1, r2; i2: sub r4, r1, r2; i3: mul r4, r3, r4; i4: jn negproc. Measured basic block lengths are 3.5-6.3 instructions, with an overall average of 4.9 (RISC: 7.8 for general-purpose and 31.6 for scientific code). Conditional branches → control dependencies.
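
The sketch below marks basic-block leaders (the first instruction, every branch target, and every instruction following a branch) for a small program built around the calc example; the two instructions after the jn are invented so the branch has a target.

    /* Split a small instruction list into basic blocks by marking leaders. */
    #include <stdio.h>

    #define N 6

    int main(void) {
        const char *prog[N] = {
            "calc: add r3, r1, r2",
            "sub r4, r1, r2",
            "mul r4, r3, r4",
            "jn negproc",
            "store r4, y",           /* invented fall-through code */
            "negproc: neg r4, r4",   /* invented branch target     */
        };
        int is_branch[N] = { 0, 0, 0, 1, 0, 0 };
        int is_target[N] = { 0, 0, 0, 0, 0, 1 };

        int leader[N] = {0};
        leader[0] = 1;                               /* first instruction */
        for (int i = 0; i < N; i++) {
            if (is_target[i]) leader[i] = 1;         /* a branch target   */
            if (is_branch[i] && i + 1 < N) leader[i + 1] = 1;  /* after a branch */
        }

        int block = 0;
        for (int i = 0; i < N; i++) {
            if (leader[i]) printf("--- basic block %d ---\n", ++block);
            printf("  %s\n", prog[i]);
        }
        return 0;
    }

Each block is only a few instructions long, which is exactly why parallelism confined to basic blocks yields such small speed-ups.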

Two other methods for speed-up: (1) The potential speed-up embodied in loops amounts to a figure of about 50 (average speed-up), assuming unlimited resources (about 50 processors and about 400 registers) and an ideal schedule. (2) Appropriate handling of control dependencies amounts to a 10 to 100 times speed-up, assuming perfect oracles that always pick the right path at each conditional branch. → Control dependencies are the real obstacle to utilizing instruction-level parallelism!

What do we do without a perfect oracle? Execute all possible paths of the conditional branches: there are 2^N paths for N conditional branches, so pursuing an exponentially increasing number of paths would be an unrealistic approach. Make your best guess: branch prediction. Or pursue both possible paths, but restrict the number of subsequent conditional branches (more in Ch. 8).
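
As a concrete "best guess" mechanism, here is a sketch of the classic 2-bit saturating-counter branch predictor (the initial counter state and the outcome sequence are invented; actual prediction schemes are the subject of Ch. 8).

    /* 2-bit saturating counter: states 0,1 predict not-taken, 2,3 predict
     * taken; the counter moves one step toward each actual outcome. */
    #include <stdio.h>

    int main(void) {
        int counter = 1;                               /* weakly not-taken */
        int outcomes[] = { 1, 1, 0, 1, 1, 1, 0, 1 };   /* 1 = taken        */
        int n = 8, hits = 0;

        for (int i = 0; i < n; i++) {
            int predict_taken = (counter >= 2);
            if (predict_taken == outcomes[i]) hits++;
            if (outcomes[i]) { if (counter < 3) counter++; }   /* saturate up   */
            else             { if (counter > 0) counter--; }   /* saturate down */
            printf("branch %d: predicted %s, actually %s\n", i,
                   predict_taken ? "taken" : "not taken",
                   outcomes[i]   ? "taken" : "not taken");
        }
        printf("accuracy: %d/%d\n", hits, n);
        return 0;
    }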

How close can real systems come to the upper limits of speed-up? Ambitious processors can expect to achieve speed-up figures of about 4 for general-purpose programs and 10-20 for scientific programs. Such an ambitious processor: predicts conditional branches, has 256 integer and 256 FP registers, eliminates false data dependencies through register renaming, performs perfect memory disambiguation, and maintains a gliding instruction window of 64 items.