CIS 501: Computer Architecture
Unit 11: Data-Level Parallelism: Vectors & GPUs
Slides developed by Joe Devietti, Milo Martin & Amir Roth at UPenn, with sources that included University of Wisconsin slides by Mark Hill, Guri Sohi, Jim Smith, and David Wood.

How to Compute This Fast?
Performing the same operations on many data items. Example: SAXPY

    for (I = 0; I < 1024; I++) {
        Z[I] = A*X[I] + Y[I];
    }

    L1: ldf  [X+r1]->f1    // I is in r1
        mulf f0,f1->f2     // A is in f0
        ldf  [Y+r1]->f3
        addf f2,f3->f4
        stf  f4->[Z+r1]
        addi r1,4->r1
        blti r1,4096,L1

- Instruction-level parallelism (ILP): fine grained
  - Loop unrolling with static scheduling, or dynamic scheduling
  - Wide-issue superscalar: (non-)scaling limits the benefits
- Thread-level parallelism (TLP): coarse grained
  - Multicore
- Can we do some "medium-grained" parallelism?

Data-Level Parallelism
- Data-level parallelism (DLP): single operation repeated on multiple data elements
  - SIMD (Single-Instruction, Multiple-Data)
  - Less general than ILP: parallel insns are all the same operation
- Exploit with vectors
- Old idea: Cray-1 supercomputer from the late 1970s
  - Eight 64-entry x 64-bit floating point "vector registers"
    - 4096 bits (0.5KB) in each register! 4KB for the vector register file
  - Special vector instructions to perform vector operations
    - Load vector, store vector (wide memory operations)
    - Vector+Vector or Vector+Scalar addition, subtraction, multiply, etc.
    - In Cray-1, each instruction specifies 64 operations!
    - ALUs were expensive, so one operation per cycle (not parallel)

Example Vector ISA Extensions (SIMD)
- Extend the ISA with floating point (FP) vector storage...
  - Vector register: fixed-size array of 32- or 64-bit FP elements
  - Vector length: for example 4, 8, 16, 64, ...
- ... and example operations for vector length of 4
  - Load vector: ldf.v [X+r1]->v1

        ldf [X+r1+0]->v1[0]
        ldf [X+r1+1]->v1[1]
        ldf [X+r1+2]->v1[2]
        ldf [X+r1+3]->v1[3]

  - Add two vectors: addf.vv v1,v2->v3

        addf v1[i],v2[i]->v3[i]   (where i is 0,1,2,3)

  - Add vector to scalar: addf.vs v1,f2->v3

        addf v1[i],f2->v3[i]      (where i is 0,1,2,3)

- Today's vectors: short (128 or 256 bits), but fully parallel
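The same 4-wide semantics can be written directly in C via GCC's vector extensions; a minimal sketch (the type and function names are ours, for illustration), where the compiler turns the whole-vector add into a single SIMD add on hardware that has one:

    /* 4 x 32-bit float, 16 bytes total */
    typedef float v4sf __attribute__ ((vector_size (16)));

    v4sf add4(v4sf a, v4sf b) {
        return a + b;   /* element-wise add: one addps on SSE hardware */
    }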

Example Use of Vectors: 4-wide
- Operations
  - Load vector: ldf.v [X+r1]->v1
  - Multiply vector by scalar: mulf.vs v1,f2->v3
  - Add two vectors: addf.vv v1,v2->v3
  - Store vector: stf.v v1->[X+r1]

    Scalar (7 x 1024 instructions):

        ldf  [X+r1]->f1
        mulf f0,f1->f2
        ldf  [Y+r1]->f3
        addf f2,f3->f4
        stf  f4->[Z+r1]
        addi r1,4->r1
        blti r1,4096,L1

    Vector (7 x 256 instructions, 4x fewer):

        ldf.v   [X+r1]->v1
        mulf.vs v1,f0->v2
        ldf.v   [Y+r1]->v3
        addf.vv v2,v3->v4
        stf.v   v4->[Z+r1]
        addi    r1,16->r1
        blti    r1,4096,L1

- Performance?
  - Best case: 4x speedup
  - But vector instructions don't always have single-cycle throughput
    - Execution width (implementation) vs. vector width (ISA)

Vector Datapath & Implementation
- Vector insns are just like normal insns... only "wider"
  - Single instruction fetch (no extra N² checks)
  - Wide register read & write (not multiple ports)
  - Wide execute: replicate the floating point unit (same as superscalar)
  - Wide bypass (avoids the N² bypass problem)
  - Wide cache read & write (single cache tag check)
- Execution width (implementation) vs. vector width (ISA)
  - Example: Pentium 4 and "Core 1" execute vector ops at half width
  - "Core 2" executes them at full width
- Because they are just instructions...
  - ...superscalar execution of vector instructions is possible
  - Multiple n-wide vector instructions per cycle

Intel's SSE2/SSE3/SSE4/AVX...
- Intel SSE2 (Streaming SIMD Extensions 2)
  - 16 x 128-bit floating point registers (xmm0–xmm15)
  - Each can be treated as 2x64b FP or 4x32b FP ("packed FP")
  - Or 2x64b or 4x32b or 8x16b or 16x8b ints ("packed integer")
  - Or 1x64b or 1x32b FP (just normal scalar floating point)
  - Original SSE: only 8 registers, no packed integer support
- Other vector extensions
  - AMD 3DNow!: 64b (2x32b)
  - PowerPC AltiVec/VMX: 128b (2x64b or 4x32b)
  - ARM NEON: 128b (2x64b, 4x32b, 8x16b)
- Looking forward for x86
  - Intel's "Sandy Bridge" brought 256-bit vectors to x86
  - Intel's "Xeon Phi" multicore brings 512-bit vectors to x86
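As a concrete picture of the packing above, the same 16 bytes of an XMM register can be reinterpreted at any of these element widths; a minimal C sketch (the union name is ours, purely for illustration):

    #include <stdint.h>

    /* One 128-bit XMM register's worth of bits, viewed four ways */
    typedef union {
        double  f64[2];   /* packed double: 2 x 64-bit FP    */
        float   f32[4];   /* packed single: 4 x 32-bit FP    */
        int32_t i32[4];   /* packed integer: 4 x 32-bit ints */
        int8_t  i8[16];   /* packed integer: 16 x 8-bit ints */
    } xmm_bits;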

Other Vector Instructions
- These target specific domains, e.g., image processing, crypto
  - Vector reduction (sum all elements of a vector)
  - Geometry processing: 4x4 translation/rotation matrices
  - Saturating (non-overflowing) subword add/sub: image processing (see the sketch below)
  - Byte asymmetric operations: blending and composition in graphics
  - Byte shuffle/permute: crypto
  - Population (bit) count: crypto
  - Max/min/argmax/argmin: video codecs
  - Absolute differences: video codecs
  - Multiply-accumulate: digital signal processing
  - Special instructions for AES encryption
- More advanced (but in Intel's Xeon Phi)
  - Scatter/gather loads: indirect load (or store) through a vector of pointers
  - Vector masks: predication (conditional execution) of specific elements
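To make the saturating case concrete, SSE2 exposes saturating byte adds directly; a minimal sketch (function name ours), where a per-pixel sum like 200 + 100 clamps to 255 instead of wrapping around to 44:

    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* Brighten 16 pixels at once; each byte-wise sum clamps at 255. */
    __m128i brighten(__m128i pixels, __m128i delta) {
        return _mm_adds_epu8(pixels, delta);  /* 16 unsigned saturating adds */
    }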

Using Vectors in Your Code
- Write in assembly
  - Ugh
- Use "intrinsic" functions and data types
  - For example: _mm_mul_ps() and the __m128 datatype (sketch below)
- Use vector data types
  - typedef double v2df __attribute__ ((vector_size (16)));
- Use a library someone else wrote
  - Let them do the hard work
  - Matrix and linear algebra packages
- Let the compiler do it (automatic vectorization, with feedback)
  - GCC's -ftree-vectorize option, -ftree-vectorizer-verbose=n
  - Limited impact for C/C++ code (an old, hard problem)
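A minimal sketch of the intrinsics route, applied to the running SAXPY example (function name ours; assumes length is a multiple of 4, with no tail loop):

    #include <xmmintrin.h>  /* SSE intrinsics: __m128, _mm_mul_ps, ... */

    void saxpy_sse(const float* x, const float* y, float* z,
                   float a, int length) {
        __m128 va = _mm_set1_ps(a);           /* broadcast a into all 4 lanes */
        for (int i = 0; i < length; i += 4) {
            __m128 vx = _mm_loadu_ps(x + i);  /* load 4 floats of x */
            __m128 vy = _mm_loadu_ps(y + i);  /* load 4 floats of y */
            _mm_storeu_ps(z + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
        }
    }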

Recap: Vectors for Exploiting DLP
- Vectors are an efficient way of capturing parallelism
  - Data-level parallelism
  - Avoid the N² problems of superscalar
  - Avoid the difficult fetch problem of superscalar
  - Area efficient, power efficient
- The catch?
  - Need code that is "vectorizable"
  - Need to modify the program (unlike dynamically scheduled superscalar)
  - Requires some help from the programmer
- Looking forward: Intel "Xeon Phi" (aka Larrabee) vectors
  - More flexible (vector "masks", scatter, gather) and wider
  - Should be easier to exploit, more bang for the buck

Graphics Processing Units (GPUs)
- Killer app for parallelism: graphics (3D games)
- (Pictured: NVIDIA Tesla S870)

GPUs and SIMD/Vector Data Parallelism
- How do GPUs have such high peak FLOPS & FLOPS/Joule?
  - Exploit massive data parallelism: focus on total throughput
  - Remove hardware structures that accelerate single threads
  - Specialized for graphics: e.g., special data types & dedicated texture units
- "SIMT" execution model
  - Single instruction, multiple threads
  - Similar to both "vectors" and "SIMD"
  - A key difference: better support for conditional control flow
- Program it with CUDA or OpenCL
  - Extensions to C
  - Perform a "shader task" (a snippet of scalar computation) over many elements
  - Internally, the GPU uses scatter/gather and vector mask operations
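As a flavor of the CUDA style, here is a minimal SAXPY kernel sketch (names ours): each of many scalar threads computes one element, and the hardware transparently groups threads into SIMD-width "warps":

    // Each thread handles one element; surplus threads mask themselves off.
    __global__ void saxpy(int n, float a, const float* x,
                          const float* y, float* z) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
        if (i < n)
            z[i] = a * x[i] + y[i];
    }

    // Host-side launch: enough 256-thread blocks to cover n elements.
    // saxpy<<<(n + 255) / 256, 256>>>(n, a, x, y, z);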

Following slides c/o Kayvon Fatahalian's "Beyond Programmable Shading" course.

Diffuse reflection example (figure: diffuse vs. specular reflection).

SIMD vs. SIMT
- SIMD: single insn, multiple data
  - Write one insn that operates on a vector of data
  - Handle control flow via explicit masking operations
- SIMT: single insn, multiple threads
  - Write one insn that operates on scalar data
  - Each of many threads runs this insn
  - Compiler + hardware aggregate threads into groups that execute on SIMD hardware
  - Compiler + hardware handle masking for control flow

Data Parallelism Summary
- Data-level parallelism (DLP)
  - "Medium-grained" parallelism between ILP and TLP
  - Still one flow of execution (unlike TLP)
  - Compiler/programmer must explicitly express it (unlike ILP)
- Hardware support: new "wide" instructions (SIMD)
  - Wide registers, perform multiple operations in parallel
- Trends
  - Wider: 64-bit (MMX, 1996), 128-bit (SSE2, 2000), 256-bit (AVX, 2011), 512-bit (Xeon Phi, 2012?)
  - More advanced and specialized instructions
- GPUs
  - Embrace data parallelism via the "SIMT" execution model
  - Becoming more programmable all the time
- Today's chips exploit parallelism at all levels: ILP, DLP, TLP

Tomorrow's "CPU" Vectors

Beyond Today's Vectors
- Today's vectors are limited
  - Wide compute
  - Wide load/store of consecutive addresses
  - Allows for "SOA" (structure-of-arrays) style parallelism
- Looking forward (and backward)...
  - Vector masks
    - Conditional execution on a per-element basis
    - Allows vectorization of conditionals
  - Scatter/gather
    - Gather: a[i] = b[y[i]]; scatter: b[y[i]] = a[i]
    - Helps with sparse matrices and "AOS" (array-of-structures) parallelism
- Together, these enable a different style of vectorization
  - Translate arbitrary (parallel) loop bodies into vectorized code (later)

Vector Masks (Predication)
- Recall: cmov predication to avoid branches
- Vector masks: 1 bit per vector element
  - Implicit predicate in all vector operations:

        for (I=0; I<N; I++)
            if (mask[I]) { vop... }

  - Usually stored in a "scalar" register (up to 64 bits)
  - Used to vectorize loops with conditionals in them
  - cmp_eq.v, cmp_lt.v, etc.: set vector predicates

        for (I=0; I<32; I++)
            if (X[I] != 0.0)
                Z[I] = A/X[I];

        ldf.v    [X+r1] -> v1
        cmp_ne.v v1,f0 -> r2        // 0.0 is in f0
        divf.sv  {r2} v1,f1 -> v2   // A is in f1
        stf.v    {r2} v2 -> [Z+r1]
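Classic SSE has no architectural mask registers, so compilers emulate this idiom with compare + blend; a minimal sketch of the same loop (function name ours; assumes length is a multiple of 4, and the unmasked divide may produce harmless infinities that the blend then discards):

    #include <xmmintrin.h>  /* SSE intrinsics */

    void masked_div(const float* x, float* z, float a, int length) {
        __m128 va   = _mm_set1_ps(a);
        __m128 zero = _mm_setzero_ps();
        for (int i = 0; i < length; i += 4) {
            __m128 vx   = _mm_loadu_ps(x + i);
            __m128 mask = _mm_cmpneq_ps(vx, zero);  /* all-1s lanes where x != 0 */
            __m128 q    = _mm_div_ps(va, vx);       /* a / x in every lane */
            __m128 vz   = _mm_loadu_ps(z + i);
            /* take q where the mask is set, keep the old z elsewhere */
            vz = _mm_or_ps(_mm_and_ps(mask, q), _mm_andnot_ps(mask, vz));
            _mm_storeu_ps(z + i, vz);
        }
    }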

Scatter Stores & Gather Loads
- How to vectorize:

        for (int i = 1; i < N; i++) {
            int bucket = val[i] / scalefactor;
            found[bucket] = 1;
        }

  - Easy to vectorize the divide, but what about the load/store?
- Solution: hardware support for vector "scatter stores"
  - stf.v v2->[r1+v1]: each address is calculated as r1+v1[i]

        stf v2[0]->[r1+v1[0]]
        stf v2[1]->[r1+v1[1]]
        stf v2[2]->[r1+v1[2]]
        stf v2[3]->[r1+v1[3]]

  - Vector "gather loads" defined analogously: ldf.v [r1+v1]->v2
  - Scatter/gathers are slower than regular vector load/store ops
    - Still a throughput advantage over the non-vector version
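Real x86 hardware gained gathers with AVX2 (vgatherdps); a minimal sketch of a[i] = b[y[i]] done 8 elements per iteration (function name ours; assumes n is a multiple of 8):

    #include <immintrin.h>  /* AVX2 intrinsics */

    void gather8(const float* b, const int* y, float* a, int n) {
        for (int i = 0; i < n; i += 8) {
            __m256i idx = _mm256_loadu_si256((const __m256i*)(y + i));
            /* scale = 4: indices are in units of sizeof(float) */
            __m256 v = _mm256_i32gather_ps(b, idx, 4);
            _mm256_storeu_ps(a + i, v);
        }
    }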

SAXPY Example: Best Case

    void saxpy(float* x, float* y, float* z,
               float a, int length) {
        for (int i = 0; i < length; i++) {
            z[i] = a*x[i] + y[i];
        }
    }

Scalar:

    .L3: movss  (%rdi,%rax), %xmm1
         mulss  %xmm0, %xmm1
         addss  (%rsi,%rax), %xmm1
         movss  %xmm1, (%rdx,%rax)
         addq   $4, %rax
         cmpq   %rcx, %rax
         jne    .L3

Auto-vectorized:

    .L6: movaps (%rdi,%rax), %xmm1
         mulps  %xmm2, %xmm1
         addps  (%rsi,%rax), %xmm1
         movaps %xmm1, (%rdx,%rax)
         addq   $16, %rax
         incl   %r8d
         cmpl   %r8d, %r9d
         ja     .L6

- Plus a scalar loop to handle the last few iterations (if length % 4 != 0)
- "mulps": multiply packed single-precision

SAXPY Example: Actual Code

    void saxpy(float* x, float* y, float* z,
               float a, int length) {
        for (int i = 0; i < length; i++) {
            z[i] = a*x[i] + y[i];
        }
    }

Scalar:

    .L3: movss  (%rdi,%rax), %xmm1
         mulss  %xmm0, %xmm1
         addss  (%rsi,%rax), %xmm1
         movss  %xmm1, (%rdx,%rax)
         addq   $4, %rax
         cmpq   %rcx, %rax
         jne    .L3

Auto-vectorized:

    .L8: movaps %xmm3, %xmm1
         movaps %xmm3, %xmm2
         movlps (%rdi,%rax), %xmm1
         movlps (%rsi,%rax), %xmm2
         movhps 8(%rdi,%rax), %xmm1
         movhps 8(%rsi,%rax), %xmm2
         mulps  %xmm4, %xmm1
         incl   %r8d
         addps  %xmm2, %xmm1
         movaps %xmm1, (%rdx,%rax)
         addq   $16, %rax
         cmpl   %r9d, %r8d
         jb     .L8

- Plus an explicit alignment test
- Plus an explicit aliasing test

Bridging "Best Case" and "Actual"
- Align the arrays:

    typedef float afloat __attribute__ ((__aligned__(16)));
    void saxpy(afloat* x, afloat* y, afloat* z,
               float a, int length) {
        for (int i = 0; i < length; i++) {
            z[i] = a*x[i] + y[i];
        }
    }

- Avoid the aliasing check:

    typedef float afloat __attribute__ ((__aligned__(16)));
    void saxpy(afloat* __restrict__ x, afloat* __restrict__ y,
               afloat* __restrict__ z, float a, int length)

- Even with both, the code still has the "last few iterations" tail loop

Reduction Example

    float diff = 0.0;
    for (int i = 0; i < N; i++) {
        diff += (a[i] - b[i]);
    }
    return diff;

Scalar:

    .L4: movss  (%rdi,%rax), %xmm1
         subss  (%rsi,%rax), %xmm1
         addq   $4, %rax
         addss  %xmm1, %xmm0
         cmpq   %rdx, %rax
         jne    .L4

Auto-vectorized:

    .L7: movaps (%rdi,%rax), %xmm0
         incl   %ecx
         subps  (%rsi,%rax), %xmm0
         addq   $16, %rax
         addps  %xmm0, %xmm1
         cmpl   %ecx, %r8d
         ja     .L7
         haddps %xmm1, %xmm1
         movaps %xmm1, %xmm0
         je     .L3

- "haddps": packed single-FP horizontal add
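The same pattern written by hand with intrinsics; a minimal sketch (function name ours; assumes n is a multiple of 4) that keeps 4 partial sums and collapses them with two horizontal adds at the end:

    #include <pmmintrin.h>  /* SSE3: _mm_hadd_ps */

    float diff_sum(const float* a, const float* b, int n) {
        __m128 acc = _mm_setzero_ps();
        for (int i = 0; i < n; i += 4) {
            __m128 d = _mm_sub_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i));
            acc = _mm_add_ps(acc, d);   /* 4 independent partial sums */
        }
        acc = _mm_hadd_ps(acc, acc);    /* [s0+s1, s2+s3, s0+s1, s2+s3] */
        acc = _mm_hadd_ps(acc, acc);    /* every lane now holds the total */
        return _mm_cvtss_f32(acc);
    }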

SSE2 on Pentium 4

Using Vectors in Your Code

Today's GPUs' "SIMT" Model