L3: Memory Hierarchy Optimization I, Locality and Data Placement (CS6235)

Presentation transcript:

L3: Memory Hierarchy Optimization I, Locality and Data Placement (CS6235)

Administrative
Assignment due Friday, Jan. 18, 5 PM
– Use handin program on CADE machines: "handin CS6235 lab1"
Mailing list
– Please use for all questions suitable for the whole class
– Feel free to answer your classmates' questions!

Overview of Lecture
Where data can be stored, and how to get it there
Some guidelines for where to store data
– Who needs to access it?
– Read only vs. read/write?
– Footprint of the data
High-level description of how to write code to optimize for the memory hierarchy
– More details Wednesday and next week
Reading:
– Chapter 5, Kirk and Hwu book
– Or, pter4-CudaMemoryModel.pdf

Targets of Memory Hierarchy Optimizations
Reduce memory latency
– The latency of a memory access is the time (usually in cycles) between a memory request and its completion
Maximize memory bandwidth
– Bandwidth is the amount of useful data that can be retrieved over a time interval
Manage overhead
– The cost of performing an optimization (e.g., copying) should be less than the anticipated gain
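The bandwidth target can be made concrete by measuring it. Below is a minimal sketch (not from the lecture) that times a host-to-device copy with CUDA events and reports the effective bandwidth; the buffer size is an arbitrary choice.

// Sketch (illustrative only): measure effective bandwidth of a host-to-device copy.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t N = 1 << 26;                       // 64M floats = 256 MB (assumption)
    float *h = (float*)malloc(N * sizeof(float));
    float *d;
    cudaMalloc(&d, N * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d, h, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);          // elapsed time in milliseconds
    double gb = (double)(N * sizeof(float)) / 1e9;
    printf("Effective bandwidth: %.2f GB/s\n", gb / (ms / 1e3));

    cudaFree(d);
    free(h);
    return 0;
}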

Optimizing the Memory Hierarchy on GPUs: Overview
Today's lecture:
– Device memory access times are non-uniform, so data placement significantly affects performance.
– But controlling data placement may require additional copying, so consider the overhead.
Later: optimizations to increase memory bandwidth. Idea: maximize the utility of each memory access.
– Coalesce global memory accesses
– Avoid memory bank conflicts to increase memory access parallelism
– Align data structures to address boundaries

Hardware Implementation: Memory Architecture
The local, global, constant, and texture spaces are regions of device memory (DRAM).
Each multiprocessor has:
– A set of 32-bit registers per processor
– On-chip shared memory, where the shared memory space resides
– A read-only constant cache, to speed up access to the constant memory space
– A read-only texture cache, to speed up access to the texture memory space
– Data cache (Fermi only)
Global, constant, and texture memories reside in device memory.
(Slide figure: a device with multiprocessors 1..N, each containing processors with registers, shared memory, an instruction unit, constant cache, texture cache, and a data cache (Fermi only), all connected to device memory.)
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007, ECE 498AL, University of Illinois, Urbana-Champaign
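These per-multiprocessor resources can be inspected at runtime. The following is a small host-side query sketch (not from the slides), using standard cudaDeviceProp fields; device 0 is assumed.

// Sketch: query the on-chip resources described above.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // device 0 assumed
    printf("Multiprocessors (SMs):   %d\n", prop.multiProcessorCount);
    printf("Registers per block:     %d\n", prop.regsPerBlock);
    printf("Shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
    printf("Total constant memory:   %zu bytes\n", prop.totalConstMem);
    printf("Global (device) memory:  %zu bytes\n", prop.totalGlobalMem);
    return 0;
}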

Memory Hierarchy Terminology

Memory   | Location | Cached   | Access     | Who                                 | Latency
Register | On-chip  | Resident | Read/write | One thread                          | O(1 cycle)
Shared   | On-chip  | Resident | Read/write | Threads in block                    | O(1 cycle) without bank conflict
Global   | Off-chip | No/Yes   | Read/write | All threads + host                  | O(1)-O(100) cycles, depending on whether cached
Local    | Off-chip | No       | Read/write | One thread                          | O(1)-O(100) cycles, depending on whether cached
Constant | Off-chip | Yes      | Read only  | All threads + host (host may write) | O(1)-O(100) cycles, depending on whether cached
Texture  | Off-chip | Yes      | Read only  | All threads + host (host may write) | O(1)-O(100) cycles, depending on whether cached
Surface  | Off-chip | Yes      | Read/write | All threads + host                  | O(1)-O(100) cycles, depending on whether cached

Reuse and Locality
Consider how data is accessed
– Data reuse: the same data is used multiple times; intrinsic in the computation
– Data locality: data is reused and is present in "fast memory" (same data or same data transfer)
If a computation has reuse, what can we do to get locality?
– Appropriate data placement and layout
– Code reordering transformations

Data Placement: Conceptual
Copies from host to device go to some part of global memory (possibly constant or texture memory)
How to use SP shared memory
– Must be constructed or copied from global memory by the kernel program
How to use constant or texture cache
– Read-only "reused" data can be placed in constant & texture memory by the host
Also, how to use registers
– Most locally-allocated data is placed directly in registers
– Even array variables can use registers if the compiler understands the access patterns
– Can allocate "superwords" to registers, e.g., float4
– Excessive use of registers will "spill" data to local memory
Local memory
– Deals with the capacity limitations of registers
– Eliminates worries about race conditions
– … but SLOW

Data Placement: Syntax
Through type qualifiers
– __constant__, __shared__, __local__, __device__
Through cudaMemcpy calls
– The flavor of the call and a symbolic constant designate where to copy
Implicit default behavior
– Device memory without a qualifier is global memory
– The host by default copies to global memory
– Thread-local variables go into registers unless capacity is exceeded, then into local memory
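A compact illustration of these qualifiers and copy flavors follows (a sketch, not taken from the slides; the names, sizes, and kernel are invented for the example).

// Sketch: placing data with type qualifiers and cudaMemcpy flavors.
#include <cuda_runtime.h>

#define N 1024

__constant__ float c_coeff[16];   // constant memory, written by the host
__device__   float d_table[N];    // global (device) memory; declared only to illustrate the qualifier

__global__ void scale(float *d_out, const float *d_in) {
    __shared__ float s_buf[256];                    // per-block shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float x = d_in[i];                              // thread-local scalar -> register
    s_buf[threadIdx.x] = x;
    __syncthreads();
    d_out[i] = s_buf[threadIdx.x] * c_coeff[0];
}

int main() {
    float h_in[N] = {0};
    float h_coeff[16] = {2.0f};
    float *d_in, *d_out;
    cudaMalloc(&d_in,  N * sizeof(float));
    cudaMalloc(&d_out, N * sizeof(float));

    // Host-to-device copies: plain cudaMemcpy targets global memory;
    // cudaMemcpyToSymbol targets a __constant__ (or __device__) symbol.
    cudaMemcpy(d_in, h_in, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpyToSymbol(c_coeff, h_coeff, sizeof(h_coeff));

    scale<<<N / 256, 256>>>(d_out, d_in);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}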

Rest of Today's Lecture
Mechanics of how to place data in shared memory and constant memory
Tiling transformation to reuse data within
– Shared memory
– Data cache (Fermi only)
– Constant memory

Now Let's Look at Shared Memory
Common programming pattern (Section 5.3 of the CUDA 4 manual):
– Load data into shared memory
– Synchronize (if necessary)
– Operate on data in shared memory
– Synchronize (if necessary)
– Write intermediate results to global memory
– Repeat until done
(Slide figure: data staged from global memory into shared memory and back.)
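As a concrete (hypothetical) illustration of this pattern, the skeleton below stages one tile of input per iteration into shared memory, operates on it, and writes a result back; the names and sizes are invented for the sketch, and it assumes blockDim.x == TILE and n a multiple of TILE.

// Sketch of the load / sync / operate / sync / write pattern (not from the slides).
#define TILE 256

__global__ void process(float *d_out, const float *d_in, int n) {
    __shared__ float tile[TILE];
    int tid = threadIdx.x;

    // Repeat until done: each iteration handles one tile of the input.
    for (int base = blockIdx.x * TILE; base < n; base += gridDim.x * TILE) {
        tile[tid] = d_in[base + tid];                 // 1. load data into shared memory
        __syncthreads();                              // 2. synchronize

        float v = tile[tid] + tile[(tid + 1) % TILE]; // 3. operate on data in shared memory
        __syncthreads();                              // 4. synchronize before the tile is reused

        d_out[base + tid] = v;                        // 5. write results to global memory
    }
}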

Mechanics of Using Shared Memory
__shared__ type qualifier required
Must be allocated from a global/device function, or as "extern"
Examples:

extern __shared__ float d_s_array[];  /* a form of dynamic allocation */
                                      /* MEMSIZE is the size of the per-block shared memory */
__host__ void outerCompute() {
  compute<<<gs, bs, MEMSIZE>>>();     /* gs, bs: grid and block dimensions */
}
__global__ void compute() {
  d_s_array[i] = …;
}

__global__ void compute2() {
  __shared__ float d_s_array[M];      // create or copy from global memory
  d_s_array[j] = …;
  __syncthreads();                    // synchronize threads before use
  … = d_s_array[x];                   // now can use any element
                                      // more synchronization needed if updated
  d_g_array[j] = d_s_array[j];        // may write result back to global memory
}

Reuse and Locality (revisited)
Consider how data is accessed
– Data reuse: the same data is used multiple times; intrinsic in the computation
– Data locality: data is reused and is present in "fast memory" (same data or same data transfer)
If a computation has reuse, what can we do to get locality?
– Appropriate data placement and layout
– Code reordering transformations

Temporal Reuse in Sequential Code
Same data used in distinct iterations I and I'

for (i=1; i<N-1; i++)
  for (j=1; j<N-1; j++)
    A[j] = A[j] + A[j+1] + A[j-1];

A[j] has self-temporal reuse in loop i
Temporal reuse between A[j] and A[j-1] across iterations I=[i,j] and I'=[i,j+1]

Spatial Reuse
Same data transfer (usually a cache line) used in distinct iterations I and I'

for (i=1; i<N; i++)
  for (j=1; j<N; j++)
    A[j] = A[j] + A[j+1] + A[j-1];

∙ A[j] has self-spatial reuse in loop j
∙ Spatial reuse between A[j-1], A[j], and A[j+1] in each statement
Multi-dimensional array note: C arrays are stored in row-major order
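To make the row-major point concrete, here is a small sketch (not from the slides) contrasting a stride-1 traversal, which reuses each cache line, with a stride-N traversal, which touches a new line on nearly every access; N and the element type are arbitrary choices.

/* Sketch: spatial locality under row-major storage (illustrative only). */
#define N 1024
float A[N][N];

void row_major_friendly(void) {        /* inner loop walks along a row:       */
    for (int i = 0; i < N; i++)        /* consecutive j values share a cache  */
        for (int j = 0; j < N; j++)    /* line, so most accesses are cheap    */
            A[i][j] += 1.0f;
}

void row_major_hostile(void) {         /* inner loop walks down a column:     */
    for (int j = 0; j < N; j++)        /* consecutive i values are N floats   */
        for (int i = 0; i < N; i++)    /* apart, so nearly every access misses*/
            A[i][j] += 1.0f;
}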

Loop Permutation: A Reordering Transformation

for (j=0; j<6; j++)
  for (i=0; i<3; i++)
    A[i][j+1] = A[i][j] + B[j];

for (i=0; i<3; i++)
  for (j=0; j<6; j++)
    A[i][j+1] = A[i][j] + B[j];

(Slide figures show the i-j iteration-space traversal of each nest; the second gives the new traversal order.)
Permute the order of the loops to modify the traversal order.
Which one is better for row-major storage?

Safety of Permutation
OK to permute?

for (i=0; i<3; i++)
  for (j=0; j<6; j++)
    A[i][j+1] = A[i][j] + B[j];

for (i=0; i<3; i++)
  for (j=0; j<6; j++)
    A[i+1][j-1] = A[i][j] + B[j];

Intuition: cannot permute two loops i and j in a loop nest if doing so changes the relative order of a read and a write, or of two writes, to the same memory location.
(In the second nest, iteration (i,j) writes A[i+1][j-1], which iteration (i+1,j-1) later reads as A[i][j]; permuting the loops would execute that read before the write, so permutation is unsafe there, while the first nest carries its dependence only within the j loop and can be permuted.)

Tiling (Blocking): Loop Reordering Transformation
Tiling reorders loop iterations to bring iterations that reuse data closer in time.
(Slide figures show the I-J iteration space before and after tiling.)

Tiling Example

Original:
for (j=0; j<M; j++)
  for (i=0; i<N; i++)
    D[i] = D[i] + B[j][i];

Strip mine:
for (j=0; j<M; j++)
  for (ii=0; ii<N; ii+=s)
    for (i=ii; i<min(ii+s,N); i++)
      D[i] = D[i] + B[j][i];

Permute:
for (ii=0; ii<N; ii+=s)
  for (j=0; j<M; j++)
    for (i=ii; i<min(ii+s,N); i++)
      D[i] = D[i] + B[j][i];

Legality of Tiling
Tiling is safe only if it does not change the order in which memory locations are read/written
– We'll talk about correctness after memory hierarchies
Tiling can conceptually be used to perform the decomposition into threads and blocks (computation partitioning)
Tiling is also used to reduce the footprint of data to fit in limited-capacity storage (more later)

CUDA Version of Example (Tiling for Computation Partitioning)

Sequential tiled code (ii = block dimension, i = thread dimension, j = loop within thread):
for (ii=0; ii<N; ii+=s)
  for (i=ii; i<min(ii+s,N); i++)
    for (j=0; j<N; j++)
      D[i] = D[i] + B[j][i];

Host code (N/s blocks of s threads):
…
ComputeI<<<N/s, s>>>(d_D, d_B, N);
…

Kernel:
__global__ void ComputeI(float *d_D, float *d_B, int N) {
  int ii = blockIdx.x;            // each block handles one strip of size s
  int i = ii*s + threadIdx.x;     // one thread per element of the strip
  for (int j=0; j<N; j++)
    d_D[i] = d_D[i] + d_B[j*N+i];
}

Tiling for Limited Capacity Storage
Tiling can be used hierarchically to compute partial results on a block of data wherever there are capacity limitations
– Between grids, if the total data exceeds global memory capacity
– Across thread blocks, if shared data exceeds shared memory capacity (also to partition computation across blocks and threads)
– Within threads, if data in the constant cache exceeds cache capacity, or data in registers exceeds register capacity, or (as in the example) data in shared memory for a block still exceeds shared memory capacity

Memory Hierarchy Example: Matrix-Vector Multiply

for (i=0; i<n; i++) {
  for (j=0; j<n; j++) {
    a[i] += c[j][i] * b[j];
  }
}

Remember to:
– Consider correctness of the parallelization strategy (next week)
– Exploit locality in shared memory and registers

Let's Take a Closer Look
– Implicitly use tiling to decompose the parallel computation into independent work
– Additional tiling is used to map portions of "b" to shared memory, since it is shared across threads
– "a" has reuse within a thread, so use a register

Resulting CUDA Code (Automatically Generated by our Research Compiler)

__global__ void mv_GPU(float* a, float* b, float** c) {
  int bx = blockIdx.x;
  int tx = threadIdx.x;
  __shared__ float bcpy[32];
  double acpy = a[tx + 32 * bx];
  for (int k = 0; k < 32; k++) {
    bcpy[tx] = b[32 * k + tx];       // stage a 32-element tile of b in shared memory
    __syncthreads();
    // this loop is actually fully unrolled
    for (int j = 32 * k; j < 32 * k + 32; j++) {
      acpy = acpy + c[j][32 * bx + tx] * bcpy[j - 32 * k];
    }
    __syncthreads();
  }
  a[tx + 32 * bx] = acpy;            // "a" accumulated in a register, written once
}
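For completeness, a hypothetical host-side driver for this kernel might look like the sketch below (not part of the slides). It assumes n = 1024, so the grid is 32 blocks of 32 threads to match the hard-coded tile size, and it allocates c as an array of device row pointers because the kernel indexes it as c[j][...]; the mv_GPU kernel above is assumed to be in the same file.

// Hypothetical driver for mv_GPU (assumptions: n = 1024, 32 blocks x 32 threads).
#include <cuda_runtime.h>

int main() {
    const int n = 1024;
    float *h_a = new float[n];
    float *h_b = new float[n];
    // ... initialize h_a (e.g., to 0), h_b, and an n x n host matrix ...

    float *d_a, *d_b, *d_c_data, **d_c;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_b, n * sizeof(float));
    cudaMalloc(&d_c_data, n * n * sizeof(float));
    cudaMalloc(&d_c, n * sizeof(float*));

    // Build the array of device row pointers expected by "float** c".
    float **h_rows = new float*[n];
    for (int j = 0; j < n; j++) h_rows[j] = d_c_data + j * n;
    cudaMemcpy(d_c, h_rows, n * sizeof(float*), cudaMemcpyHostToDevice);

    cudaMemcpy(d_a, h_a, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, n * sizeof(float), cudaMemcpyHostToDevice);
    // (copy the n*n matrix into d_c_data the same way)

    mv_GPU<<<n / 32, 32>>>(d_a, d_b, d_c);              // 32 blocks of 32 threads
    cudaMemcpy(h_a, d_a, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c); cudaFree(d_c_data);
    delete[] h_a; delete[] h_b; delete[] h_rows;
    return 0;
}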

Summary of Lecture
– How to place data in constant memory and shared memory
– Introduction to the permutation and tiling transformations
– Matrix-vector multiply example