L6: Memory Hierarchy Optimization IV, Bandwidth Optimization

Presentation transcript:

L6: Memory Hierarchy Optimization IV, Bandwidth Optimization CS6235

Administrative
Next assignment available. Goals of assignment: simple memory hierarchy management; block-thread decomposition tradeoff.
Due Tuesday, Feb. 9, 5PM. Use the handin program on the CADE machines: "handin CS6235 lab2 <probfile>"
CS6235 L5: Memory Hierarchy, IV

Assignment 2: Memory Hierarchy Optimization
Due Tuesday, February 7 at 5PM
Sobel edge detection: find the boundaries of the image where there is a significant difference compared to neighboring "pixels," and replace values to mark the edges:

for (i = 1; i < ImageNRows - 1; i++)
  for (j = 1; j < ImageNCols - 1; j++) {
    sum1 = u[i-1][j+1] - u[i-1][j-1]
         + 2 * u[i][j+1] - 2 * u[i][j-1]
         + u[i+1][j+1] - u[i+1][j-1];
    sum2 = u[i-1][j-1] + 2 * u[i-1][j] + u[i-1][j+1]
         - u[i+1][j-1] - 2 * u[i+1][j] - u[i+1][j+1];
    magnitude = sum1*sum1 + sum2*sum2;
    if (magnitude > THRESHOLD)
      e[i][j] = 255;
    else
      e[i][j] = 0;
  }

[Figure: stencil footprints on input U and output E, marking the neighbors used by sum1 only, by sum2 only, and by both]
CS6235
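For GPU version 1 (global memory only), a minimal kernel sketch is given below. It is not part of the assignment handout: it assumes the image is stored as a flat, row-major array on the device, that THRESHOLD is defined as in the CPU code, and the kernel and parameter names are illustrative.

__global__ void sobel_global(const unsigned char *u, unsigned char *e,
                             int nrows, int ncols)
{
    // One thread per interior pixel; u and e are row-major, ncols elements per row.
    int i = blockIdx.y * blockDim.y + threadIdx.y;   // row
    int j = blockIdx.x * blockDim.x + threadIdx.x;   // column
    if (i < 1 || i >= nrows - 1 || j < 1 || j >= ncols - 1)
        return;                                      // skip the border, as in the CPU loop

    int sum1 = u[(i-1)*ncols + (j+1)] - u[(i-1)*ncols + (j-1)]
             + 2 * u[i*ncols + (j+1)] - 2 * u[i*ncols + (j-1)]
             + u[(i+1)*ncols + (j+1)] - u[(i+1)*ncols + (j-1)];
    int sum2 = u[(i-1)*ncols + (j-1)] + 2 * u[(i-1)*ncols + j] + u[(i-1)*ncols + (j+1)]
             - u[(i+1)*ncols + (j-1)] - 2 * u[(i+1)*ncols + j] - u[(i+1)*ncols + (j+1)];
    int magnitude = sum1 * sum1 + sum2 * sum2;

    e[i*ncols + j] = (magnitude > THRESHOLD) ? 255 : 0;   // THRESHOLD assumed defined elsewhere
}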

Example
[Figure: sample input image and the corresponding edge-detected output]

General Approach
0. Provided: (a) input file, (b) sample output file, (c) CPU implementation
1. Structure: compare CPU version and GPU version output [compareInt]; time the performance of the two GPU versions (see 2 & 3 below) [EventRecord; timing sketch below]
2. GPU version 1 (partial credit if correct): implementation using global memory
3. GPU version 2 (highest points to the best-performing versions): use memory hierarchy optimizations from the previous, current, and Monday's lectures
4. Extra credit: try two different block/thread decompositions. What happens if you use more threads versus more blocks? What if you do more work per thread? Explain your choices in a README file.
Handin using the following on the CADE machines, where probfile includes all files: "handin cs6235 lab2 <probfile>"
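A minimal timing sketch using CUDA events (the [EventRecord] hint above), assuming the kernel, its launch configuration, and the device arrays (illustrative names) are set up elsewhere in the host code:

cudaEvent_t start, stop;
float elapsed_ms = 0.0f;

cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, 0);                        // mark the start on the default stream
sobel_global<<<grid, threads>>>(d_u, d_e, nrows, ncols);   // version under test
cudaEventRecord(stop, 0);                         // mark the end
cudaEventSynchronize(stop);                       // wait until the kernel has finished

cudaEventElapsedTime(&elapsed_ms, start, stop);   // elapsed time in milliseconds
printf("kernel time: %f ms\n", elapsed_ms);

cudaEventDestroy(start);
cudaEventDestroy(stop);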

Overview
Bandwidth optimization:
- Global memory coalescing
- Avoiding shared memory bank conflicts
- A few words on alignment
Reading:
- Chapter 4, Kirk and Hwu: http://courses.ece.illinois.edu/ece498/al/textbook/Chapter4-CudaMemoryModel.pdf
- Chapter 5, Kirk and Hwu: http://courses.ece.illinois.edu/ece498/al/textbook/Chapter5-CudaPerformance.pdf
- Sections 3.2.4 (texture memory) and 5.1.2 (bandwidth optimizations) of the NVIDIA CUDA Programming Guide
CS6235

Targets of Memory Hierarchy Optimizations
- Reduce memory latency: the latency of a memory access is the time (usually in cycles) between a memory request and its completion.
- Maximize memory bandwidth: bandwidth is the amount of useful data that can be retrieved over a time interval.
- Manage overhead: the cost of performing an optimization (e.g., copying) should be less than the anticipated gain.
CS6235

Optimizing the Memory Hierarchy on GPUs: Overview
Device memory access times are non-uniform, so data placement significantly affects performance. But controlling data placement may require additional copying, so consider the overhead.
Optimizations to increase memory bandwidth. Idea: maximize the utility of each memory access.
- Coalesce global memory accesses
- Avoid memory bank conflicts to increase memory access parallelism
- Align data structures to address boundaries
More minor effects:
- Avoid partition camping (conflicts among global memory partitions/banks)
- Use texture accesses to increase parallelism of memory accesses (if other global accesses are occurring simultaneously) [example later in the semester]
CS6235

Data Location Impacts Latency of Memory Access
- Registers: can load in the current instruction cycle
- Constant or texture memory: if in cache, a single address can be loaded per half-warp per cycle; otherwise, a global memory access
- Global memory
- Shared memory: single cycle if accesses can be done in parallel
CS6235

Introduction to Memory System
Recall the execution model for a multiprocessor:
- Scheduling unit: a "warp" of threads is issued at a time (32 threads in current chips)
- Execution unit: each cycle, 8 "cores" or SPs are executing (32 cores in a Fermi)
- Memory unit: the memory system scans a "half warp" of 16 threads for data to be loaded (a full warp for Fermi)
CS6235

Global Memory Accesses
- Each thread issues memory accesses to data types of varying sizes, perhaps as small as 1-byte entities.
- Given an address to load or store, memory returns/updates "segments" of either 32 bytes, 64 bytes or 128 bytes.
- Maximizing bandwidth: operate on an entire 128-byte segment for each memory transfer.
CS6235
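As a concrete illustration (a sketch, not from the slides): if consecutive threads of a half-warp read consecutive 4-byte floats, the 16 requests fall into a single 64-byte segment and can be serviced together, while a large stride forces a separate segment per thread.

__global__ void copy_coalesced(float *out, const float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];      // threads 0..15 of a half-warp touch one 64-byte segment
}

__global__ void copy_strided(float *out, const float *in, int n, int stride)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n)
        out[i] = in[i];      // with a stride of 16 or more, each thread touches its own segment
}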

Understanding Global Memory Accesses
Memory protocol for compute capability 1.2 and 1.3* (CUDA Manual 5.1.2.1 and Appendix A.1):
1. Start with the memory request of the smallest-numbered thread. Find the memory segment that contains the address (a 32-, 64- or 128-byte segment, depending on data type).
2. Find other active threads requesting addresses within that segment and coalesce.
3. Reduce the transaction size if possible.
4. Access memory and mark the serviced threads as "inactive."
5. Repeat until all threads in the half-warp are serviced.
*Includes the Tesla and GTX platforms as well as the new Linux machines!
CS6235

Protocol for Compute Capability 1.0 and 1.1
The protocol for most systems (including the lab6 machines) is even more restrictive. For compute capability 1.0 and 1.1:
- Threads must access the words in a segment in sequence: the kth thread must access the kth word.
- Alignment to the beginning of a segment becomes a very important optimization!
CS6235

Memory Layout of a Matrix in C
Consecutive threads will access different rows in memory. Each thread will require a different memory operation.
Odd: but this is the RIGHT layout for a conventional multi-core!
[Figure: row-major matrix M0,0..M3,3 with threads T1-T4 each accessing down a column during time periods 1 and 2]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE 498AL, University of Illinois, Urbana-Champaign

Memory Layout of a Matrix in C
Each thread in a half-warp (assuming rows of 16 elements) will access consecutive memory locations. GREAT! All accesses are coalesced.
With just a 4x4 block, we may need 4 separate memory operations to load data for a half-warp.
[Figure: row-major matrix M0,0..M3,3 with threads T1-T4 accessing along a row in each time period]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE 498AL, University of Illinois, Urbana-Champaign
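In kernel terms, a sketch of the two access patterns above (illustrative names, assuming a square N x N row-major matrix): letting threadIdx.x vary along a row gives coalesced loads, while letting consecutive threads step down a column makes each access fall N floats apart in memory.

// Coalesced: consecutive threads read consecutive elements of a row.
__global__ void read_along_rows(float *out, const float *M, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N)
        out[row * N + col] = M[row * N + col];
}

// Uncoalesced: consecutive threads (consecutive col values) read addresses N floats apart.
__global__ void read_down_columns(float *out, const float *M, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N)
        out[col * N + row] = M[col * N + row];
}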

How to find out compute capability
See Appendix A.1 in the NVIDIA CUDA Programming Guide to look up your device. Also, recall "deviceQuery" in the SDK to learn about the features of the installed device.
The Linux lab, most CADE machines, and the Tesla cluster are compute capability 1.2 and 1.3. Fermi machines are 2.x.
CS6235
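Compute capability can also be queried at run time with the standard runtime API; a minimal sketch along the lines of the SDK's deviceQuery:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int dev_count = 0;
    cudaGetDeviceCount(&dev_count);
    for (int dev = 0; dev < dev_count; dev++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}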

Alignment
- Addresses accessed within a half-warp may need to be aligned to the beginning of a segment to enable coalescing.
- An aligned memory address is a multiple of the memory segment size.
- In compute capability 1.0 and 1.1 devices, the address accessed by the lowest-numbered thread must be aligned to the beginning of the segment for coalescing.
- In future systems, alignment can sometimes reduce the number of accesses.
CS6235

More on Alignment
Objects allocated statically or by cudaMalloc begin at aligned addresses, but you still need to think about index expressions.
You may want to align structures:

struct __align__(8) {
  float a;
  float b;
};

struct __align__(16) {
  float a;
  float b;
  float c;
};
CS6235
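A sketch of why this matters (names illustrative, not from the slides): a 12-byte struct padded and aligned to 16 bytes keeps every array element on a 16-byte boundary, so the compiler can read it with one wide load per thread rather than several smaller, possibly straddling ones.

struct __align__(16) Point {     // 12 bytes of data, padded to 16 and 16-byte aligned
    float x, y, z;
};

__global__ void sum_components(float *out, const Point *p, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        Point q = p[i];          // element starts at a 16-byte boundary
        out[i] = q.x + q.y + q.z;
    }
}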

What Can You Do to Improve Bandwidth to Global Memory?
- Think about spatial reuse and access patterns across threads.
- You may need a different computation and data partitioning.
- You may want to rearrange data in shared memory, even if there is no temporal reuse (transpose example).
- Similar issues arise, but are handled much better, in future hardware generations.
CS6235

Bandwidth to Shared Memory: Parallel Memory Accesses
- Consider each thread accessing a different location in shared memory.
- Bandwidth is maximized if each one is able to proceed in parallel.
- Hardware to support this: banked memory, where each bank can support an access on every memory cycle.
CS6235

How addresses map to banks on G80
- Each bank has a bandwidth of 32 bits per clock cycle.
- Successive 32-bit words are assigned to successive banks.
- G80 has 16 banks, so bank = (32-bit word address) % 16.
- 16 is also the size of a half-warp: there are no bank conflicts between different half-warps, only within a single half-warp.
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE 498AL, University of Illinois, Urbana-Champaign

Shared memory bank conflicts
Shared memory is as fast as registers if there are no bank conflicts.
The fast case:
- If all threads of a half-warp access different banks, there is no bank conflict.
- If all threads of a half-warp access the identical address, there is no bank conflict (broadcast).
The slow case:
- Bank conflict: multiple threads in the same half-warp access the same bank.
- The accesses must be serialized.
- Cost = the maximum number of simultaneous accesses to a single bank.
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE 498AL, University of Illinois, Urbana-Champaign

Bank Addressing Examples
- No bank conflicts: linear addressing, stride == 1 (each thread of the half-warp maps to its own bank).
- No bank conflicts: random 1:1 permutation (threads still map to distinct banks).
[Figure: threads 0-15 mapped onto banks 0-15 for both patterns]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE 498AL, University of Illinois, Urbana-Champaign

Bank Addressing Examples
- 2-way bank conflicts: linear addressing, stride == 2 (pairs of threads map to the same bank).
- 8-way bank conflicts: linear addressing, stride == 8 (eight threads map to each bank that is used).
[Figure: threads 0-15 mapped onto banks for the stride-2 and stride-8 access patterns]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE 498AL, University of Illinois, Urbana-Champaign
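A small sketch of how these strides appear in shared-memory code (illustrative kernel, assuming a block of 16 threads, i.e. one half-warp, and 16 banks of 32-bit words):

#define HALF_WARP 16

__global__ void bank_demo(float *out)
{
    __shared__ float s[2 * HALF_WARP];

    // Fill shared memory.
    s[threadIdx.x] = (float) threadIdx.x;
    s[threadIdx.x + HALF_WARP] = (float) threadIdx.x;
    __syncthreads();

    // Stride 1: threads 0..15 read words 0..15, which fall in 16 different banks.
    float no_conflict = s[threadIdx.x];

    // Stride 2: threads 0..15 read words 0,2,...,30; threads t and t+8 hit the
    // same bank, so this is a 2-way conflict and the accesses are serialized.
    float two_way = s[2 * threadIdx.x];

    out[blockIdx.x * blockDim.x + threadIdx.x] = no_conflict + two_way;
}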

Putting It Together: Global Memory Coalescing and Bank Conflicts
Let's look at matrix transpose. Simple goal: replace A[i][j] with A[j][i].
Is there any reuse of data? Do you think shared memory might be useful?

Matrix Transpose (from SDK)
odata and idata are in global memory.

__global__ void transpose(float *odata, float *idata, int width, int height)
{
  // read the element
  unsigned int xIndex = blockIdx.x * BLOCK_DIM + threadIdx.x;
  unsigned int yIndex = blockIdx.y * BLOCK_DIM + threadIdx.y;
  unsigned int index_in = yIndex * width + xIndex;
  float temp = idata[index_in];

  // write the transposed element to global memory
  xIndex = blockIdx.y * BLOCK_DIM + threadIdx.x;
  yIndex = blockIdx.x * BLOCK_DIM + threadIdx.y;
  unsigned int index_out = yIndex * height + xIndex;
  odata[index_out] = temp;
}
CS6235

Coalesced Matrix Transpose
odata and idata are in global memory. Rearrange in shared memory and write back efficiently to global memory.

__global__ void transpose(float *odata, float *idata, int width, int height)
{
  __shared__ float block[BLOCK_DIM][BLOCK_DIM];

  // read the matrix tile into shared memory
  unsigned int xIndex = blockIdx.x * BLOCK_DIM + threadIdx.x;
  unsigned int yIndex = blockIdx.y * BLOCK_DIM + threadIdx.y;
  unsigned int index_in = yIndex * width + xIndex;
  block[threadIdx.y][threadIdx.x] = idata[index_in];

  __syncthreads();

  // write the transposed matrix tile to global memory
  xIndex = blockIdx.y * BLOCK_DIM + threadIdx.x;
  yIndex = blockIdx.x * BLOCK_DIM + threadIdx.y;
  unsigned int index_out = yIndex * height + xIndex;
  odata[index_out] = block[threadIdx.x][threadIdx.y];
}
CS6235

Optimized Matrix Transpose (from SDK)
odata and idata are in global memory. Rearrange in shared memory and write back efficiently to global memory; the extra column of padding (BLOCK_DIM+1) keeps the column-wise shared-memory reads free of bank conflicts.

__global__ void transpose(float *odata, float *idata, int width, int height)
{
  __shared__ float block[BLOCK_DIM][BLOCK_DIM+1];

  // read the matrix tile into shared memory
  unsigned int xIndex = blockIdx.x * BLOCK_DIM + threadIdx.x;
  unsigned int yIndex = blockIdx.y * BLOCK_DIM + threadIdx.y;
  unsigned int index_in = yIndex * width + xIndex;
  block[threadIdx.y][threadIdx.x] = idata[index_in];

  __syncthreads();

  // write the transposed matrix tile to global memory
  xIndex = blockIdx.y * BLOCK_DIM + threadIdx.x;
  yIndex = blockIdx.x * BLOCK_DIM + threadIdx.y;
  unsigned int index_out = yIndex * height + xIndex;
  odata[index_out] = block[threadIdx.x][threadIdx.y];
}
CS6235
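A host-side launch sketch for these transpose kernels (not from the slides), assuming width and height are multiples of BLOCK_DIM and that d_idata and d_odata have already been allocated and filled with cudaMalloc/cudaMemcpy:

#define BLOCK_DIM 16

dim3 threads(BLOCK_DIM, BLOCK_DIM);                  // one thread per element of a tile
dim3 grid(width / BLOCK_DIM, height / BLOCK_DIM);    // one block per BLOCK_DIM x BLOCK_DIM tile
transpose<<<grid, threads>>>(d_odata, d_idata, width, height);
cudaDeviceSynchronize();                             // wait for the kernel before timing or copying back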

Further Optimization: Partition Camping
A further optimization addresses bank (partition) conflicts in global memory, but it has not proven that useful in codes with additional computation. The idea is to map blocks to different parts of the chip's memory system by reordering block indices diagonally (a kernel sketch using this remapping follows):

int bid = blockIdx.x + gridDim.x * blockIdx.y;
int by = bid % gridDim.y;
int bx = ((bid / gridDim.y) + by) % gridDim.x;
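A sketch of how the remapping would be used inside the shared-memory transpose (in the spirit of the SDK's diagonal-reordering transpose; bx and by simply replace blockIdx.x and blockIdx.y in the index arithmetic):

__global__ void transpose_diagonal(float *odata, float *idata, int width, int height)
{
  __shared__ float block[BLOCK_DIM][BLOCK_DIM + 1];

  // Remap block indices so that concurrently active blocks spread their
  // accesses across different global memory partitions.
  int bid = blockIdx.x + gridDim.x * blockIdx.y;
  int by  = bid % gridDim.y;
  int bx  = ((bid / gridDim.y) + by) % gridDim.x;

  unsigned int xIndex = bx * BLOCK_DIM + threadIdx.x;
  unsigned int yIndex = by * BLOCK_DIM + threadIdx.y;
  block[threadIdx.y][threadIdx.x] = idata[yIndex * width + xIndex];

  __syncthreads();

  xIndex = by * BLOCK_DIM + threadIdx.x;
  yIndex = bx * BLOCK_DIM + threadIdx.y;
  odata[yIndex * height + xIndex] = block[threadIdx.x][threadIdx.y];
}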

Performance Results for Matrix Transpose (GTX280)
- SDK-prev: all optimizations other than partition camping
- CHiLL: generated by our compiler
- SDK-new: includes partition camping
[Figure: performance comparison of the three versions]

Summary of Lecture
Completion of bandwidth optimizations:
- Global memory coalescing
- Alignment
- Shared memory bank conflicts
- "Partition camping"
- Matrix transpose example
CS6235

Next Time
A look at correctness:
- Synchronization mechanisms
CS6235