© David Kirk/NVIDIA and Wen-mei W. Hwu, Taiwan, June 30-July 2, 2008
Taiwan 2008 CUDA Course: Programming Massively Parallel Processors, the CUDA Experience
Lecture 3: CUDA Memories

CUDA Variable Type Qualifiers

Variable declaration                       Memory     Scope    Lifetime
__device__ __local__    int LocalVar;      local      thread   thread
__device__ __shared__   int SharedVar;     shared     block    block
__device__              int GlobalVar;     global     grid     application
__device__ __constant__ int ConstantVar;   constant   grid     application

__device__ is optional when used with __local__, __shared__, or __constant__.
Automatic variables without any qualifier reside in a register, except arrays, which reside in local memory.
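
For concreteness, a minimal sketch (not from the slides; the kernel and variable names are illustrative only) of how these qualifiers appear in practice:

__constant__ float coeffs[16];         // constant memory: read-only in kernels, cached, grid scope
__device__   float globalResult[256];  // global memory: visible to all threads, application lifetime

__global__ void exampleKernel(void)
{
    __shared__ float tile[256];        // shared memory: one copy per thread block
    int idx = threadIdx.x;             // automatic scalar variable: lives in a register
    tile[idx] = coeffs[idx % 16];      // each thread reads cached constant data
    __syncthreads();
    globalResult[idx] = tile[idx];     // write back to global memory
}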

Where to Declare Variables?

Can the host access the variable?
–Yes: global and constant variables, declared outside of any function
–No: registers (automatic variables), shared and local variables, declared inside the kernel

G80 Implementation of CUDA Memories

Each thread can:
–Read/write per-thread registers
–Read/write per-thread local memory
–Read/write per-block shared memory
–Read/write per-grid global memory
–Read only per-grid constant memory

[Figure: memory hierarchy of a grid: each thread has registers, each block has shared memory, and all blocks share global and constant memory; the host accesses global and constant memory]

A Common Programming Strategy

Global memory resides in device memory (DRAM), which is much slower to access than shared memory. A profitable way to perform computation on the device is therefore to tile the data to take advantage of fast shared memory:
–Partition the data into subsets that fit into shared memory
–Handle each data subset with one thread block by:
  loading the subset from global memory to shared memory, using multiple threads to exploit memory-level parallelism;
  performing the computation on the subset from shared memory, where each thread can efficiently make multiple passes over any data element;
  copying the results from shared memory back to global memory.

A Common Programming Strategy (Cont.)

Constant memory also resides in device memory (DRAM), with much slower access than shared memory
–But it is cached!
–Highly efficient access for read-only data

Carefully divide data according to access patterns:
–Read-only → constant memory (very fast if in cache)
–Read/write, shared within a block → shared memory (very fast)
–Read/write within each thread → registers (very fast)
–Read/write inputs/results → global memory (very slow)

For texture memory usage, see courses.ece.uiuc.edu/ece498/al.
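
As an illustration of the read-only case, here is a minimal sketch (not from the slides; filterCoeffs, applyFilter, and hostCoeffs are made-up names) of placing a small read-only table in constant memory:

__constant__ float filterCoeffs[64];   // read-only coefficient table, cached on chip

__global__ void applyFilter(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * filterCoeffs[i % 64];   // every thread hits the constant cache
}

// Host side, before the launch:
//   cudaMemcpyToSymbol(filterCoeffs, hostCoeffs, 64 * sizeof(float));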

Variable Type Restrictions

Pointers can only point to memory allocated or declared in global memory:
–Allocated by the host and passed to the kernel: __global__ void KernelFunc(float* ptr)
–Obtained as the address of a global variable: float* ptr = &GlobalVar;
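
A brief sketch of both cases (illustrative only; GlobalVar is assumed to be the __device__ int from the qualifier slide):

__device__ int GlobalVar;              // variable declared in global memory

__global__ void KernelFunc(float* ptr) // ptr: global memory allocated by the host with cudaMalloc
{
    int* gptr = &GlobalVar;            // legal: address of a variable declared in global memory
    if (threadIdx.x == 0)
        *gptr = (int)ptr[0];           // both pointers refer to global memory
}

// Host side:
//   float* d_data;
//   cudaMalloc(&d_data, 256 * sizeof(float));
//   KernelFunc<<<1, 256>>>(d_data);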

GPU Atomic Integer Operations

Atomic operations on integers in global memory:
–Associative operations on signed/unsigned ints
–add, sub, min, max, ...
–and, or, xor
–increment, decrement
–exchange, compare-and-swap

Requires hardware with compute capability 1.1.
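
A one-line illustration (a sketch, not from the slides) of one such operation, an atomic add used to build a histogram in global memory:

__global__ void histogram256(const unsigned char* data, int n, int* bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1);  // safe concurrent increment of a counter in global memory
}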

Matrix Multiplication using Shared Memory

Step 4: Kernel Function (cont.)

for (int k = 0; k < Width; ++k) {
  float Melement = Md[ty * Width + k];
  float Nelement = Nd[k * Width + tx];
  Pvalue += Melement * Nelement;
}

// Write the matrix to device memory;
// each thread writes one element
Pd[ty * Width + tx] = Pvalue;

[Figure: row ty of Md and column tx of Nd combining, element by element over index k, into element (ty, tx) of Pd]
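
For context, this loop is the body of the straightforward (untiled) kernel. A self-contained sketch of the whole kernel, reconstructed along the lines of the standard textbook example (the signature is assumed, not shown on this slide):

__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // One thread computes one element of Pd; all operands come from global memory.
    // As in the original example, a single block covers the whole matrix, so
    // Width*Width must not exceed the maximum number of threads per block.
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    float Pvalue = 0;
    for (int k = 0; k < Width; ++k) {
        float Melement = Md[ty * Width + k];
        float Nelement = Nd[k * Width + tx];
        Pvalue += Melement * Nelement;
    }
    Pd[ty * Width + tx] = Pvalue;
}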

How about performance on G80?

[Figure: same memory hierarchy diagram as before: per-thread registers, per-block shared memory, grid-wide global and constant memory, host access]

All threads access global memory for their input matrix elements
–Two memory accesses (8 bytes) per floating-point multiply-add
–4 bytes of memory bandwidth needed per FLOP
–4 * 346.5 = 1386 GB/s required to achieve the peak FLOP rating
–86.4 GB/s limits the code to 21.6 GFLOPS

The actual code runs at about 15 GFLOPS. We need to drastically cut down memory accesses to get closer to the peak GFLOPS.

Idea: Use Shared Memory to Reuse Global Memory Data

Each input element is read by WIDTH threads. If we load each element into shared memory once and have several threads use the on-chip copy, we greatly reduce the required memory bandwidth
–Tiled algorithms

[Figure: M, N, and P matrices of size WIDTH x WIDTH, with the row/column accessed by thread (ty, tx) highlighted]

Tiled Multiply

Each block computes one square sub-matrix Pdsub of size TILE_WIDTH
Each thread computes one element of Pdsub
Assume that the dimensions of Md and Nd are multiples of TILE_WIDTH

[Figure: Md, Nd, and Pd divided into TILE_WIDTH x TILE_WIDTH tiles; block (bx, by) computes the sub-matrix Pdsub, and thread (tx, ty) computes one of its elements]

First-order Size Considerations in G80

Each thread block should have many threads
–TILE_WIDTH of 16 gives 16*16 = 256 threads

There should be many thread blocks
–A 1024*1024 Pd gives 64*64 = 4096 thread blocks

Each thread block performs 2*256 = 512 float loads from global memory for 256 * (2*16) = 8,192 mul/add operations
–Memory bandwidth is no longer a limiting factor

CUDA Code - Kernel Execution Configuration

// Setup the execution configuration
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH);
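
The launch itself is not shown on this slide; assuming the tiled kernel is named MatrixMulKernel and takes the device pointers and Width as in the surrounding slides, it would look roughly like:

// Launch one thread per element of Pd, organized into TILE_WIDTH x TILE_WIDTH blocks
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);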

CUDA Code - Kernel Overview

// Block index
int bx = blockIdx.x;
int by = blockIdx.y;
// Thread index
int tx = threadIdx.x;
int ty = threadIdx.y;

// Pvalue stores the element of the block sub-matrix
// that is computed by the thread (an automatic variable!)
float Pvalue = 0;

// Loop over all the sub-matrices of M and N
// required to compute the block sub-matrix
for (int m = 0; m < Width/TILE_WIDTH; ++m) {
  code from the next few slides
};

Tiled Multiply (cont.)

Each block computes one square sub-matrix Pdsub of size TILE_WIDTH
Each thread computes one element of Pdsub

[Figure: same tiling diagram, now annotated with the tile index m and the inner index k used when loading the tiles]

CUDA Code - Load Data to Shared Memory

// Get a pointer to the current sub-matrix Msub of M
float* Mdsub = GetSubMatrix(Md, m, by, Width);
// Get a pointer to the current sub-matrix Nsub of N
float* Ndsub = GetSubMatrix(Nd, bx, m, Width);

__shared__ float Mds[TILE_WIDTH][TILE_WIDTH];
__shared__ float Nds[TILE_WIDTH][TILE_WIDTH];

// Each thread loads one element of the sub-matrix
Mds[ty][tx] = GetMatrixElement(Mdsub, tx, ty);
// Each thread loads one element of the sub-matrix
Nds[ty][tx] = GetMatrixElement(Ndsub, tx, ty);

Tiled Multiply (cont.)

Each block computes one square sub-matrix Pdsub of size TILE_WIDTH
Each thread computes one element of Pdsub

[Figure: same tiling diagram, highlighting the k-th elements of the Mds and Nds tiles used in the compute phase]

CUDA Code - Compute Result

// Synchronize to make sure the sub-matrices are loaded
// before starting the computation
__syncthreads();

// Each thread computes one element of the block sub-matrix
for (int k = 0; k < TILE_WIDTH; ++k)
  Pvalue += Mds[ty][k] * Nds[k][tx];

// Synchronize to make sure that the preceding
// computation is done before loading two new
// sub-matrices of M and N in the next iteration
__syncthreads();

CUDA Code - Save Result

// Get a pointer to the block sub-matrix of P
Matrix Psub = GetSubMatrix(P, bx, by);

// Write the block sub-matrix to device memory;
// each thread writes one element
SetMatrixElement(Psub, tx, ty, Pvalue);

This code runs at about 45 GFLOPS on G80.
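
Putting the pieces from the last few slides together: a self-contained sketch of the tiled kernel with the helper calls replaced by direct indexing (this consolidation is not a slide from the deck; it follows the standard textbook formulation):

#define TILE_WIDTH 16

__global__ void MatrixMulTiled(float* Md, float* Nd, float* Pd, int Width)
{
    __shared__ float Mds[TILE_WIDTH][TILE_WIDTH];
    __shared__ float Nds[TILE_WIDTH][TILE_WIDTH];

    int bx = blockIdx.x,  by = blockIdx.y;
    int tx = threadIdx.x, ty = threadIdx.y;

    int Row = by * TILE_WIDTH + ty;     // row of Pd this thread computes
    int Col = bx * TILE_WIDTH + tx;     // column of Pd this thread computes

    float Pvalue = 0;
    for (int m = 0; m < Width / TILE_WIDTH; ++m) {
        // Each thread loads one element of the Md tile and one of the Nd tile
        Mds[ty][tx] = Md[Row * Width + (m * TILE_WIDTH + tx)];
        Nds[ty][tx] = Nd[(m * TILE_WIDTH + ty) * Width + Col];
        __syncthreads();                // tiles fully loaded before use

        for (int k = 0; k < TILE_WIDTH; ++k)
            Pvalue += Mds[ty][k] * Nds[k][tx];
        __syncthreads();                // finish using the tiles before overwriting them
    }
    Pd[Row * Width + Col] = Pvalue;
}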

Tying Up Some Loose Ends

GetSubMatrix(Md, x, y, Width)
–returns Md + y*TILE_WIDTH*Width + x*TILE_WIDTH

GetMatrixElement(Mdsub, tx, ty, Width)
–returns *(Mdsub + ty*Width + tx)
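
A sketch of what those helpers could look like as __device__ functions (the slide gives only the expressions; these wrappers, and the SetMatrixElement counterpart, are a reconstruction and assume TILE_WIDTH is defined):

// Pointer to the top-left element of the (x, y)-th TILE_WIDTH x TILE_WIDTH sub-matrix
__device__ float* GetSubMatrix(float* Md, int x, int y, int Width)
{
    return Md + y * TILE_WIDTH * Width + x * TILE_WIDTH;
}

// Element (tx, ty) of a sub-matrix, which still uses the full row pitch Width
__device__ float GetMatrixElement(float* Mdsub, int tx, int ty, int Width)
{
    return *(Mdsub + ty * Width + tx);
}

__device__ void SetMatrixElement(float* Pdsub, int tx, int ty, int Width, float value)
{
    *(Pdsub + ty * Width + tx) = value;
}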

Summary - Typical Structure of a CUDA Program

Global variable declarations
–__host__
–__device__ ... __global__, __constant__, __texture__

Function prototypes
–__global__ void kernelOne(...)
–float handyFunction(...)

main()
–allocate memory space on the device: cudaMalloc(&d_GlblVarPtr, bytes)
–transfer data from host to device: cudaMemcpy(d_GlblVarPtr, h_Gl...)
–execution configuration setup
–kernel call: kernelOne<<<execution configuration>>>(args...);
–transfer results from device to host: cudaMemcpy(h_GlblVarPtr, ...)
–optional: compare against a golden (host-computed) solution
(repeat these steps as needed)

Kernel: void kernelOne(type args, ...)
–variable declarations: __local__, __shared__
–automatic variables are transparently assigned to registers or local memory
–__syncthreads()...

Other functions
–float handyFunction(int inVar, ...);
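
To make the summary concrete, a minimal host-plus-kernel sketch following that structure (an illustration, not code from the course; the doubling kernel and all names are placeholders):

#include <cuda_runtime.h>
#include <stdio.h>

__global__ void kernelOne(float* d_out, const float* d_in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d_out[i] = 2.0f * d_in[i];        // trivial per-element work
}

int main()
{
    const int n = 1024, bytes = n * sizeof(float);
    float h_in[1024], h_out[1024];
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    // Allocate memory space on the device and transfer data host -> device
    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Execution configuration setup and kernel call
    dim3 dimBlock(256);
    dim3 dimGrid(n / 256);
    kernelOne<<<dimGrid, dimBlock>>>(d_out, d_in, n);

    // Transfer results device -> host and compare against a host-computed value
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("h_out[10] = %f (expect %f)\n", h_out[10], 2.0f * h_in[10]);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}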