
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE498AL, University of Illinois, Urbana-Champaign 1 CUDA Threads

2 CUDA Thread Block
All threads in a block execute the same kernel program (SPMD)
Programmer declares the block:
– Block size: 1 to 512 concurrent threads
– Block shape: 1D, 2D, or 3D
– Block dimensions are given in threads
Threads have thread ID numbers within the block
– The thread program uses its thread ID to select work and address shared data
Threads in the same block share data and synchronize while doing their share of the work
Threads in different blocks cannot cooperate
– Each block can execute in any order relative to other blocks
[Figure: a thread block with thread IDs 0 … m, each running the thread program. Courtesy: John Nickolls, NVIDIA]

3 CUDA Thread Organization
blockIdx & threadIdx: unique coordinates that distinguish threads and identify, for each thread, the appropriate portion of the data to process
– Assigned to threads by the CUDA runtime system
– Appear as built-in variables, initialized by the runtime system and accessed within kernel functions
– References to the blockIdx and threadIdx variables return the values that form the coordinates of the thread
gridDim & blockDim: built-in variables that provide the dimensions of the grid and the block
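For illustration, a minimal kernel (the name whoAmI and its output array are hypothetical, not from the slides) showing how these built-ins combine into a unique per-thread index in the 1D case:

  __global__ void whoAmI(int *out)
  {
      // Unique index of this thread within the whole grid (1D case)
      int globalId = blockIdx.x * blockDim.x + threadIdx.x;
      // Total number of threads launched, derived from the built-ins
      int totalThreads = gridDim.x * blockDim.x;
      if (globalId < totalThreads)   // always true here; shown to illustrate the bound
          out[globalId] = globalId;
  }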

4 Example Thread Organization
M = 8 threads/block (1D organization)
– threadIdx.x = 0, 1, 2, …, 7
N thread blocks (1D organization)
– blockIdx.x = 0, 1, …, N-1
8*N threads/grid: blockDim.x = 8; gridDim.x = N
In the code of the figure:
– threadID = blockIdx.x*blockDim.x + threadIdx.x
– For Thread 3 of Block 5, threadID = 5*8 + 3 = 43
[Figure: Thread Blocks 0 through N-1, each running the same fragment: float x = input[threadID]; float y = func(x); output[threadID] = y;]
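A runnable version of the fragment in the figure; the slide leaves func abstract, so squaring stands in for it here:

  __global__ void applyFunc(const float *input, float *output)
  {
      int threadID = blockIdx.x * blockDim.x + threadIdx.x;
      float x = input[threadID];
      float y = x * x;            // placeholder for func(x)
      output[threadID] = y;
  }
  // For this example (M = 8 threads/block, N blocks):
  // applyFunc<<<N, 8>>>(d_input, d_output);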

5 General Thread Organization
Grid: 2D array of blocks; Block: 3D array of threads
The execution configuration determines the exact organization
In the example of the previous slide, if N = 128 and M = 32, the following execution configuration is used:
  dim3 dimGrid(128, 1, 1);   // dimGrid.x, dimGrid.y take values 1 to 65,535
  dim3 dimBlock(32, 1, 1);   // size of block limited to 512 threads
  KernelFunction<<<dimGrid, dimBlock>>>(...);
blockIdx.x ranges between 0 and dimGrid.x-1
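A hedged host-side sketch wrapping this configuration into a complete launch; KernelFunction's signature and launchExample are placeholders, not code from the slides:

  __global__ void KernelFunction(float *data);   // placeholder kernel declaration

  void launchExample(float *d_data)
  {
      dim3 dimGrid(128, 1, 1);   // N = 128 blocks in x
      dim3 dimBlock(32, 1, 1);   // M = 32 threads per block
      KernelFunction<<<dimGrid, dimBlock>>>(d_data);
  }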

6 Another Example
  dim3 dimGrid(2, 2, 1);
  dim3 dimBlock(4, 2, 2);
  KernelFunction<<<dimGrid, dimBlock>>>(...);
Block(1,0) has blockIdx.x = 1 and blockIdx.y = 0
Thread(2,1,0) has threadIdx.x = 2; threadIdx.y = 1; threadIdx.z = 0
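To make the 3D coordinates concrete, a sketch (hypothetical kernel linearize) that flattens the thread and block coordinates of this configuration into one linear index:

  __global__ void linearize(int *out)
  {
      // Flatten the 3D thread coordinates into an index within the block (0..15 for 4x2x2)
      int tInBlock = threadIdx.z * (blockDim.y * blockDim.x)
                   + threadIdx.y * blockDim.x
                   + threadIdx.x;
      // Flatten the 2D block coordinates into an index within the grid (0..3 for 2x2)
      int bInGrid = blockIdx.y * gridDim.x + blockIdx.x;
      int threadsPerBlock = blockDim.x * blockDim.y * blockDim.z;   // 16
      out[bInGrid * threadsPerBlock + tInBlock] = tInBlock;
  }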

7 Matrix Multiplication Using Multiple Blocks
Break Pd up into tiles
Each thread block calculates one tile; block size equals tile size
Each thread calculates one Pd element, identified using
– blockIdx.x & blockIdx.y to identify the tile
– threadIdx.x & threadIdx.y to identify the thread within the tile
[Figure: matrices Md, Nd, and Pd of width WIDTH, with Pd partitioned into TILE_WIDTH × TILE_WIDTH sub-tiles; bx = blockIdx.x, by = blockIdx.y, tx = threadIdx.x, ty = threadIdx.y]

8 Revised Matrix Multiplication Using Multiple Blocks (cont.)
The y index of the Pd element computed by a thread is y = by*TILE_WIDTH + ty
The x index of the Pd element computed by a thread is x = bx*TILE_WIDTH + tx
Thread (tx,ty) in block (bx,by) uses row y of Md and column x of Nd to update element Pd[y*Width + x]
[Figure: same tiling diagram as the previous slide]

9 A Small Example: P(4,4), TILE_WIDTH = 2
For Block(0,0): blockIdx.x = 0 & blockIdx.y = 0
For Block(1,0): blockIdx.x = 1 & blockIdx.y = 0
For Block(0,1): blockIdx.x = 0 & blockIdx.y = 1
For Block(1,1): blockIdx.x = 1 & blockIdx.y = 1
[Figure: the 4×4 matrix P divided into four 2×2 tiles, one per block: Block(0,0), Block(1,0), Block(0,1), Block(1,1)]

10 A Small Example: Multiplication
Thread(0,0) of Block(0,0) calculates Pd(0,0)
Thread(0,0) of Block(0,1) calculates Pd(0,2)
Thread(1,1) of Block(0,0) calculates Pd(1,1)
Thread(1,1) of Block(0,1) calculates Pd(1,3)
[Figure: the element grids of Md, Nd, and Pd for the small example]
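The four statements above can be checked with a small host-only helper (hypothetical; it applies x = bx*TILE_WIDTH + tx and y = by*TILE_WIDTH + ty from the previous slides to print the Pd element computed by each thread):

  #include <stdio.h>
  #define TILE_WIDTH 2

  void whichElement(int bx, int by, int tx, int ty)
  {
      printf("Thread(%d,%d) of Block(%d,%d) calculates Pd(%d,%d)\n",
             tx, ty, bx, by, bx * TILE_WIDTH + tx, by * TILE_WIDTH + ty);
  }

  int main(void)
  {
      whichElement(0, 0, 0, 0);   // -> Pd(0,0)
      whichElement(0, 1, 0, 0);   // -> Pd(0,2)
      whichElement(0, 0, 1, 1);   // -> Pd(1,1)
      whichElement(0, 1, 1, 1);   // -> Pd(1,3)
      return 0;
  }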

11 Revised Host Code for Launching the Revised Kernel
  // Set up the execution configuration
  dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH, 1);
  dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);
  // Launch the device computation threads
  MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
With TILE_WIDTH = 16, the kernel can handle arrays of dimensions up to 1,048,560 × 1,048,560 (16 × 65,535 = 1,048,560), using 65,535 × 65,535 = 4,294,836,225 blocks, each with 256 threads
Total number of parallel threads = 1,099,478,073,600 > 1 tera (10^12) threads
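For context, a hedged sketch of the surrounding host code (names such as MatrixMulOnDevice and h_M are illustrative; it assumes TILE_WIDTH is defined, e.g. 16, and Width is a multiple of it):

  void MatrixMulOnDevice(const float *h_M, const float *h_N, float *h_P, int Width)
  {
      size_t size = Width * Width * sizeof(float);
      float *Md, *Nd, *Pd;
      cudaMalloc((void**)&Md, size);
      cudaMalloc((void**)&Nd, size);
      cudaMalloc((void**)&Pd, size);
      cudaMemcpy(Md, h_M, size, cudaMemcpyHostToDevice);
      cudaMemcpy(Nd, h_N, size, cudaMemcpyHostToDevice);

      dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH, 1);
      dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);
      MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

      cudaMemcpy(h_P, Pd, size, cudaMemcpyDeviceToHost);
      cudaFree(Md); cudaFree(Nd); cudaFree(Pd);
  }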

12 Revised Matrix Multiplication Kernel Using Multiple Blocks
  __global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
  {
      // Calculate the row index of the Pd element and of M
      int Row = blockIdx.y*TILE_WIDTH + threadIdx.y;
      // Calculate the column index of the Pd element and of N
      int Col = blockIdx.x*TILE_WIDTH + threadIdx.x;

      float Pvalue = 0;
      // Each thread computes one element of the block sub-matrix
      for (int k = 0; k < Width; ++k)
          Pvalue += Md[Row*Width+k] * Nd[k*Width+Col];
      Pd[Row*Width+Col] = Pvalue;
  }
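The kernel above assumes Width is an exact multiple of TILE_WIDTH. If it may not be, a guarded variant (hypothetical name, not from the slides) keeps the surplus threads of the edge blocks from writing out of bounds:

  __global__ void MatrixMulKernelGuarded(float* Md, float* Nd, float* Pd, int Width)
  {
      int Row = blockIdx.y*TILE_WIDTH + threadIdx.y;
      int Col = blockIdx.x*TILE_WIDTH + threadIdx.x;
      if (Row < Width && Col < Width) {   // guard: skip threads outside the matrix
          float Pvalue = 0;
          for (int k = 0; k < Width; ++k)
              Pvalue += Md[Row*Width+k] * Nd[k*Width+Col];
          Pd[Row*Width+Col] = Pvalue;
      }
  }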

13 Transparent Scalability
The CUDA runtime system can execute blocks in any order
– A kernel scales across any number of execution resources
Transparent scalability: the ability to execute the same application on hardware with different numbers of execution resources
[Figure: the same 8-block kernel grid executing two blocks at a time on a device with few resources and four blocks at a time on a device with more resources; each block can execute in any order relative to other blocks]

14 Thread Block Assignment to Streaming Multiprocessors
Upon kernel launch, threads are assigned to Streaming Multiprocessors (SMs) at block granularity
– Up to 8 blocks are assigned to each SM, as resources allow
– An SM in GT200 can take up to 1024 threads
  Could be 256 (threads/block) * 4 blocks
  Or 128 (threads/block) * 8 blocks, etc.
[Figure: blocks of threads t0 t1 t2 … tm assigned to SM 0 and SM 1, each with SPs, shared memory, and an MT issue unit]

15 Thread Block Assignment to Streaming Multiprocessors (cont.)
GT200 has 30 SMs
Up to 240 (30*8) blocks execute simultaneously
Up to 30,720 (30*1024) concurrent threads can reside in the SMs for execution
The SM maintains thread/block IDs
The SM manages/schedules thread execution

16 GT200 Example: Thread Scheduling
Each block is divided into 32-thread warps for scheduling purposes
– The size of a warp is implementation dependent and not part of the CUDA programming model
– The warp is the unit of thread scheduling in an SM
If 3 blocks are assigned to an SM and each block has 256 threads, how many warps are there in the SM?
– Each block is divided into 256/32 = 8 warps
– There are 8 * 3 = 24 warps in the SM
[Figure: warps (t0 t1 t2 … t31) of Blocks 1 and 2 resident on a Streaming Multiprocessor with SPs, SFUs, instruction L1 cache, instruction fetch/dispatch, and shared memory]
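The warp arithmetic above, as a hedged host-side helper (assumes the warp size of 32 quoted for GT200):

  int warpsPerBlock(int threadsPerBlock)
  {
      return (threadsPerBlock + 31) / 32;   // round up to whole warps
  }
  // For the example: 3 blocks * warpsPerBlock(256) = 3 * 8 = 24 warps in the SM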

17 GT200 Example: Thread Scheduling (cont.)
The SM implements zero-overhead warp scheduling:
– At any time, only one of the warps is executed by the SM
– Warps whose next instruction has its operands ready for consumption are eligible for execution
– Eligible warps are selected for execution on a prioritized scheduling policy
– All threads in a warp execute the same instruction when selected (SIMD execution)

18 GT200 Block Granularity Considerations
For matrix multiplication using multiple blocks, should I use 8×8, 16×16, or 32×32 blocks? (See the sketch after this list.)
– For 8×8 blocks, we have 64 threads per block. Since each SM can take up to 1024 threads, that would be 16 blocks (1024/64). However, each SM can only take up to 8 blocks, so only 512 (64*8) threads will go into each SM! SM execution resources are underutilized; there are fewer warps to schedule around long-latency operations.
– For 16×16 blocks, we have 256 threads per block. Since each SM can take up to 1024 threads, it can take up to 4 blocks (1024/256). Full thread capacity in each SM, and the maximal number of warps for scheduling around long-latency operations (1024/32 = 32 warps).
– For 32×32 blocks, we have 1024 threads per block. Not even one block can fit into an SM! (512 threads/block limitation)
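The same reasoning as a hedged sketch, using the GT200 limits quoted above (1024 threads/SM, 8 blocks/SM, 512 threads/block):

  #include <stdio.h>

  int residentThreadsPerSM(int threadsPerBlock)
  {
      const int maxThreadsPerSM = 1024, maxBlocksPerSM = 8, maxThreadsPerBlock = 512;
      if (threadsPerBlock > maxThreadsPerBlock) return 0;    // block cannot launch at all
      int blocks = maxThreadsPerSM / threadsPerBlock;        // thread-count limit
      if (blocks > maxBlocksPerSM) blocks = maxBlocksPerSM;  // block-count limit
      return blocks * threadsPerBlock;
  }

  int main(void)
  {
      printf("8x8 blocks:   %d threads/SM\n", residentThreadsPerSM(64));    // 512
      printf("16x16 blocks: %d threads/SM\n", residentThreadsPerSM(256));   // 1024
      printf("32x32 blocks: %d threads/SM\n", residentThreadsPerSM(1024));  // 0
      return 0;
  }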

19 Some Additional API Features

20 Application Programming Interface
The API is an extension to the C programming language
It consists of:
– Language extensions, to target portions of the code for execution on the device
– A runtime library split into:
  1. A host component to control and access one or more devices from the host
  2. A device component providing device-specific functions
  3. A common component providing built-in vector types and a subset of the C runtime library in both host and device codes

21 Language Extensions: Built-in Variables
dim3 gridDim; – dimensions of the grid in blocks (gridDim.z unused)
dim3 blockDim; – dimensions of the block in threads
dim3 blockIdx; – block index within the grid
dim3 threadIdx; – thread index within the block

22 Common Runtime Component: Mathematical Functions
pow, sqrt, cbrt, hypot
exp, exp2, expm1
log, log2, log10, log1p
sin, cos, tan, asin, acos, atan, atan2
sinh, cosh, tanh, asinh, acosh, atanh
ceil, floor, trunc, round
etc.
– When executed on the host, a given function uses the C runtime implementation if available
– These functions are only supported for scalar types, not vector types
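A minimal hedged example (hypothetical kernel mathDemo) using single-precision variants of two of the listed functions inside a kernel; the same calls would fall back to the C runtime implementation in host code:

  __global__ void mathDemo(const float *in, float *out, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n)
          out[i] = expf(sqrtf(in[i]));   // scalar single-precision math functions
  }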

23 Device Runtime Component: Mathematical Functions
Some mathematical functions (e.g. sinf(x)) have a less accurate but faster device-only version (e.g. __sinf(x)):
– __powf
– __logf, __log2f, __log10f
– __expf
– __sinf, __cosf, __tanf
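A hedged side-by-side sketch (hypothetical kernel name) contrasting the accurate and fast versions:

  __global__ void fastVsAccurate(const float *x, float *slow, float *fast, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) {
          slow[i] = sinf(x[i]);    // full-accuracy single-precision sine
          fast[i] = __sinf(x[i]);  // faster, less accurate device-only intrinsic
      }
  }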

24 Host Runtime Component
Provides functions to deal with:
– Device management (including multi-device systems)
– Memory management
– Error handling
Initializes the first time a runtime function is called
A host thread can invoke device code on only one device
– Multiple host threads are required to run on multiple devices
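A hedged host-only sketch exercising device management and error handling (the runtime calls shown are standard CUDA runtime API; the program itself is illustrative):

  #include <stdio.h>
  #include <cuda_runtime.h>

  int main(void)
  {
      int count = 0;
      cudaGetDeviceCount(&count);          // device management
      printf("%d CUDA device(s) found\n", count);
      cudaSetDevice(0);                    // this host thread now targets device 0

      cudaError_t err = cudaGetLastError();   // error handling
      if (err != cudaSuccess)
          printf("CUDA error: %s\n", cudaGetErrorString(err));
      return 0;
  }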

25 Device Runtime Component: Synchronization Function
void __syncthreads();
Synchronizes all threads in a block
Once all threads have reached this point, execution resumes normally
Used to avoid RAW/WAR/WAW hazards when accessing shared or global memory
Allowed in conditional constructs only if the conditional is uniform across the entire thread block
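A hedged sketch of the read-after-write case: each thread writes one shared-memory element, and __syncthreads() guarantees all writes are complete before any thread reads an element written by another thread (BLOCK_SIZE and the kernel name are illustrative; the launch must use exactly BLOCK_SIZE threads per block):

  #define BLOCK_SIZE 256

  __global__ void reverseInBlock(const float *in, float *out)
  {
      __shared__ float tile[BLOCK_SIZE];
      int i = blockIdx.x * blockDim.x + threadIdx.x;

      tile[threadIdx.x] = in[i];    // every thread writes one element
      __syncthreads();              // avoid the RAW hazard on the reads below
      out[i] = tile[blockDim.x - 1 - threadIdx.x];   // read another thread's element
  }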