
1 CUDA Threads
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

2 CUDA Thread Block
All threads in a block execute the same kernel program (SPMD)
Programmer declares block:
–Block size: 1 to 512 concurrent threads
–Block shape: 1D, 2D, or 3D
–Block dimensions in threads
Threads have thread id numbers within the block
–The thread program uses the thread id to select work and address shared data
Threads in the same block share data and synchronize while doing their share of the work
Threads in different blocks cannot cooperate
–Each block can execute in any order relative to other blocks!
[Figure: a CUDA thread block with thread ids 0, 1, 2, 3, …, m, every thread running the same thread program. Courtesy: John Nickolls, NVIDIA]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

3 CUDA Thread Organization: Implementation
blockIdx & threadIdx: unique coordinates that distinguish threads and identify, for each thread, the appropriate portion of the data to process
–Assigned to threads by the CUDA runtime system
–Appear as built-in variables that are initialized by the runtime system and accessed within kernel functions
–References to the blockIdx and threadIdx variables return the values that form the coordinates of the thread
gridDim & blockDim: built-in variables that provide the dimensions of the grid and the block
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

4 Example Thread Organization
M=8 threads/block (1D organization)
–threadIdx.x = 0, 1, 2, …, 7
N thread blocks (1D organization)
–blockIdx.x = 0, 1, …, N-1
8*N threads/grid: blockDim.x = 8; gridDim.x = N
In the code of the figure (made runnable in the sketch below):
–threadID = blockIdx.x*blockDim.x + threadIdx.x
–For Thread 3 of Block 5, threadID = 5*8 + 3 = 43
[Figure: Thread Block 0 through Thread Block N-1, each with threads 0-7, every thread running the same body: float x = input[threadID]; float y = func(x); output[threadID] = y;]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE 498AL Spring 2010, University of Illinois, Urbana-Champaign
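
As a concrete illustration (not part of the original slides), a minimal sketch of the kernel in the figure; the kernel name and func() are placeholders, and only the indexing scheme comes from the slide:

// Hypothetical per-element operation standing in for func() in the figure
__device__ float func(float x) { return 2.0f * x; }

__global__ void ExampleKernel(const float* input, float* output)
{
    // Global thread ID, exactly as derived on this slide
    int threadID = blockIdx.x * blockDim.x + threadIdx.x;
    float x = input[threadID];
    float y = func(x);
    output[threadID] = y;
}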

5 General Thread Organization
Grid: 2D array of blocks; Block: 3D array of threads
The Execution Configuration determines the exact organization
In the example of the previous slide, if N=128 & M=32, the following Execution Configuration is used:
dim3 dimGrid(128,1,1); // dimGrid.x, dimGrid.y take values 1 to 65,535
dim3 dimBlock(32,1,1); // size of block limited to 512 threads
KernelFunction<<<dimGrid, dimBlock>>>(…);
blockIdx.x ranges between 0 & dimGrid.x-1
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
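
Putting the last two slides together, a host-side launch could look like the following sketch. The configuration values come from the slide; ExampleKernel is the hypothetical kernel sketched above, and the array names are assumptions:

#include <cuda_runtime.h>

int main()
{
    const int N = 128, M = 32;            // values from the slide
    float *d_input, *d_output;            // device arrays (names are ours)
    cudaMalloc((void**)&d_input,  N * M * sizeof(float));
    cudaMalloc((void**)&d_output, N * M * sizeof(float));

    dim3 dimGrid(N, 1, 1);                // 128 blocks
    dim3 dimBlock(M, 1, 1);               // 32 threads per block
    ExampleKernel<<<dimGrid, dimBlock>>>(d_input, d_output);
    cudaDeviceSynchronize();              // wait for the kernel to finish

    cudaFree(d_input);
    cudaFree(d_output);
    return 0;
}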

6 Another Example
dim3 dimGrid(2,2,1);
dim3 dimBlock(4,2,2);
KernelFunction<<<dimGrid, dimBlock>>>(…);
Block(1,0) has blockIdx.x=1 & blockIdx.y=0
Thread(2,1,0) has threadIdx.x=2; threadIdx.y=1; threadIdx.z=0
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
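
A sketch (the kernel name and output array are ours) showing how a thread in this configuration can flatten its 3D coordinates into a single index; for Thread(2,1,0) in the 4x2x2 block, tidInBlock = 0*8 + 1*4 + 2 = 6:

__global__ void WhoAmI(int* out)
{
    // Linear index of this thread within its block (0..15 for a 4x2x2 block)
    int tidInBlock = threadIdx.z * blockDim.y * blockDim.x
                   + threadIdx.y * blockDim.x
                   + threadIdx.x;
    // Linear index of this block within the 2x2 grid
    int blockInGrid = blockIdx.y * gridDim.x + blockIdx.x;
    int threadsPerBlock = blockDim.x * blockDim.y * blockDim.z;
    out[blockInGrid * threadsPerBlock + tidInBlock] = tidInBlock;
}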

7 Matrix Multiplication Using Multiple Blocks
Break up Pd into tiles
Each thread block calculates one tile; block size equals tile size
Each thread calculates one Pd element, identified using:
–blockIdx.x & blockIdx.y to identify the tile
–threadIdx.x & threadIdx.y to identify the thread within the tile
[Figure: matrices Md, Nd, and Pd of size WIDTH, with a TILE_WIDTH-wide sub-tile Pdsub of Pd; tiles are indexed by bx = blockIdx.x (0, 1, 2) and by = blockIdx.y (0, 1, 2), threads within a tile by tx = threadIdx.x and ty = threadIdx.y (0 to TILE_WIDTH-1)]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

8 Revised Matrix Multiplication Using Multiple Blocks (cont.)
The y index of the Pd element computed by a thread is y = by*TILE_WIDTH + ty
The x index of the Pd element computed by a thread is x = bx*TILE_WIDTH + tx
Thread (tx,ty) in block (bx,by) uses row y of Md and column x of Nd to update element Pd[y*Width+x]
[Figure: the same tiling diagram as the previous slide]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

9 A Small Example: P(4,4), TILE_WIDTH = 2
[Figure: the 4x4 matrix P divided into four 2x2 tiles: Block(0,0) covers P0,0 P1,0 P0,1 P1,1; Block(1,0) covers P2,0 P3,0 P2,1 P3,1; Block(0,1) covers P0,2 P1,2 P0,3 P1,3; Block(1,1) covers P2,2 P3,2 P2,3 P3,3]
For Block(0,0): blockIdx.x=0 & blockIdx.y=0
For Block(1,0): blockIdx.x=1 & blockIdx.y=0
For Block(0,1): blockIdx.x=0 & blockIdx.y=1
For Block(1,1): blockIdx.x=1 & blockIdx.y=1
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

10 A Small Example: Multiplication
[Figure: matrices Md (elements Md0,0 through Md3,1) and Nd (elements Nd0,0 through Nd1,3) with the 4x4 product Pd; the Pd elements computed by four example threads are highlighted]
thread(0,0) of block(0,0) calculates Pd0,0
thread(0,0) of block(0,1) calculates Pd0,2
thread(1,1) of block(0,0) calculates Pd1,1
thread(1,1) of block(0,1) calculates Pd1,3
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

11 Revised Host Code for Launching the Revised Kernel
// Setup the execution configuration
dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH, 1);
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);
// Launch the device computation threads
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
With TILE_WIDTH = 16, the kernel can handle arrays of dimensions up to 1,048,560 x 1,048,560 (16 x 65,535 = 1,048,560), using 65,535 x 65,535 = 4,294,836,225 blocks, each with 256 threads
Total number of parallel threads = 1,099,478,073,600 > 1 Tera (10^12) threads
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
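
For context, a fuller host-side sketch around this launch, following the usual allocate/copy/launch/copy-back pattern (a sketch, not the course's exact code; the function name and the TILE_WIDTH value are assumptions, and MatrixMulKernel is the kernel on the next slide):

#include <cuda_runtime.h>

#define TILE_WIDTH 16   // assumed tile size (gives 256 threads/block)

void MatrixMulOnDevice(float* M, float* N, float* P, int Width)
{
    int size = Width * Width * sizeof(float);
    float *Md, *Nd, *Pd;

    // Allocate device memory and copy the input matrices over
    cudaMalloc((void**)&Md, size);
    cudaMalloc((void**)&Nd, size);
    cudaMalloc((void**)&Pd, size);
    cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
    cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);

    // Setup the execution configuration and launch (as on this slide)
    dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH, 1);
    dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);
    MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

    // Copy the result back and release device memory
    cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);
    cudaFree(Md); cudaFree(Nd); cudaFree(Pd);
}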

12 Revised Matrix Multiplication Kernel Using Multiple Blocks
__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Calculate the row index of the Pd element and Md
    int Row = blockIdx.y*TILE_WIDTH + threadIdx.y;
    // Calculate the column index of Pd and Nd
    int Col = blockIdx.x*TILE_WIDTH + threadIdx.x;

    float Pvalue = 0;
    // Each thread computes one element of the block sub-matrix
    for (int k = 0; k < Width; ++k)
        Pvalue += Md[Row*Width+k] * Nd[k*Width+Col];
    Pd[Row*Width+Col] = Pvalue;
}
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

13 Transparent Scalability
The CUDA runtime system can execute blocks in any order
–A kernel scales across any number of execution resources
Transparent Scalability: the ability to execute the same application on hardware with different numbers of execution resources
[Figure: a kernel grid of Blocks 0-7 executed on a device with few resources (two blocks at a time: 0-1, then 2-3, 4-5, 6-7) and on a device with more resources (four blocks at a time: 0-3, then 4-7); each block can execute in any order relative to other blocks]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

14 Thread Block Assignment to Streaming Multiprocessors
Upon kernel launch, threads are assigned to Streaming Multiprocessors (SMs) at block granularity
–Up to 8 blocks assigned to each SM, as resources allow
–An SM in the GT200 can take up to 1024 threads
Could be 256 threads/block * 4 blocks
Or 128 threads/block * 8 blocks, etc.
[Figure: SM 0 and SM 1, each holding blocks of threads t0, t1, t2, …, tm, with SPs, an MT issue unit, and shared memory]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

15 Thread Block Assignment to Streaming Multiprocessors (cont.)
The GT200 has 30 SMs
Up to 240 (30*8) blocks can execute simultaneously
Up to 30,720 (30*1024) concurrent threads can be resident in the SMs for execution
The SM maintains thread/block id #s
The SM manages/schedules thread execution
[Figure: the same SM diagram as the previous slide]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

16 GT200 Example: Thread Scheduling
Each block is divided into 32-thread Warps for scheduling purposes
–The size of a warp is implementation dependent; it is not part of the CUDA programming model
–The warp is the unit of thread scheduling in an SM
If 3 blocks are assigned to an SM and each block has 256 threads, how many warps are there in the SM?
–Each block is divided into 256/32 = 8 warps
–There are 8 * 3 = 24 warps in each SM
[Figure: a Streaming Multiprocessor (instruction L1, instruction fetch/dispatch, SPs, SFUs, shared memory) holding warps of threads t0 t1 t2 … t31 from Block 1 and Block 2]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
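
A small host-side sketch of this warp arithmetic (the function name is ours; rounding up covers block sizes that are not multiples of 32, since a partial warp still occupies a whole warp slot):

int warpsPerBlock(int threadsPerBlock, int warpSize /* 32 on GT200 */)
{
    return (threadsPerBlock + warpSize - 1) / warpSize;
}
// warpsPerBlock(256, 32) == 8; with 3 such blocks, 3 * 8 == 24 warps per SM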

17 GT200 Example: Thread Scheduling (cont.)
The SM implements zero-overhead warp scheduling:
–At any time, only one of the warps is executed by the SM
–Warps whose next instruction has its operands ready for consumption are eligible for execution
–Eligible warps are selected for execution on a prioritized scheduling policy
–All threads in a warp execute the same instruction when selected (SIMD execution)
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

18 GT200 Block Granularity Considerations
For matrix multiplication using multiple blocks, should I use 8x8, 16x16, or 32x32 blocks? (See the sketch after this slide.)
–For 8x8 blocks, we have 64 threads per block. Since each SM can take up to 1024 threads, that would be 16 blocks (1024/64). However, each SM can only take up to 8 blocks, so only 512 (64*8) threads will go into each SM! SM execution resources are underutilized; there are fewer warps to schedule around long-latency operations.
–For 16x16 blocks, we have 256 threads per block. Since each SM can take up to 1024 threads, it can take up to 4 blocks (1024/256). Full thread capacity in each SM, and the maximal number of warps for scheduling around long-latency operations (1024/32 = 32 warps).
–For 32x32 blocks, we have 1024 threads per block. Not even one block can fit into an SM! (512 threads/block limitation)
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
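
The slide's reasoning, expressed as a small host-side sketch (the function name is ours; the limits are the GT200 figures quoted on these slides):

int residentThreadsPerSM(int threadsPerBlock)
{
    const int maxThreadsPerSM  = 1024;  // GT200 thread limit per SM
    const int maxBlocksPerSM   = 8;     // GT200 block limit per SM
    const int maxThreadsPerBlk = 512;   // per-block limit on this hardware

    if (threadsPerBlock > maxThreadsPerBlk) return 0;  // 32x32 case: cannot launch
    int blocks = maxThreadsPerSM / threadsPerBlock;    // thread-limited block count
    if (blocks > maxBlocksPerSM) blocks = maxBlocksPerSM;
    return blocks * threadsPerBlock;
}
// residentThreadsPerSM(64)   == 512  (8x8 blocks: limited to 8 blocks/SM)
// residentThreadsPerSM(256)  == 1024 (16x16 blocks: full thread capacity)
// residentThreadsPerSM(1024) == 0    (32x32 blocks: exceeds 512 threads/block)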

19 Some Additional API Features
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

20 Application Programming Interface
The API is an extension to the C programming language
It consists of:
–Language extensions, to target portions of the code for execution on the device
–A runtime library split into:
1. A host component to control and access one or more devices from the host
2. A device component providing device-specific functions
3. A common component providing built-in vector types and a subset of the C runtime library in both host and device codes
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

21 Language Extensions: Built-in Variables
dim3 gridDim;
–Dimensions of the grid in blocks (gridDim.z unused)
dim3 blockDim;
–Dimensions of the block in threads
dim3 blockIdx;
–Block index within the grid
dim3 threadIdx;
–Thread index within the block
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
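
A sketch combining all four built-in variables into global 2D coordinates, the same pattern used by the matrix multiplication kernel (the kernel name and parameters are hypothetical):

__global__ void Fill2D(float* data, int width, int height)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // global x coordinate
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // global y coordinate
    if (row < height && col < width)                  // guard against partial tiles
        data[row * width + col] = 0.0f;
}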

22 Common Runtime Component: Mathematical Functions
pow, sqrt, cbrt, hypot
exp, exp2, expm1
log, log2, log10, log1p
sin, cos, tan, asin, acos, atan, atan2
sinh, cosh, tanh, asinh, acosh, atanh
ceil, floor, trunc, round
etc.
–When executed on the host, a given function uses the C runtime implementation if available
–These functions are only supported for scalar types, not vector types
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign

23 Device Runtime Component: Mathematical Functions
Some mathematical functions (e.g. sinf(x)) have a less accurate but faster device-only version (e.g. __sinf(x)):
–__powf
–__logf, __log2f, __log10f
–__expf
–__sinf, __cosf, __tanf
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
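
A minimal sketch contrasting the two versions (the kernel name is ours; sinf and __sinf are both real CUDA device functions):

__global__ void SinBoth(const float* in, float* accurate, float* fast, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        accurate[i] = sinf(in[i]);    // full-accuracy device version
        fast[i]     = __sinf(in[i]);  // faster, less accurate intrinsic
    }
}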

24 Host Runtime Component
Provides functions to deal with:
–Device management (including multi-device systems)
–Memory management
–Error handling
Initializes the first time a runtime function is called
A host thread can invoke device code on only one device
–Multiple host threads are required to run on multiple devices
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
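
A sketch of the error-handling portion (the macro name is ours; cudaError_t and cudaGetErrorString are real runtime API features):

#include <cuda_runtime.h>
#include <stdio.h>

// Wrap runtime calls to report failures by name
#define CHECK_CUDA(call)                                    \
    do {                                                    \
        cudaError_t err = (call);                           \
        if (err != cudaSuccess)                             \
            fprintf(stderr, "CUDA error: %s\n",             \
                    cudaGetErrorString(err));               \
    } while (0)

// Usage: CHECK_CUDA(cudaMalloc((void**)&Md, size));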

25 Device Runtime Component: Synchronization Function
void __syncthreads();
Synchronizes all threads in a block
Once all threads have reached this point, execution resumes normally
Used to avoid RAW / WAR / WAW hazards when accessing shared or global memory
Allowed in conditional constructs only if the conditional is uniform across the entire thread block
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009 ECE498AL, University of Illinois, Urbana-Champaign
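
A sketch of __syncthreads() guarding a shared-memory exchange (the kernel and the block-reversal example are ours; it assumes 256 threads per block):

__global__ void ReverseBlock(float* data)
{
    __shared__ float tile[256];                 // assumes blockDim.x == 256
    int t = threadIdx.x;
    int base = blockIdx.x * blockDim.x;

    tile[t] = data[base + t];                   // each thread fills one slot
    __syncthreads();                            // all writes complete before any read (avoids a RAW hazard)
    data[base + t] = tile[blockDim.x - 1 - t];  // read a slot written by another thread
}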

