
1 ITCS 5/4010 Parallel Computing, B. Wilkinson, Jan 14, 2013. CUDAMultiDimBlocks.ppt
CUDA Grids, Blocks, and Threads
These notes will introduce:
- One-dimensional and multidimensional grids and blocks
- How the grid and block structures are defined in CUDA
- Predefined CUDA variables
- Adding vectors using one-dimensional structures
- Adding/multiplying arrays using 2-dimensional structures

2 Grids, Blocks, and Threads
NVIDIA GPUs consist of an array of execution cores, each of which can support a large number of threads, many more than the number of cores.
- Threads are grouped into "blocks"
- Blocks can be 1, 2, or 3 dimensional
- Each kernel call uses a "grid" of blocks
- Grids can be 1, 2, or 3 dimensional (3-D grids available on recent GPUs)
- The programmer specifies the grid/block organization on each kernel call (it can differ from call to call), within limits set by the GPU

3 CUDA SIMT Thread Structure
[Figure: grid of thread blocks, from the NVIDIA CUDA C Programming Guide, v3.2, 2010. Grids can be 1 or 2 dimensional (or 3 from compute capability 2.x, see next slide); blocks can be 1, 2, or 3 dimensional.]
This structure allows flexibility and efficiency in processing 1-D, 2-D, and 3-D data on the GPU. It is linked to the GPU's internal organization: threads in one block execute together.

4 Device characteristics -- some limitations
NVIDIA defines "compute capabilities" 1.0, 1.1, ... with limits and features supported.

                                       1.0 (min)   2.x*        3.0
Grid:
  Max dimensionality                   2           3           3
  Max size of each dimension           65,535      65,535      2^31 - 1 in x
  (no. of blocks in each dimension)                            (2,147,483,647)
Blocks:
  Max dimensionality                   3           3           3
  Max sizes of x- and y-dimension      512         1,024       1,024
  Max size of z-dimension              64          64          64
  Max number of threads per block      512         1,024       1,024

* Our C2050s are compute capability 2.0. As of mid-2012, compute capabilities reach up to 3.x.
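
These limits need not be memorized: they can be queried at run time with the standard cudaGetDeviceProperties call. A minimal sketch (not from the original slides):

#include <stdio.h>

int main() {
   cudaDeviceProp prop;
   cudaGetDeviceProperties(&prop, 0);   // properties of device 0
   printf("Compute capability: %d.%d\n", prop.major, prop.minor);
   printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
   printf("Max block dims: %d x %d x %d\n",
          prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
   printf("Max grid dims: %d x %d x %d\n",
          prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
   return 0;
}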

5 Defining Grid/Block Structure
Need to provide each kernel call with values for:
- Number of blocks in each dimension
- Threads per block in each dimension

myKernel<<<B, T>>>(arg1, ... );

B -- a structure that defines the number of blocks in the grid in each dimension (1-D, 2-D, or 3-D)
T -- a structure that defines the number of threads in a block in each dimension (1-D, 2-D, or 3-D)

6 1-D grid and/or 1-D blocks
If you want a 1-D structure, you can use a simple integer for B and T in:

myKernel<<<B, T>>>(arg1, ... );

B -- an integer defines a 1-D grid of that size (number of blocks)
T -- an integer defines a 1-D block of that size (number of threads)

Example (illustrative values):
myKernel<<<8, 256>>>(arg1, ... );   // 8 blocks of 256 threads each

7 CUDA Built-in Variables for a 1-D grid and 1-D block
threadIdx.x -- "thread index" within block in "x" dimension
blockIdx.x -- "block index" within grid in "x" dimension
blockDim.x -- "block dimension" in "x" dimension (i.e. number of threads in a block in the x dimension)

The full global thread ID in the x dimension can be computed by:
x = blockIdx.x * blockDim.x + threadIdx.x;

8 Example -- x direction
A 1-D grid and 1-D blocks: 4 blocks, each having 8 threads.
[Figure: a row of 4 blocks (blockIdx.x = 0, 1, 2, 3), each containing 8 threads (threadIdx.x = 0..7); gridDim = 4 x 1, blockDim = 8 x 1. The thread with global ID 26 is highlighted.]
With linear global addressing:
Global thread ID = blockIdx.x * blockDim.x + threadIdx.x = 3 * 8 + 2 = thread 26
Derived from Jason Sanders, "Introduction to CUDA C", GPU Technology Conference, Sept. 20, 2010.

9 Code example with a 1-D grid and 1-D blocks: vector addition

#define N 2048   // size of vectors
#define T 256    // number of threads per block

__global__ void vecAdd(int *a, int *b, int *c) {
   int i = blockIdx.x*blockDim.x + threadIdx.x;
   c[i] = a[i] + b[i];
}

int main(int argc, char **argv) {
   ...
   vecAdd<<<N/T, T>>>(devA, devB, devC);  // N/T blocks map each vector across the grid,
                                          // one element of each vector per thread
                                          // (assumes N/T is an integer)
   ...
   return 0;
}

Note: __global__ is a CUDA function qualifier (__ is two underscores). A __global__ function must return void.

10 If N/T is not necessarily an integer:

#define N 2048   // size of vectors
#define T 240    // number of threads per block

__global__ void vecAdd(int *a, int *b, int *c) {
   int i = blockIdx.x*blockDim.x + threadIdx.x;
   if (i < N)
      c[i] = a[i] + b[i];   // allows more threads than vector elements; some threads unused
}

int main(int argc, char **argv) {
   int blocks = (N + T - 1) / T;   // efficient way of rounding up to the next integer
   ...
   vecAdd<<<blocks, T>>>(devA, devB, devC);
   ...
   return 0;
}
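
The host code elided in main() follows the usual CUDA runtime pattern. A minimal sketch, assuming host arrays a, b, c and keeping the device pointer names devA, devB, devC from the slides:

int a[N], b[N], c[N];                        // host arrays, initialized elsewhere
int *devA, *devB, *devC;
cudaMalloc((void**)&devA, N * sizeof(int));  // allocate device copies
cudaMalloc((void**)&devB, N * sizeof(int));
cudaMalloc((void**)&devC, N * sizeof(int));
cudaMemcpy(devA, a, N * sizeof(int), cudaMemcpyHostToDevice);
cudaMemcpy(devB, b, N * sizeof(int), cudaMemcpyHostToDevice);
int blocks = (N + T - 1) / T;                // round up so all N elements are covered
vecAdd<<<blocks, T>>>(devA, devB, devC);
cudaMemcpy(c, devC, N * sizeof(int), cudaMemcpyDeviceToHost);
cudaFree(devA); cudaFree(devB); cudaFree(devC);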

11 Higher dimensional grids/blocks
- A 1-D grid and 1-D blocks are suitable for processing one-dimensional data.
- Higher dimensional grids and blocks are convenient for higher dimensional data: processing a 2-D array might use a two-dimensional grid and two-dimensional blocks.
- Higher dimensions may also be needed because of the limits on block size in each dimension.
- CUDA provides built-in variables and structures to define the number of blocks in a grid in each dimension and the number of threads in a block in each dimension.

12 Built-in CUDA data types and structures: CUDA vector types
uint3 and dim3 can be considered essentially as CUDA-defined structures of unsigned integers x, y, z, i.e.

struct uint3 { unsigned int x, y, z; };
struct dim3 { unsigned int x, y, z; };

Used to define the grid of blocks and the threads per block (see next slides). Unassigned dim3 structure components are automatically set to 1. There are other CUDA vector types.
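
Because unassigned components default to 1, a 1-D grid can be declared by giving only x. A small illustration (variable names are ours, not the slides'):

dim3 grid(64);        // grid.x = 64, grid.y = 1, grid.z = 1
dim3 block(16, 16);   // block.x = 16, block.y = 16, block.z = 1
// equivalent explicit forms: dim3 grid(64, 1, 1); dim3 block(16, 16, 1);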

13 Built-in Variables for Grid/Block Sizes
dim3 gridDim -- grid dimensions in x, y, and z.
   Number of blocks in the grid = gridDim.x * gridDim.y * gridDim.z
dim3 blockDim -- block dimensions in x, y, and z.
   Number of threads in a block = blockDim.x * blockDim.y * blockDim.z

14 Example Initializing Values
To set dimensions, use for example:

dim3 grid(16, 16);    // grid of 16 x 16 blocks
dim3 block(32, 32);   // blocks of 32 x 32 threads
...
myKernel<<<grid, block>>>(...);

which sets:
gridDim.x = 16, gridDim.y = 16, gridDim.z = 1
blockDim.x = 32, blockDim.y = 32, blockDim.z = 1

15 CUDA Built-in Variables (structures) for Grid/Block Indices
uint3 blockIdx -- block index within the grid: blockIdx.x, blockIdx.y, blockIdx.z
uint3 threadIdx -- thread index within the block: threadIdx.x, threadIdx.y, threadIdx.z

2-D: the full global thread ID in the x and y dimensions can be computed by:
x = blockIdx.x * blockDim.x + threadIdx.x;
y = blockIdx.y * blockDim.y + threadIdx.y;

16 2-D Grids and 2-D Blocks
[Figure: a 2-D grid of 2-D blocks; a thread's global coordinates are
x = blockIdx.x * blockDim.x + threadIdx.x
y = blockIdx.y * blockDim.y + threadIdx.y]

17 Flattening arrays onto linear memory
Memory is generally allocated dynamically on the device (GPU), and we cannot use two-dimensional indices (e.g. a[row][column]) to access the array as we might otherwise. (Why?)
We need to know how the array is laid out in memory and then compute the offset from the beginning of the array.
C uses row-major order: rows are stored one after the other in memory, i.e. row 0, then row 1, etc.

18 Flattening an array
For an array with N columns, element a[row][column] is found at a[offset], where:

offset = column + row * N

(row * N skips over the N elements of each of the preceding rows.)

Note: another way to flatten the array is offset = row + column * N (column-major order). We will come back to this later, as it has very significant consequences for performance.
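
The formula can be checked with an ordinary host-side C array; a small sketch with illustrative names and sizes:

#define COLS 4
int a[3][COLS];             // 3 rows, 4 columns, row-major in C
int *flat = &a[0][0];       // the same memory viewed as a linear array

a[2][3] = 99;               // element at row 2, column 3
// offset = column + row * COLS = 3 + 2 * 4 = 11, so flat[11] == 99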

19 Using CUDA variables

int col = blockIdx.x*blockDim.x + threadIdx.x;
int row = blockIdx.y*blockDim.y + threadIdx.y;
int index = col + row * N;
a[index] = ...

20 Example using a 2-D grid and 2-D blocks: adding two arrays
Corresponding elements of each array are added together to form the elements of a third array.

21 CUDA version using a 2-D grid and 2-D blocks: adding two arrays

#define N 2048   // size of arrays

__global__ void addMatrix(int *a, int *b, int *c) {
   int col = blockIdx.x*blockDim.x + threadIdx.x;
   int row = blockIdx.y*blockDim.y + threadIdx.y;
   int index = col + row * N;
   if (col < N && row < N)
      c[index] = a[index] + b[index];
}

int main() {
   ...
   dim3 dimBlock(16, 16);
   dim3 dimGrid(N/dimBlock.x, N/dimBlock.y);
   addMatrix<<<dimGrid, dimBlock>>>(devA, devB, devC);
   ...
}
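
The grid computation above assumes N is divisible by the block dimensions (2048/16 = 128). If it is not, the grid can be rounded up in each dimension, exactly as in the 1-D vector example; a minimal sketch:

dim3 dimBlock(16, 16);
dim3 dimGrid((N + dimBlock.x - 1) / dimBlock.x,   // round up in x
             (N + dimBlock.y - 1) / dimBlock.y);  // round up in y
addMatrix<<<dimGrid, dimBlock>>>(devA, devB, devC);

The if (col < N && row < N) test in the kernel then discards the surplus threads.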

22 Example using a 2-D grid and 2-D blocks: multiplying two arrays
Matrix multiplication, C = A x B

23 Implementing Matrix Multiplication: Sequential Code
Assume the matrices are square (N x N):

for (i = 0; i < N; i++)
   for (j = 0; j < N; j++) {
      sum = 0;
      for (k = 0; k < N; k++)
         sum = sum + a[i][k] * b[k][j];
      c[i][j] = sum;
   }

Requires N^3 multiplications and N^3 additions: sequential time complexity of O(N^3). Very easy to parallelize.

24 Example using a 2-D grid and 2-D blocks: multiplying two arrays

__global__ void gpu_matrixmult(int *a, int *b, int *c, int N) {
   int k, sum = 0;
   int col = threadIdx.x + blockDim.x * blockIdx.x;
   int row = threadIdx.y + blockDim.y * blockIdx.y;
   if (col < N && row < N) {
      for (k = 0; k < N; k++)
         sum += a[row * N + k] * b[k * N + col];
      c[row * N + col] = sum;
   }
}

Question: Would this work with a 1-D grid and 1-D blocks?
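
One answer, as a hedged sketch rather than the slides' own solution: yes, provided each thread recovers a row and column from its linear index (the kernel name and launch shown here are illustrative):

__global__ void gpu_matrixmult_1D(int *a, int *b, int *c, int N) {
   int index = threadIdx.x + blockDim.x * blockIdx.x;  // linear global thread ID
   int row = index / N;                                // recover 2-D coordinates
   int col = index % N;
   if (row < N && col < N) {
      int sum = 0;
      for (int k = 0; k < N; k++)
         sum += a[row * N + k] * b[k * N + col];
      c[row * N + col] = sum;
   }
}

// Launch with enough threads to cover all N*N elements, e.g.:
// gpu_matrixmult_1D<<<(N*N + 255)/256, 256>>>(devA, devB, devC, N);

Note, however, that the 1-D grid size limit (65,535 blocks below compute capability 3.0) caps the matrix size reachable this way, which is one motivation for 2-D grids.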

Questions