An Introduction to Programming with CUDA Paul Richmond


1 An Introduction to Programming with CUDA. Paul Richmond, GPUComputing@Sheffield, http://gpucomputing.sites.sheffield.ac.uk/

2 Motivation; Introduction to CUDA Kernels; CUDA Memory Management. Overview

3 Motivation; Introduction to CUDA Kernels; CUDA Memory Management. Overview

4 Traditional sequential languages are not suitable for GPUs: GPUs are data parallel, NOT task parallel. CUDA allows NVIDIA GPUs to be programmed in C/C++ (and also in Fortran). It provides language extensions for compute “kernels”, which execute on multiple threads concurrently, and API functions for management (e.g. memory, synchronisation, etc.). About NVIDIA CUDA

5 [Diagram: Main Program Code runs on the CPU with its DRAM; GPU Kernel Code runs on the NVIDIA GPU with its GDRAM; the two are connected by the PCIe bus]

6 The data set is decomposed into a stream of elements. A single computational function (kernel) operates on each element; a thread is the execution of a kernel on one data element. Multiple Streaming Multiprocessor cores can operate on multiple elements in parallel: many parallel threads. This is suitable for data-parallel problems. Stream Computing

7 NVIDIA GPUs have a 2-level hierarchy: each Streaming Multiprocessor (SM) has multiple cores, and the number of SMs and cores per SM varies between devices. Hardware Model [Diagram: a GPU contains SMs with shared memory, plus device memory]

8 The hardware is abstracted as a grid of thread blocks: blocks map to SMs, and each thread maps onto an SM core. You don't need to know the hardware characteristics: oversubscribe and allow the hardware to perform the scheduling (more blocks than SMs and more threads than cores), and code is portable across different GPU versions. CUDA Software Model [Diagram: a grid contains blocks; a block contains threads]

9 CUDA introduces new vector types (e.g. int2, float4); the type used for specifying dimensions is dim3. dim3 contains a collection of three integers (x, y, z): dim3 my_xyz(x_value, y_value, z_value); Values are accessed as members: int x = my_xyz.x; CUDA Vector Types
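A minimal sketch of dim3 in use (the variable names here are illustrative, not from the slides):

// Construct a dim3 and read its members
dim3 my_xyz(4, 2, 1);   // x=4, y=2, z=1
int x = my_xyz.x;       // 4
// Unspecified trailing components default to 1:
dim3 flat(128);         // equivalent to dim3(128, 1, 1)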

10 threadIdx: the location of a thread within a block, e.g. (2,1,0). blockIdx: the location of a block within the grid, e.g. (1,0,0). blockDim: the dimensions of a block, e.g. (3,9,1). gridDim: the dimensions of the grid, e.g. (3,2,1). The Idx values use zero-based indices; the Dim values are a size. Special dim3 Vectors
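As an illustration of how these vectors combine (the same formula reappears in the vector addition example later), a sketch of a kernel in which each thread computes its globally unique 1D index:

// Sketch: each thread derives a globally unique index from the special vectors
__global__ void whoAmI(int *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = i;   // e.g. thread 5 of block 2 with blockDim.x == 256 writes out[517]
}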

12 Students arrive at halls of residence to check in, and rooms are allocated in order. Unfortunately admission rates are down: there are only half as many students as rooms. Each student can be moved from room i to room 2i so that no one has a neighbour. Analogy

13 The receptionist performs the following tasks: 1. asks each student their assigned room number; 2. works out their new room number; 3. informs them of their new room number. Serial Solution

14 “Everybody check your room number. Multiply it by 2 and go to that room” Parallel Solution

15 Motivation; Introduction to CUDA Kernels; CUDA Memory Management. Overview

16 Serial solution: for (int i = 0; i < N; i++) { result[i] = 2*i; } We can parallelise this by assigning each iteration to a CUDA thread! A Coded Example

17 __global__ void myKernel(int *result) { int i = threadIdx.x; result[i] = 2*i; } Replace the loop with a “kernel”. Use the __global__ specifier to indicate it is GPU code. Use the threadIdx dim3 variable to get a unique index (assuming for simplicity that we have only one block); this is equivalent to your door number at the CUDA Halls of Residence. CUDA C Example: Device

18 Call the kernel by using the CUDA kernel launch syntax: kernel<<<blocksPerGrid, threadsPerBlock>>>(arguments); dim3 blocksPerGrid(1,1,1); //use only one block dim3 threadsPerBlock(N,1,1); //use N threads in the block myKernel<<<blocksPerGrid, threadsPerBlock>>>(result); CUDA C Example: Host

19 Only one block will give poor performance: a block gets allocated to a single SM! Solution: use multiple blocks (the kernel must then combine blockIdx and threadIdx into a unique index, as in the sketch below). dim3 blocksPerGrid(N/256,1,1); // assumes 256 divides N exactly dim3 threadsPerBlock(256,1,1); //256 threads in the block myKernel<<<blocksPerGrid, threadsPerBlock>>>(result); CUDA C Example: Host
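As written on the previous slide, myKernel indexes with threadIdx.x alone, which is only unique within one block. A minimal sketch of the multi-block version (the full pattern appears in the vector addition example on the next slide):

// Sketch: myKernel revised for a multi-block launch
__global__ void myKernel(int *result) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // globally unique index
    result[i] = 2*i;
}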

20 //Kernel Code __global__ void vectorAdd(float *a, float *b, float *c) { int i = blockIdx.x * blockDim.x + threadIdx.x; c[i] = a[i] + b[i]; } //Host Code ... dim3 blocksPerGrid(N/256,1,1); //assuming 256 divides N exactly dim3 threadsPerBlock(256,1,1); vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(a, b, c); Vector Addition Example

21 //Device Code __global__ void matrixAdd(float a[N][N], float b[N][N], float c[N][N]) { int j = blockIdx.x * blockDim.x + threadIdx.x; int i = blockIdx.y * blockDim.y + threadIdx.y; c[i][j] = a[i][j] + b[i][j]; } //Host Code ... dim3 blocksPerGrid(N/16,N/16,1); // (N/16)x(N/16) blocks/grid (2D) dim3 threadsPerBlock(16,16,1); // 16x16=256 threads/block (2D) matrixAdd<<<blocksPerGrid, threadsPerBlock>>>(a, b, c); A 2D Matrix Addition Example
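The launch above assumes 16 divides N exactly. A common variant, sketched here as an assumption rather than taken from the slides, rounds the grid up and guards the out-of-range threads; the arrays are flattened to 1D indexing so the size can be a runtime parameter:

// Sketch: handle N not divisible by the block size with a bounds check
__global__ void matrixAddGuarded(float *a, float *b, float *c, int n) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < n && j < n)                       // ignore threads past the edge
        c[i*n + j] = a[i*n + j] + b[i*n + j];
}
//Host Code
dim3 blocksPerGrid((N+15)/16, (N+15)/16, 1);  // round up
dim3 threadsPerBlock(16, 16, 1);
matrixAddGuarded<<<blocksPerGrid, threadsPerBlock>>>(a, b, c, N);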

22 Motivation; Introduction to CUDA Kernels; CUDA Memory Management. Overview

23 The GPU has separate dedicated memory from the host CPU. Data accessed in kernels must be in GPU memory, and it must be explicitly copied and transferred. cudaMalloc() is used to allocate memory on the GPU; cudaFree() releases it: float *a; cudaMalloc(&a, N*sizeof(float)); ... cudaFree(a); Memory Management
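CUDA API calls return a cudaError_t status; checking it is good practice (a sketch not shown on the original slide; N is assumed defined as before):

// Sketch: check the status returned by a CUDA API call
float *a;
cudaError_t err = cudaMalloc(&a, N*sizeof(float));
if (err != cudaSuccess) {
    printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
}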

24 Once memory has been allocated we need to copy data to it and from it. cudaMemcpy() transfers memory from the host to the device and vice versa: cudaMemcpy(array_device, array_host, N*sizeof(float), cudaMemcpyHostToDevice); cudaMemcpy(array_host, array_device, N*sizeof(float), cudaMemcpyDeviceToHost); The first argument is always the destination of the transfer. Transfers are relatively slow and should be minimised where possible (a complete round-trip sketch follows below). Memory Copying
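Putting the previous slides together, a host-side round trip for the vector addition example might look like this (a sketch assuming the vectorAdd kernel from slide 20, host arrays a_h, b_h, c_h, and that 256 divides N):

// Sketch: allocate, copy in, launch, copy out, free
int n_bytes = N * sizeof(float);
float *a_d, *b_d, *c_d;                                 // device pointers
cudaMalloc(&a_d, n_bytes);
cudaMalloc(&b_d, n_bytes);
cudaMalloc(&c_d, n_bytes);
cudaMemcpy(a_d, a_h, n_bytes, cudaMemcpyHostToDevice);  // host -> device
cudaMemcpy(b_d, b_h, n_bytes, cudaMemcpyHostToDevice);
dim3 blocksPerGrid(N/256, 1, 1);
dim3 threadsPerBlock(256, 1, 1);
vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(a_d, b_d, c_d);
cudaMemcpy(c_h, c_d, n_bytes, cudaMemcpyDeviceToHost);  // device -> host
cudaFree(a_d); cudaFree(b_d); cudaFree(c_d);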

25 Kernel calls are non-blocking: the host continues after the kernel launch, overlapping CPU and GPU execution. cudaDeviceSynchronize() (the replacement for the deprecated cudaThreadSynchronize()) can be called from the host to block until GPU kernels have completed: vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(a, b, c); //do work on host (that doesn't depend on c) cudaDeviceSynchronize(); //wait for kernel to finish Standard cudaMemcpy calls are blocking, but non-blocking variants exist (see the sketch below). Synchronisation
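The non-blocking copy variant is cudaMemcpyAsync(), which takes an additional stream argument (the stream setup here is an assumption, not from the slide):

// Sketch: non-blocking copy on a CUDA stream
cudaStream_t stream;
cudaStreamCreate(&stream);
// Note: for the copy to truly overlap, array_host should be pinned memory
// (allocated with cudaMallocHost rather than malloc)
cudaMemcpyAsync(array_device, array_host, N*sizeof(float),
                cudaMemcpyHostToDevice, stream);
// ...do independent host work here...
cudaStreamSynchronize(stream);   // block until the copy completes
cudaStreamDestroy(stream);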

26 __syncthreads() can be used within a kernel to synchronise between threads in a block. Threads in the same block can therefore communicate using a shared memory space (see the sketch below): if (threadIdx.x == 0) array[0] = x; __syncthreads(); if (threadIdx.x == 1) x = array[0]; It is NOT possible to synchronise between threads in different blocks; a kernel exit does, however, guarantee synchronisation. Synchronisation Between Threads
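The shared memory space referred to above is declared with the __shared__ qualifier. A minimal sketch of the pattern on this slide written as a complete kernel (the kernel name and the out parameter are illustrative):

// Sketch: one thread writes shared memory, another reads it after the barrier
__global__ void shareValue(int *out, int x) {
    __shared__ int array[1];           // visible to all threads in the block
    if (threadIdx.x == 0) array[0] = x;
    __syncthreads();                   // every thread in the block waits here
    if (threadIdx.x == 1) out[0] = array[0];
}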

27 CUDA C code is compiled using nvcc, which compiles host AND device code to produce an executable, e.g. nvcc -o example example.cu Compiling a CUDA program

28 Traditional languages alone are not sufficient for programming GPUs. CUDA allows NVIDIA GPUs to be programmed using C/C++, defining language extensions and APIs to enable this. We introduced the key CUDA concepts and gave examples: kernels for the device, and the host API for memory management and kernel launching. Now let's try it out… Summary

