1
GPU Computing with CUDA
Dan Negrut Simulation-Based Engineering Lab Wisconsin Applied Computing Center Department of Mechanical Engineering Department of Electrical and Computer Engineering University of Wisconsin-Madison Milano 10-14 December 2012 © Dan Negrut, 2012 UW-Madison
2
Before We Get Started…
Goal, GPU segment: spend the next three days getting familiar with GPU computing using CUDA; understand whether you can put GPU computing and CUDA to good use.
Reaching this goal: cover some basics (one day) and more advanced features (a second day); talk about library support & productivity tools (the third day): thrust, cuda-gdb, nvvp.
3
HPC: Where Are We Today? [Info lifted from the Top500 website: http://www.top500.org]
4
Where Are We Today? [Cntd.]
Abbreviations/Nomenclature MPP – Massively Parallel Processing Constellation – subclass of cluster architecture envisioned to capitalize on data locality MIPS – “Microprocessor without Interlocked Pipeline Stages”, a chip design of the MIPS Computer Systems of Sunnyvale, California SPARC – “Scalable Processor Architecture” is a RISC instruction set architecture developed by Sun Microsystems (now Oracle) and introduced in mid-1987 Alpha - a 64-bit reduced instruction set computer (RISC) instruction set architecture developed by DEC (Digital Equipment Corporation was sold to Compaq, which was sold to HP) 4
5
Where Are We Today? [Cntd.]
How is the speed measured to put together the Top500 ranking? The benchmark basically reports how fast you can solve a dense linear system of equations (the LINPACK benchmark).
6
Some Trends… Consequence of Moore’s law
Transition from a speed-based compute paradigm to a concurrency-based compute paradigm.
The amount of power drawn by supercomputers is a showstopper.
Example: Exaflop/s rate – the goal is to reach it by 2018.
Budget constraints: must cost less than $200 million.
Power constraints: must draw less than 20 MW.
Putting things in perspective:
Japan’s fastest computer (Kei): 12.7 MW for 10.5 Petaflop/s
China’s fastest supercomputer (Tianhe-1A): 4.0 MW for 2.6 Petaflop/s
US fastest supercomputer (Oak Ridge’s Jaguar): 8.3 MW for 17.6 Petaflop/s
Faster machine for less power: the advantage of GPU computing
7
GPU Computing is Power Efficient
One Kepler card: Gflop/W Japan’s Kei: Gflops/W China’s Tianhe-1A: Gflops/W USA’s Jaguar: Gflops/W Best HPC cluster performance - IBM's NNSA/SC Blue Gene/Q Prototype 2: GFlops/W Currently the world's 109th-fastest supercomputer If we are to reach exascale by 2018: 5-50 Gflops/W
8
Why GPU Computing? It’s fast for a variety of jobs
Really good for data parallelism (which requires SIMD); however, not impressive for task parallelism (which requires MIMD). It’s cheap to get one ($120 to $500); high-end GPUs for scientific computing are more like $1500. GPUs are everywhere, so there is an incentive to produce software since there are many potential users of it: more than 300 million NVIDIA CUDA-enabled cards. NOTE: GPU computing is not quite High Performance Computing (HPC); however, it shares with HPC the aspect that they both draw on parallel programming. OpenGL v1.3+, DirectX v9+ or AMD’s Close to Metal.
9
IBM BlueGene/L 445-teraflops Blue Gene/P, Argonne National Lab 9
Entry model: 1024 dual core nodes 5.7 Tflop/s Linux OS Dedicated power management solution Dedicated IT support Price (2007): $1.4 million 9
10
Euler: Heterogeneous Cluster Used in This Tutorial
University of Wisconsin-Madison
11
University of Wisconsin-Madison
Euler, Quick Overview More than 25,000 GPU scalar processors Can manage about 75,000 GPU parallel threads at full capacity More than 1000 CPU cores Mellanox InfiniBand interconnect, 40 Gb/s About 2.7 TB of RAM More than 20 Tflops DP … The issue is not hardware availability; rather, it is producing modeling and solution techniques that can leverage this hardware. University of Wisconsin-Madison
12
Amdahl's Law Excerpt from “Validity of the single processor approach to achieving large scale computing capabilities,” by Gene M. Amdahl, in Proceedings of the “AFIPS Spring Joint Computer Conference,” pp. 483, 1967 “A fairly obvious conclusion which can be drawn at this point is that the effort expended on achieving high parallel processing rates is wasted unless it is accompanied by achievements in sequential processing rates of very nearly the same magnitude” 12
13
Amdahl’s Law [Cntd.] Sometimes called the law of diminishing returns
In the context of parallel computing, it is used to illustrate the overall speedup you get when you go parallel with only a part of your code. The art is to find, for the same problem, an algorithm that has a large rp (the fraction of the work that can be parallelized); sometimes this requires a completely different angle of approach for a solution. Nomenclature: algorithms for which rp = 1 are called “embarrassingly parallel”. The standard form of the law is given below.
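For reference, the formula the discussion relies on (not reproduced in this transcript) is the standard statement of Amdahl's law: if rp is the fraction of the execution time that can be parallelized and p is the number of processors, the overall speedup is

    S(p) = 1 / ( (1 - rp) + rp/p )

As p grows, S(p) approaches 1/(1 - rp): the sequential fraction ultimately caps the speedup, which is the "diminishing returns" referred to above.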
14
Example: Amdahl's Law. Suppose that a program spends 60% of its time in I/O operations, pre- and post-processing. The remaining 40% is spent on computation, most of which can be parallelized. Assume that you buy a multicore chip and can throw 6 parallel threads at this problem. What is the maximum amount of speedup that you can expect given this investment? Asymptotically, what is the maximum speedup that you can ever hope for? [DN] Answer: 1.5X, since (60 + 40)/(60 + 40/6) = 100/66.67 = 1.5. Asymptotically: 100/60 = 5/3 = 1.666X
15
Computational Science and Engineering
Old School: Increasing clock frequency is the primary method of performance improvement.
New School: Don’t count on frequency increases as the main driver of your performance improvement.
Old School: Don’t bother parallelizing an application; parallel computing is odd and expensive.
New School: Nobody builds one-core processors anymore. Processor parallelism is the de-facto method for performance improvement.
Old School: Less than linear scaling for a multiprocessor is failure.
New School: Given the switch to parallel hardware, even sub-linear speedups are beneficial as long as you beat the sequential code.
16
A Word on “Scaling” [important to understand]
Algorithmic Scaling of a solution algorithm
You only have a mathematical solution algorithm at this point. Refers to how the effort required by the solution algorithm scales with the size of the problem.
Examples: a naïve implementation of the N-body problem scales like O(N^2), where N is the number of bodies; sophisticated algorithms scale like O(N log N); Gauss elimination scales like the cube of the number of unknowns in your linear system.
Implementation Scaling on a certain architecture
Intrinsic Scaling: how the wall-clock run time changes with an increase in the size of the problem.
Strong Scaling: how the wall-clock run time changes when you increase the processing resources.
Weak Scaling: how the wall-clock run time changes when you increase the problem size and the processing resources accordingly.
Relative relevance: strong and intrinsic scaling are more relevant than weak scaling.
A thing you should worry about: is the Intrinsic Scaling similar to the Algorithmic Scaling? If the Intrinsic Scaling is significantly worse than the Algorithmic Scaling, you might have an algorithm that thrashes the memory badly, or a sloppy implementation of the algorithm.
17
Layout of Typical Hardware Architecture
CPU (the “host”) and GPU w/ local DRAM (the “device”).

In PCIe 1.1 (currently the most common version) each lane sends information at a rate of 250 MB/s (250 million bytes per second) in each direction. PCIe 2.0, introduced in late 2007, doubles this data rate and is found on newer systems such as those based on the Intel X38 or AMD 780G chipsets. The latest proposed PCIe 3.0 standard will increase the speed of the links further (tentatively scheduled for release around 2010).

Each PCIe slot carries either one, two, four, eight, sixteen or thirty-two lanes of data between the motherboard and the add-in card. Lane counts are written with an “x” prefix, e.g. x1 for a single-lane card and x16 for a sixteen-lane card. Thirty-two lanes of 250 MB/s (PCIe 1.1) gives a maximum transfer rate of 8 GB/s (250 MB/s x 32, i.e., 8 billion bytes per second) in each direction. However, the largest size in common use for PCIe 1.1 is x16, giving a transfer rate of 4 GB/s (250 MB/s x 16) in each direction.

Putting this into perspective, a single lane for PCIe 1.1 has nearly twice the data rate of normal PCI, a four-lane slot has a transfer rate comparable to the fastest version of the old parallel PCI-X 1.0, and an eight-lane slot has a transfer rate comparable to the fastest version of AGP. However, the data rates cited must be derated because 8b/10b coding is used in the physical layer; the link transfer speeds cited are maximum theoretical data rates.

PCIe slots come in a variety of physically different sizes referred to by the maximum lane count they support, i.e. x1, x2, x4, x8, x16 and x32. A PCIe card will fit into a slot of its size or bigger, but not into a smaller PCIe slot.

[Source: Wikipedia]
18
Bandwidth in a CPU-GPU System
Robert Strzodka, Max Planck Institute
19
Key Parameters: GPU vs. CPU

Parameter                               | GPU – NVIDIA Tesla C2050 | CPU – Intel Core i7 975 Extreme
Processing cores                        | 448                      | 4 (8 threads)
Memory                                  | 3 GB                     | 32 KB L1 cache/core; 256 KB L2 (I&D) cache/core; 8 MB L3 (I&D) shared by all cores
Clock speed                             | 1.15 GHz                 | 3.20 GHz
Memory bandwidth                        | 140 GB/s                 | 25.6 GB/s
Floating point ops/s (double precision) | 515 x 10^9               | 70 x 10^9
20
GPU vs. CPU – Memory Bandwidth [GB/sec]
21
CPU vs. GPU – Flop Rate (GFlops)
[Chart: GFlop/s over time, single precision and double precision, CPU vs. GPU]
22
More Up-to-Date, DP Figures…
Source: Revolutionizing High Performance Computing / Nvidia Tesla
23
Why is the GPU so Fast? The GPU is specialized for compute-intensive, highly data-parallel computation (owing to its graphics rendering origin). More transistors can be devoted to data processing rather than data caching and control flow. Where GPUs are good: high arithmetic intensity (the ratio between arithmetic operations and memory operations). The fast-growing video game industry exerts strong economic pressure that forces constant innovation. [Figure: CPU die devotes much of its area to control logic and cache; GPU die devotes it to ALUs]
24
CUDA: Making the GPU Tick…
“Compute Unified Device Architecture” – freely distributed by NVIDIA. It enables a general purpose programming model: the user kicks off batches of threads on the GPU to execute a function (kernel). Targeted software stack: scientific computing oriented drivers, language, and tools. Driver for loading computation programs onto the GPU: standalone driver, optimized for computation; interface designed for compute (graphics-free API); explicit GPU memory management.
25
CUDA Programming Model: A Highly Multithreaded Coprocessor
The GPU is viewed as a compute device that: Is a co-processor to the CPU or host Has its own DRAM (device memory, or global memory in CUDA parlance) Runs many threads in parallel Data-parallel portions of an application run on the device as kernels which are executed in parallel by many threads Differences between GPU and CPU threads GPU threads are extremely lightweight Very little creation overhead GPU needs 1000s of threads for full efficiency Multi-core CPU needs only a few heavy ones 25 HK-UIUC
26
Next Two Slides Are Important
27
GPU: Underlying Hardware
The NVIDIA nomenclature used below is reminiscent of the GPU’s graphics mission. The hardware is organized as follows: one Stream Processor Array (SPA)… has a collection of Texture Processor Clusters (TPC; ten of them on the C1060)… and each TPC has three Stream Multiprocessors (SM)… and each SM is made up of eight Stream (or Scalar) Processors (SP).
28
NVIDIA TESLA C1060 240 Scalar Processors 4 GB device memory
Memory Bandwidth: 102 GB/s Clock Rate: 1.3GHz Approx. $1,250 The most important component of a GPU is the SM (Stream Multiprocessor) It is the quantum of scalability 28
29
Compute Capability [of a Device] vs. CUDA Version
“Compute Capability of a Device” refers to hardware. It is defined by a major revision number and a minor revision number. Example: Newton’s Tesla C1060 is compute capability 1.3; the Tesla C2050 is compute capability 2.0. The major revision number goes up to 3 (Kepler architecture). The minor revision number indicates incremental changes within an architecture class. A higher compute capability indicates a more capable piece of hardware. The “CUDA Version” indicates what version of the software you are using to run on the hardware; right now, the most recent version of CUDA is 5.0. The best setup: you run the most recent CUDA (version 5.0) software release and you use the most recent architecture (compute capability 3.0).
30
Number of Multiprocessors
NVIDIA CUDA Devices CUDA-Enabled Devices with Compute Capability, Number of Multiprocessors, and Number of CUDA Cores Card Compute Capability Number of Multiprocessors Number of CUDA Cores GTX 690 3.0 2x8 2x1536 GTX 680 8 1536 GTX 670 2.1 7 1344 GTX 590 2x16 2x512 GTX 560TI 384 GTX 460 336 GTX 470M 6 288 GTS 450, GTX 460M 4 192 GT 445M 3 144 GT 435M, GT 425M, GT 420M 2 96 GT 415M 1 48 GTX 490 2.0 2x15 2x480 GTX 580 16 512 GTX 570, GTX 480 15 480 GTX 470 14 448 GTX 465, GTX 480M 11 352 GTX 295 1.3 2x30 2x240 GTX 285, GTX 280, GTX 275 30 240 GTX 260 24 9800 GX2 1.1 2x128 GTS 250, GTS 150, 9800 GTX, 9800 GTX+, 8800 GTS 512, GTX 285M, GTX 280M 128 8800 Ultra, 8800 GTX 1.0 9800 GT, 8800 GT 112 30
31
The CUDA Execution Model
32
GPU Computing – The Basic Idea
The GPU is linked to the CPU by a reasonably fast connection. The idea is to use the GPU as a co-processor: farm out big parallel tasks to the GPU and keep the CPU busy with the control of the execution and “corner” tasks. [Figure: graphics pipeline stages – vertex shader, geometry shader, pixel shader]
33
The CUDA Way: Extended C
Declaration specifications: global, device, shared, local, constant
Keywords: threadIdx, blockIdx
Intrinsics: __syncthreads
Runtime API: for memory and execution management
Kernel launch syntax

__device__ float filter[N];

__global__ void convolve (float *image)
{
    __shared__ float region[M];
    ...
    region[threadIdx.x] = image[i];
    __syncthreads();
    ...
    image[j] = result;
}

// Allocate GPU memory
float *myimage;
cudaMalloc((void**)&myimage, bytes);

// 100 blocks, 10 threads per block
convolve<<<100, 10>>> (myimage);

HK-UIUC
34
Example: Hello World! Note the “cu” suffix
int main(void) { printf("Hello World!\n"); return 0; } Note the “cu” suffix Output, on Euler: $ nvcc hello_world.cu $ a.out Hello World! $ Standard C that runs on the host NVIDIA compiler (nvcc) can be used to compile programs with no device code [NVIDIA]→
35
Compiling CUDA Source files with CUDA language extensions must be compiled with nvcc You spot such a file by its .cu suffix Example: >> nvcc -arch=sm_20 foo.cu Actually, nvcc is a compile driver Works by invoking all the necessary tools and compilers like g++, cl, ... nvcc can output: C code Must then be compiled with the rest of the application using another tool ptx code (CUDA’s ISA) Or directly object code (cubin) 35
36
Hello World! with Device Code
__global__ void mykernel(void) { }

int main(void)
{
    mykernel<<<1,1>>>();
    printf("Hello World!\n");
    return 0;
}

Two new syntactic elements… [NVIDIA]→
37
Hello World! with Device Code
__global__ void mykernel(void) { } CUDA C/C++ keyword __global__ indicates a function that: Runs on the device Is called from host code nvcc separates source code into host and device components Device functions, e.g. mykernel(), processed by NVIDIA compiler Host functions, e.g. main(), processed by standard host compiler gcc, cl.exe [NVIDIA]→
38
Hello World! with Device Code
mykernel<<<1,1>>>(); Triple angle brackets mark a call from host code to device code Also called a “kernel launch” NOTE: we’ll return to the parameters (1,1) soon That’s all that is required to execute a function on the GPU… [NVIDIA]→
39
Hello World! with Device Code
__global__ void mykernel(void) { }

int main(void)
{
    mykernel<<<1,1>>>();
    printf("Hello World!\n");
    return 0;
}

Output, on Euler:
$ nvcc hello.cu
$ a.out
Hello World!
$

Actually, mykernel() does not do anything yet... [NVIDIA]→
40
30,000 Feet Perspective. This is how the code gets executed on the hardware in heterogeneous computing; GPU calls are asynchronous… This is what your C code looks like.
41
Languages Supported in CUDA
Note that everything is done in C, yet minor extensions are needed to flag the fact that a function actually represents a kernel, that there are functions that will only run on the device, etc. You end up working in “C with extensions”. FORTRAN is supported, though we’ll not cover it here. There is support for C++ programming (operator overloading, new/delete, etc.), although it is not fully supported yet.
42
CUDA Function Declarations (the “C with extensions” part)
                                 | Executed on the: | Only callable from the:
__device__ float myDeviceFunc()  | device           | device
__global__ void myKernelFunc()   | device           | host
__host__ float myHostFunc()      | host             | host

__global__ defines a kernel function, launched by the host and executed on the device; it must return void. For a full list, see the CUDA Reference Manual.
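A minimal illustration of how these qualifiers combine (the function names below are made up for the example, not taken from the slides):

__device__ float devTwice(float x) { return 2.0f * x; }          // runs on the device, callable only from device code
__host__ __device__ float addOne(float x) { return x + 1.0f; }   // compiled for both host and device
__global__ void myKernelFunc(float* a)                           // kernel: launched from the host, must return void
{
    a[threadIdx.x] = addOne(devTwice(a[threadIdx.x]));
}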
43
The Concept of Execution Configuration
A kernel function must be called with an execution configuration: __global__ void kernelFoo(...); // declaration dim3 DimGrid(100, 50); // 5000 thread blocks dim3 DimBlock(4, 8, 8); // 256 threads per block kernelFoo<<< DimGrid, DimBlock>>>(...your arg list comes here…); Any call to a kernel function is asynchronous By default, execution on host doesn’t wait for kernel to finish 43
44
Example The host call below instructs the GPU to execute the function (kernel) “foo” using 25,600 threads Two arguments are passed down to each thread executing the kernel “foo” In this execution configuration, the host instructs the device that it is supposed to run 100 blocks each having 256 threads in it The concept of block is important since it represents the entity that gets executed by an SM (stream multiprocessor) 44
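The launch line the slide refers to is not reproduced in this transcript; it would look along the following lines (foo, arg1, and arg2 are placeholder names):

dim3 dimGrid(100);       // 100 thread blocks
dim3 dimBlock(256);      // 256 threads per block
foo<<<dimGrid, dimBlock>>>(arg1, arg2);   // 100 x 256 = 25,600 threads execute the kernel foo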
45
More on the Execution Model [Some Constraints]
There is a limitation on the number of blocks in a grid: the grid of blocks can be organized as a 3D structure, with a maximum of 65,535 by 65,535 by 65,535 blocks (about 280,000 billion blocks). Threads in each block: the threads can be organized as a 3D structure (x,y,z); the total number of threads in each block cannot be larger than 1024.
46
Block and Thread Index (Idx)
Threads and blocks have indices, used by each thread to decide what data to work on. Block Index: a pair of uint. Thread Index: a triplet of uint. Why this 3D layout? It simplifies memory addressing when processing multidimensional data: handling matrices, solving PDEs on subdomains, etc. [Figure: a grid of thread blocks, with Block (1,1) expanded into its arrangement of threads. Courtesy: NVIDIA]
47
A Couple of Built-In Variables [Critical in supporting the SIMD parallel computing paradigm]
It’s essential for each thread to be able to find out the grid and block dimensions and the block and thread indices Each thread when executing a *device* function has access to the following built-in variables threadIdx (uint3) – contains the thread index within a block blockDim (dim3) – contains the dimension of the block blockIdx (uint3) – contains the block index within the grid gridDim (dim3) – contains the dimension of the grid [ warpSize (uint) – provides warp size, we’ll talk about this later… ] 47
48
Thread Index vs. Thread ID [critical in understanding how SIMD is supported in CUDA & understanding the concept of “warp”] 48
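The body of this slide (an image) is not in the transcript; the mapping it refers to is the standard one from the CUDA Programming Guide. For a block of dimension (Dx, Dy, Dz), the thread with index (x, y, z) has thread ID

    x + y * Dx + z * Dx * Dy

so threads with consecutive threadIdx.x values have consecutive thread IDs, and warps are formed from threads with consecutive IDs.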
49
A Recurring Theme in CUDA Programming [and in SIMD in general]
Imagine you are one of many threads, and you have your thread index and block index. You need to figure out what work you need to do (just like on the previous slide, where thread 5 in block 2 mapped into entry 21). You also have to make sure you actually need to do that work: in many cases there are threads, typically of large index, that need to do no work. Example: you launch two blocks with 512 threads each but your array is only 1000 elements long; then the last 24 threads should do nothing (see the sketch below).
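A minimal sketch of this pattern (the kernel and array names here are illustrative, not from the slides): each thread computes its global index and checks it against the problem size before doing any work.

__global__ void scaleArray(float* a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)        // threads whose index falls past the end of the array do nothing
        a[i] *= 2.0f;
}

// For the example above: 2 blocks x 512 threads for an array of 1000 elements;
// the last 24 threads fail the check and simply return.
// scaleArray<<<2, 512>>>(dev_a, 1000);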
50
CUDA, Simple Example
51
Review - Execution Configuration: Grids and Blocks
[Figure: the host launches Kernel 1 as Grid 1 and Kernel 2 as Grid 2; each grid is a 2D arrangement of blocks, and each block a 2D arrangement of threads]
A kernel is executed as a grid of blocks of threads. All threads in a kernel can access several device data memory spaces. A block [of threads] is a batch of threads that can cooperate with each other by: synchronizing their execution; efficiently sharing data through a low-latency shared memory.
Exercise: How was the grid defined for this picture? I.e., how many blocks in the X and Y directions? How was a block defined in this picture? Block: (3,2). Thread: (5,3). [NVIDIA]→
52
Example: Adding Two Matrices
You have two matrices A and B of dimension N×N (N=32). You want to compute C = A + B in parallel. Code is provided below (some details omitted, such as #define N 32).
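The code itself did not survive in this transcript; below is a minimal sketch in the spirit of the CUDA Programming Guide’s matrix-addition example (one thread per element, a single N x N block; the kernel name and indexing convention are illustrative). Whether to write C[i][j] or C[j][i] is exactly the question raised on the next slide.

#define N 32

__global__ void MatrixAdd(float A[N][N], float B[N][N], float C[N][N])
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    C[i][j] = A[i][j] + B[i][j];
}

// Host side: one block of N x N threads; A, B, C are device pointers
// dim3 dimBlock(N, N);
// MatrixAdd<<<1, dimBlock>>>(A, B, C);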
53
Something to think about…
Given that the device operates with groups of threads of consecutive ID, and given the scheme a few slides ago for computing a thread ID based on the thread & block index, is the array indexing scheme on the previous slide good or bad? The “good or bad” refers to how data is accessed in the device’s global memory. In other words, should we have

C[i][j] = A[i][j] + B[i][j]

or

C[j][i] = A[j][i] + B[j][i]

Answer: C[j][i] = A[j][i] + B[j][i]
54
Example: Array Indexing
Purpose of Example: see a scenario of how multiple blocks are used to index entries in an array Recall that there is a limit on the number of threads you can have in a block In the vast majority of applications you need to use many blocks, each containing the same number of threads
55
Example: Array Indexing
[Important to Grasp] No longer as simple as using only threadIdx.x. Consider indexing into an array, one thread accessing one element. Assume you have M = 8 threads per block and the array has 32 entries. [Figure: the 32 entries split across four blocks, blockIdx.x = 0, 1, 2, 3, with threadIdx.x running 0–7 within each block]
This is identical to finding the offset in 1-dimensional storage of a 2-dimensional matrix: int index = x + width * y;
With M threads per block, a unique index for each thread is given by:
int index = threadIdx.x + blockIdx.x * M;
[NVIDIA]→
56
Example: Array Indexing
What will be the array entry that the thread of index 5 in the block of index 2 works on?
M = 8, threadIdx.x = 5, blockIdx.x = 2
int index = threadIdx.x + blockIdx.x * M
          = 5 + 2 * 8
          = 21
[Figure: the 32-entry array (entries 0–31), with entry 21 highlighted] [NVIDIA]→
57
Example: Timing Your Application
Timing support is part of the CUDA API; you pick it up as soon as you include <cuda.h>. Why it is good to use: it provides cross-platform compatibility, and it deals with the asynchronous nature of the device calls by relying on events and forced synchronization. Reports time in milliseconds, accurate to within about 0.5 microseconds. From the NVIDIA CUDA Library documentation: Computes the elapsed time between two events (in milliseconds with a resolution of around 0.5 microseconds). If either event has not been recorded yet, this function returns cudaErrorInvalidValue. If either event has been recorded with a non-zero stream, the result is undefined.
58
Timing Example ~ Timing a query of device 0 properties ~
#include <iostream>
#include <cstdio>
#include <cuda.h>

int main()
{
    cudaEvent_t startEvent, stopEvent;
    cudaEventCreate(&startEvent);
    cudaEventCreate(&stopEvent);

    cudaEventRecord(startEvent, 0);

    cudaDeviceProp deviceProp;
    const int currentDevice = 0;
    if (cudaGetDeviceProperties(&deviceProp, currentDevice) == cudaSuccess)
        printf("Device %d: %s\n", currentDevice, deviceProp.name);

    cudaEventRecord(stopEvent, 0);
    cudaEventSynchronize(stopEvent);

    float elapsedTime;
    cudaEventElapsedTime(&elapsedTime, startEvent, stopEvent);
    std::cout << "Time to get device properties: " << elapsedTime << " ms\n";

    cudaEventDestroy(startEvent);
    cudaEventDestroy(stopEvent);
    return 0;
}
59
The CUDA API
60
What Is an API? Application Programming Interface (API)
A set of functions, procedures, or classes that an operating system, library, or service provides to support requests made by computer programs (from Wikipedia). Example: OpenGL, a graphics library, has its own API that allows one to draw a line, rotate it, resize it, etc. In this context, CUDA provides an API that enables you to tap into the computational resources of NVIDIA’s GPUs. This is what replaced the old GPGPU way of programming the hardware. The CUDA API is exposed to you (the user) through a collection of header files.
61
Talking about the API: The C CUDA Software Stack
The image at right indicates where the API fits in the picture; an API layer is indicated by a thick red line. NOTE: any CUDA runtime function has a name that starts with “cuda”. Examples: cudaMalloc, cudaFree, cudaMemcpy, etc. Examples of CUDA libraries: CUFFT, CUBLAS, CUSP, thrust, etc.
62
CUDA API: Device Memory Allocation [Note: picture assumes two blocks, each with two threads]
cudaMalloc(): allocates an object in the device Global Memory. Requires two parameters: the address of a pointer to the allocated object, and the size of the allocated object.
cudaFree(): frees an object from the device Global Memory. Takes the pointer to the freed object.
[Figure: CUDA memory model – per-thread registers and local memory, per-block shared memory, and device-wide global, constant, and texture memory accessible from the host]
HK-UIUC
63
Example Use: A Matrix Data Type
NOT part of the CUDA API, but discussed here since it is used in several code examples. The new type abstracts the following concept: a 2D matrix of single-precision float elements with width * height entries; the matrix entries are attached to the pointer-to-float member called “elements”; the matrix is stored row-wise.

typedef struct {
    int width;
    int height;
    float* elements;
} Matrix;
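Since the matrix is stored row-wise, element (row, col) lives at offset row * width + col in the elements array. A small illustrative accessor (not part of the original code):

float getElement(const Matrix M, int row, int col)
{
    return M.elements[row * M.width + col];
}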
64
Example CUDA Device Memory Allocation (cont.)
Code example: allocate a 32 * 32 single-precision float array and attach the allocated storage to Md.elements (“d” in “Md” is often used to indicate a device data structure). Question: why did they design the cudaMalloc prototype to look like this: cudaMalloc(void** devPtr, size_t nbytes)?

BLOCK_SIZE = 32;
Matrix Md;
int size = BLOCK_SIZE * BLOCK_SIZE * sizeof(float);

cudaMalloc((void**)&Md.elements, size);
… // use it for what you need, then free the device memory
cudaFree(Md.elements);

HK-UIUC
65
CUDA Host-Device Data Transfer
[Figure: CUDA memory model, as on the cudaMalloc slide]
cudaMemcpy(): memory data transfer. Requires four parameters: pointer to destination, pointer to source, number of bytes copied, and the type of transfer: Host to Host, Host to Device, Device to Host, or Device to Device.
HK-UIUC
66
CUDA Host-Device Data Transfer (cont.)
Code example: Transfer a 32 * 32 single precision float array M is in host memory and Md is in device memory cudaMemcpyHostToDevice and cudaMemcpyDeviceToHost are symbolic constants cudaMemcpy(Md.elements, M.elements, size, cudaMemcpyHostToDevice); cudaMemcpy(M.elements, Md.elements, size, cudaMemcpyDeviceToHost); 66 HK-UIUC
67
Simple Example: Matrix Multiplication
A straightforward matrix multiplication example that illustrates the basic features of memory and thread management in CUDA programs Use only global memory (don’t bring shared memory into picture yet) Concentrate on Thread ID usage Memory data transfer API between host and device Assume all matrices are square, of dimension WIDTH=32 67 HK-UIUC
68
Square Matrix Multiplication Example
Compute P = M * N. The matrices P, M, N are of size WIDTH x WIDTH. Software design decisions: one thread handles one element of P; each thread accesses all the entries in one row of M and one column of N, i.e., 2*WIDTH read accesses and one write access to global memory. [Figure: matrices M, N, P, each WIDTH x WIDTH]
69
Multiply Using One Thread Block
One block of threads computes matrix P; each thread computes one element of P. Each thread: loads a row of matrix M, loads a column of matrix N, and performs one multiply and one addition for each pair of M and N elements. The compute to off-chip memory access ratio is close to 1:1 – not that good, but acceptable for now. The size of the matrix is limited by the number of threads allowed in a thread block. [Figure: Grid 1, Block 1, thread (2,2) computing one element of P] HK-UIUC
70
Matrix Multiplication: Traditional Approach, Coded in C
// Matrix multiplication on the (CPU) host in double precision
void MatrixMulOnHost(const Matrix M, const Matrix N, Matrix P)
{
    for (int i = 0; i < M.height; ++i) {
        for (int j = 0; j < N.width; ++j) {
            double sum = 0;
            for (int k = 0; k < M.width; ++k) {
                double a = M.elements[i * M.width + k]; // march along a row of M
                double b = N.elements[k * N.width + j]; // march along a column of N
                sum += a * b;
            }
            P.elements[i * N.width + j] = sum;
        }
    }
}
71
Step 1: Matrix Multiplication, Host-side. Main Program Code
int main(void) { // Allocate and initialize the matrices. // The last argument in AllocateMatrix: should an initialization with // random numbers be done? Yes: 1. No: 0 (everything is set to zero) Matrix M = AllocateMatrix(WIDTH, WIDTH, 1); Matrix N = AllocateMatrix(WIDTH, WIDTH, 1); Matrix P = AllocateMatrix(WIDTH, WIDTH, 0); // M * N on the device MatrixMulOnDevice(M, N, P); // Free matrices FreeMatrix(M); FreeMatrix(N); FreeMatrix(P); return 0; } 71 HK-UIUC
72
Step 2: Matrix Multiplication [host-side code]
void MatrixMulOnDevice(const Matrix& M, const Matrix& N, Matrix& P) { // Load M and N to the device Matrix Md = AllocateDeviceMatrix(M); CopyToDeviceMatrix(Md, M); Matrix Nd = AllocateDeviceMatrix(N); CopyToDeviceMatrix(Nd, N); // Allocate P on the device Matrix Pd = AllocateDeviceMatrix(P); // Setup the execution configuration dim3 dimGrid(1, 1, 1); dim3 dimBlock(WIDTH, WIDTH); // Launch the kernel on the device MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd); // Read P from the device CopyFromDeviceMatrix(P, Pd); // Free device matrices FreeDeviceMatrix(Md); FreeDeviceMatrix(Nd); FreeDeviceMatrix(Pd); } Continue here… 72 HK-UIUC
73
Step 4: Matrix Multiplication- Device-side Kernel Function
// Matrix multiplication kernel – thread specification
__global__ void MatrixMulKernel(Matrix M, Matrix N, Matrix P)
{
    // 2D thread index; computing P[ty][tx]…
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // Pvalue will end up storing the value of P[ty][tx].
    // That is, P.elements[ty * P.width + tx] = Pvalue
    float Pvalue = 0;

    for (int k = 0; k < M.width; ++k) {
        float Melement = M.elements[ty * M.width + k];
        float Nelement = N.elements[k * N.width + tx];
        Pvalue += Melement * Nelement;
    }

    // Write the result to device memory; each thread writes one element
    P.elements[ty * P.width + tx] = Pvalue;
}
74
Step 4: Some Loose Ends

// Allocate a device matrix of same size as M.
Matrix AllocateDeviceMatrix(const Matrix& M)
{
    Matrix Mdevice = M;
    int size = M.width * M.height * sizeof(float);
    cudaMalloc((void**)&Mdevice.elements, size);
    return Mdevice;
}

// Copy a host matrix to a device matrix.
void CopyToDeviceMatrix(Matrix Mdevice, const Matrix Mhost)
{
    int size = Mhost.width * Mhost.height * sizeof(float);
    cudaMemcpy(Mdevice.elements, Mhost.elements, size, cudaMemcpyHostToDevice);
}

// Copy a device matrix to a host matrix.
void CopyFromDeviceMatrix(Matrix Mhost, const Matrix Mdevice)
{
    int size = Mdevice.width * Mdevice.height * sizeof(float);
    cudaMemcpy(Mhost.elements, Mdevice.elements, size, cudaMemcpyDeviceToHost);
}

// Free a device matrix.
void FreeDeviceMatrix(Matrix M)
{
    cudaFree(M.elements);
}

// Free a host matrix.
void FreeMatrix(Matrix M)
{
    free(M.elements);
}

HK-UIUC
75
Application Programming Interface (API) ~ Wrapping it Up ~
CUDA runtime API: exposes a set of extensions to the C language It consists of: Language extensions To target portions of the code for execution on the device A runtime library, which is split into: A common component providing built-in vector types and a subset of the C runtime library available in both host and device codes Callable both from device and host A host component to control and access devices from the host Callable from the host only A device component providing device-specific functions Callable from the device only 75
78
Overview of Large Multiprocessor Hardware Configurations
For the microprocessor this is measured in MIPS (million instructions per second). Many reported IPS values have represented “peak” execution rates on artificial instruction sequences with few branches, whereas realistic workloads typically lead to significantly lower IPS values. [Figure: Euler placed within the taxonomy of large multiprocessor configurations. Courtesy of Elsevier, Computer Architecture, Hennessy and Patterson, fourth edition]
79
Parallel Computing on a GPU
NVIDIA GPU Computing Architecture, via a separate HW interface; in laptops, desktops, workstations, servers. The Tesla C2050 delivers 0.515 Tflops in double precision. The multithreaded SIMT model uses application data parallelism and thread parallelism. Programmable in C with the CUDA tools (“Extended C”). [Pictures: Tesla C2050, Tesla C1060]