
Graphical Processing Units and CUDA Lecture for CPSC 5155 Edward Bosworth, Ph.D. Computer Science Department Columbus State University

The Graphics Coprocessor From the earliest VGA designs, the graphics unit has been built as a special-purpose processor attached to the CPU over a high-speed I/O-style link. A GPU need not implement many of the instructions a CPU requires, which simplifies its design. A modern NVIDIA-based graphics system typically includes a high-performance dual-processor main CPU, a few GB of main memory, a high-end disk drive, and one or more graphics cards.

Graphics in the System (from Chapter 7 — Multicores, Multiprocessors, and Clusters)

Why Does Graphics Hardware Exist? (CS248 Lecture 14, Kurt Akeley, Fall 2007) Special-purpose hardware tends to disappear over time:
- Lisp machines and CAD workstations of the 1980s
- CISC CPUs
Examples: the iAPX 432 (circa 1982) and Symbolics Lisp Machines (circa 1984).

GPU Architectures (from Chapter 7 — Multicores, Multiprocessors, and Clusters)
- Processing is highly data-parallel:
  - GPUs are highly multithreaded.
  - They use thread switching to hide memory latency, with less reliance on multi-level caches.
  - Graphics memory is wide and high-bandwidth.
- Trend toward general-purpose GPUs:
  - Heterogeneous CPU/GPU systems.
  - CPU for sequential code, GPU for parallel code.
- Programming languages/APIs:
  - DirectX, OpenGL
  - C for Graphics (Cg), High Level Shader Language (HLSL)
  - Compute Unified Device Architecture (CUDA)

Why Does Graphics Hardware Exist? (CS248 Lecture 14, Kurt Akeley, Fall 2007) Graphics acceleration has been around for 40 years. Why do GPUs remain? A confluence of four things:
- Performance differentiation: GPUs are much faster than CPUs at 3-D rendering tasks.
- Work-load sufficiency: the accelerated 3-D rendering tasks make up a significant portion of the overall processing, so Amdahl's law does not limit the resulting performance increase.
- Strong market demand: customer demand for 3-D graphics performance is strong, driven by the games market.
- Ubiquity: with the help of standardized APIs/architectures (OpenGL and Direct3D), GPUs have achieved ubiquity in the PC market. Inertia now works in favor of continued graphics hardware.

GPU and GPGPU A GPU is a graphics processing unit. GPUs were originally designed as accelerator chips that relieve the CPU of graphics work, driven by the demand for better computer graphics performance. General-Purpose GPU (GPGPU) programming refers to the now-common case where the GPU is used to accelerate other (non-graphical) calculations.

GPU Evolution (1)
- VGA (Video Graphics Array) controllers: originally a memory controller and display generator connected to DRAM.
- Variations in the 1990s added more functionality.
- Later in the 1990s, 3D accelerator functions appeared: triangle setup and rasterization, texture mapping and shading (decals).
- The term GPU was coined circa 2000, when the typical graphics chip already performed most of the standard graphics pipeline operations.

GPU Evolution (2)
- Programmable processor cores replaced fixed dedicated logic.
- GPUs became massively parallel processors: floating point and (more recently) double precision; hundreds of cores, thousands of threads.
- GPUs have recently become programmable in, e.g., C/C++ and variants such as CUDA and OpenCL.

Origin of CUDA The Compute Unified Device Architecture, developed by NVIDIA Corporation, arose from a series of experiments in the early 2000s. Graphics processors were becoming very fast, and it was discovered that many numerical simulation problems could be forced into a form that could execute on a graphics card. The difficulty was that the GPU had to be controlled through an API designed for graphics.

GPGPU and CUDA GPGPU stands for General-Purpose computation on a Graphics Processing Unit. As mentioned above, this style used the traditional graphics API and graphics pipeline in a way that was only accidentally useful for general computation. CUDA, by contrast, was designed intentionally to allow direct access to the graphics hardware, with programming in a variant of C/C++.

GPU Trends
- GPUs implement OpenGL and DirectX.
- New GPU generations appear every few months.
- Parallel computing and graphics are coming together in a new and exciting way.
- Heterogeneous computing: data parallelism on the GPU; more coarse-grained parallelism on the (multi-core) CPU.

Parallel Computing on a GPU (© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign)
- 8-series GPUs deliver 25 to 200+ GFLOPS on compiled parallel C applications; available in laptops, desktops, and clusters (GeForce 8800, Tesla S870, Tesla D870).
- GPU parallelism is doubling every year.
- The programming model scales transparently.
- Programmable in C with CUDA tools.
- The multithreaded SPMD model uses both application data parallelism and thread parallelism.

Example: NVIDIA Tesla (from Chapter 7 — Multicores, Multiprocessors, and Clusters) [Figure: a Tesla streaming multiprocessor, containing 8 streaming processors.]

A Fixed-Function GPU Pipeline [Figure: the CPU (host) connects through the Host Interface to the GPU pipeline: Vertex Control (with a Vertex Cache) → VS/T&L → Triangle Setup → Raster → Shader (with a Texture Cache) → ROP → FBI → Frame Buffer Memory.]

Programmable Vertex and Pixel Processors [Figure: an example of separate vertex and fragment processors in a programmable graphics pipeline. A 3D application or game issues 3D API commands through OpenGL or Direct3D; across the CPU-GPU boundary, the GPU front end receives the command and data stream. Pre-transformed vertices (via a vertex index stream) pass through a programmable vertex processor, then primitive assembly produces assembled polygons, lines, and points for rasterization and interpolation. Rasterized pre-transformed fragments (via a pixel location stream) pass through a programmable fragment processor, and raster operations apply the resulting pixel updates to the framebuffer.]

Unified Graphics Pipeline [Figure: the G80 unified pipeline. The host feeds a data assembler; vertex, geometry, and pixel thread issue units (with setup/raster/ZCull) dispatch work to a single thread processor built from groups of streaming processors (SP), each group with an L1 cache and texture fetch (TF) unit, backed by L2 caches and frame buffer (FB) partitions.]

Multi-threading Hides Latency (CS248 Lecture 14, Kurt Akeley, Fall 2007) [Figure: threads alternate between instruction fetch/execute and blocking on a memory reference (or a resulting data dependency). A blocked thread returns to the ready-to-run pool when its memory data becomes available (dependency resolved). The processor stalls only if no threads are ready to run, a possible result of large thread context (too many live registers).] The per-thread context in the figure is a vertex:

    struct {
        float x, y, z, w;    /* position */
        float r, g, b, a;    /* color */
    } vertex;

Overview
- CUDA programming model: basic concepts and data types
- CUDA application programming interface: the basics
- Simple examples to illustrate basic concepts and functionality
- Performance features will be covered later

CUDA – C with no shader limitations! An integrated host+device application is a single C program:
- Serial or modestly parallel parts run in host C code.
- Highly parallel parts run in device SPMD kernel C code.
Execution alternates between the two:

    Serial Code (host)
    Parallel Kernel (device):  KernelA<<< nBlk, nTid >>>(args);
    Serial Code (host)
    Parallel Kernel (device):  KernelB<<< nBlk, nTid >>>(args);
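To make the pattern concrete, here is a minimal sketch of such an integrated program; the kernel, sizes, and data are illustrative, not from the lecture:

    #include <stdio.h>
    #include <cuda_runtime.h>

    // Device kernel: each thread increments one element (SPMD).
    __global__ void IncrementKernel(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    int main(void) {
        const int n = 1024;
        float h[1024];
        for (int i = 0; i < n; ++i) h[i] = (float)i;   // serial host code

        float *d;
        cudaMalloc((void**)&d, n * sizeof(float));
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

        IncrementKernel<<<n / 256, 256>>>(d, n);       // parallel device kernel

        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d);
        printf("h[0] = %f\n", h[0]);                   // serial host code again
        return 0;
    }

The cudaMalloc and cudaMemcpy calls used here are covered in the memory-model slides below.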

CUDA Devices and Threads A compute device:
- is a coprocessor to the CPU, or host;
- has its own DRAM (device memory);
- runs many threads in parallel;
- is typically a GPU, but can also be another type of parallel processing device.
Data-parallel portions of an application are expressed as device kernels which run on many threads. Differences between GPU and CPU threads:
- GPU threads are extremely lightweight, with very little creation overhead.
- A GPU needs thousands of threads for full efficiency; a multi-core CPU needs only a few.

Extended C
- Declaration specifiers: __global__, __device__, __shared__, __local__, __constant__
- Keywords: threadIdx, blockIdx
- Intrinsics: __syncthreads()
- Runtime API: memory, symbol, and execution management
- Function launch syntax

    __device__ float filter[N];

    __global__ void convolve(float *image) {
        __shared__ float region[M];
        ...
        region[threadIdx.x] = image[i];
        __syncthreads();
        ...
        image[j] = result;
    }

    // Allocate GPU memory
    void *myimage;
    cudaMalloc(&myimage, bytes);

    // Launch with 100 blocks, 10 threads per block
    convolve<<<100, 10>>>(myimage);

Compilation of a CUDA Program [Figure: the integrated source (foo.cu) is processed by cudacc, built on the EDG C/C++ frontend and the Open64 global optimizer. The CPU host code (foo.cpp) is compiled with gcc / cl; the GPU assembly (foo.s) goes through OCG to produce G80 SASS (foo.sass).] Reference: Mark Murphy, "NVIDIA's Experience with Open64" (/Papers/101.doc).

Arrays of Parallel Threads A CUDA kernel is executed by an array of threads:
- All threads run the same code (SPMD).
- Each thread has an ID that it uses to compute memory addresses and make control decisions.

    float x = input[threadID];
    float y = func(x);
    output[threadID] = y;

Thread Blocks: Scalable Cooperation Divide the monolithic thread array into multiple blocks (Thread Block 0, Thread Block 1, ..., Thread Block N-1), each running the same code on its own portion of the data:

    float x = input[threadID];
    float y = func(x);
    output[threadID] = y;

- Threads within a block cooperate via shared memory, atomic operations, and barrier synchronization.
- Threads in different blocks cannot cooperate.

CUDA Memory Model Overview Global memory:
- is the main means of communicating R/W data between host and device;
- has contents visible to all threads;
- has long-latency access.
We will focus on global memory for now; constant and texture memory will come later. [Figure: a grid of thread blocks; each block has its own shared memory, each thread its own registers; all blocks share global memory, which the host can also read and write.]

CUDA Device Memory Allocation
cudaMalloc():
- Allocates an object in device global memory.
- Requires two parameters: the address of a pointer to the allocated object, and the size of the allocated object.
cudaFree():
- Frees an object from device global memory, given the pointer to the freed object.

CUDA Host-Device Data Transfer
cudaMemcpy():
- Performs memory data transfer.
- Requires four parameters: a pointer to the destination, a pointer to the source, the number of bytes copied, and the type of transfer (host to host, host to device, device to host, or device to device).
- An asynchronous variant of the transfer is also available.
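A brief sketch showing the three calls together (the array name and size are illustrative):

    float h_A[256];                     // host array
    float *d_A;                         // device pointer
    size_t size = 256 * sizeof(float);

    cudaMalloc((void**)&d_A, size);     // two parameters: address of a pointer, size
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);  // dest, src, bytes, direction
    /* ... launch kernels that read and write d_A ... */
    cudaMemcpy(h_A, d_A, size, cudaMemcpyDeviceToHost);
    cudaFree(d_A);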

CUDA Function Declarations

                                      Executed on:    Only callable from:
    __device__ float DeviceFunc()     device          device
    __global__ void  KernelFunc()     device          host
    __host__   float HostFunc()       host            host

- __global__ defines a kernel function; it must return void.
- __device__ and __host__ can be used together.
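For example, a function declared with both qualifiers is compiled for both the host and the device (a sketch; the functions are illustrative):

    // Compiled for both CPU and GPU; callable from either side.
    __host__ __device__ float square(float x) {
        return x * x;
    }

    __global__ void SquareKernel(float *v, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] = square(v[i]);    // device-side call
    }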

CUDA Function Declarations (cont.)
- __device__ functions cannot have their address taken.
- For functions executed on the device:
  - no recursion;
  - no static variable declarations inside the function;
  - no variable number of arguments.

Sample Problem: Matrix Multiply In this section, we take a simple problem from standard sequential computation and adapt it for efficient execution on a CUDA device. Let A, B, and C be N-by-N square matrices, with each index in the range [0, N-1]. The original code uses a triple loop, so its time complexity is O(N³). Note the use of the variable SUM to avoid multiple references to C[I][J].

The Sequential Code

    For I = 0 to (N - 1) Do
        For J = 0 to (N - 1) Do
            SUM = 0 ;
            For K = 0 to (N - 1) Do
                SUM = SUM + A[I][K] × B[K][J] ;
            End For
            C[I][J] = SUM ;
        End For
    End For

Memory Layout of a Matrix in C [Figure: a 4×4 matrix M shown both as a 2D grid and as the flattened 1D sequence it occupies in linear memory; the elements of one row are adjacent, and successive rows follow one another (row-major order).]

1D Representation of a 2D Array Assume a 2D array A[N][N] laid out in row-major order, as above. The array can be accessed either as a 2D array or as a 1D array. The element A[I][J] is referenced in one dimension as A[I × N + J]. This transformation is exactly what a modern compiler will do in handling the array access.
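A small C sketch of this equivalence (the array and size are illustrative):

    #include <assert.h>

    #define N 4

    int main(void) {
        float A[N][N];
        float *flat = &A[0][0];             // view the same storage as 1D

        A[2][3] = 42.0f;
        assert(flat[2 * N + 3] == 42.0f);   // A[I][J] == flat[I*N + J]
        return 0;
    }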

Multiplication with 1D Arrays

    For I = 0 to (N - 1) Do
        For J = 0 to (N - 1) Do
            SUM = 0 ;
            For K = 0 to (N - 1) Do
                SUM = SUM + A[I × N + K] × B[K × N + J] ;
            End For
            C[I × N + J] = SUM ;
        End For
    End For

Efficiency in Computing the Index Consider the statement SUM = SUM + A[I × N + K] × B[K × N + J]. This involves two multiplications just to generate the indices into the arrays A and B. In general, we want to avoid multiplication when there is a simpler approach that is obvious and easy to understand. We now evolve the more efficient algorithm.

Modifying the Index Calculation This modification affects only the inner loop of the example code. The original code is

    For K = 0 to (N - 1) Do
        SUM = SUM + A[I × N + K] × B[K × N + J] ;
    End For

We now modify that code as follows:

    For K = 0 to (N - 1) Do
        L = I × N + K ;
        M = K × N + J ;
        SUM = SUM + A[L] × B[M] ;
    End For

Sequence of the Indices Here we watch L and M as K is incremented.

    For K = 0 to (N - 1) Do
        L = I × N + K ;
        M = K × N + J ;
        SUM = SUM + A[L] × B[M] ;
    End For

    K = 0:   L = I × N        M = J
    K = 1:   L = I × N + 1    M = J + N
    K = 2:   L = I × N + 2    M = J + 2 × N
    K = 3:   L = I × N + 3    M = J + 3 × N

On each iteration, L increases by 1 and M increases by N, so the multiplications can be replaced by additions.

The Optimized Sequential Code

    For I = 0 to (N - 1) Do
        For J = 0 to (N - 1) Do
            SUM = 0 ;
            L = I × N ;
            M = J ;
            For K = 0 to (N - 1) Do
                SUM = SUM + A[L] × B[M] ;
                L = L + 1 ;
                M = M + N ;
            End For
            C[I × N + J] = SUM ;
        End For
    End For
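The same loop translates directly into C; here is a sketch (the function name is illustrative):

    /* C = A * B for N-by-N matrices stored as 1D arrays in row-major order.
       The inner-loop index arithmetic is strength-reduced: L steps by 1
       along a row of A, and M steps by n down a column of B. */
    void matmul(const float *A, const float *B, float *C, int n) {
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) {
                float sum = 0.0f;
                int L = i * n;        // index into A
                int M = j;            // index into B
                for (int k = 0; k < n; ++k) {
                    sum += A[L] * B[M];
                    L += 1;
                    M += n;
                }
                C[i * n + j] = sum;
            }
        }
    }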

A Square Array of Processors Now imagine one processor per output element: processor P[I][J] handles array element C[I][J].

    SUM = 0 ;
    L = I × N ;
    M = J ;
    INJ = L + M ;    // This is I × N + J.
    For K = 0 to (N - 1) Do
        SUM = SUM + A[L] × B[M] ;
        L = L + 1 ;
        M = M + N ;
    End For
    C[INJ] = SUM ;   // This is C[I][J].

Block IDs and Thread IDs Each thread uses IDs to decide what data to work on:
- Block ID: 1D or 2D
- Thread ID: 1D, 2D, or 3D
This simplifies memory addressing when processing multidimensional data: image processing, solving PDEs on volumes, and so on.
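For 2D data such as an image, a thread typically combines its 2D block and thread IDs into global pixel coordinates; the following kernel is an illustrative sketch, not from the lecture:

    __global__ void Brighten(float *img, int width, int height) {
        // Global pixel coordinates from 2D block and thread IDs.
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            img[y * width + x] += 0.1f;   // row-major, as in the matrix layout above
    }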

Revised Matrix Multiplication Kernel Using Multiple Blocks

    __global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
    {
        // Calculate the row index of the Pd element (and of Md).
        int Row = blockIdx.y * TILE_WIDTH + threadIdx.y;
        // Calculate the column index of Pd (and of Nd).
        int Col = blockIdx.x * TILE_WIDTH + threadIdx.x;

        float Pvalue = 0;
        // Each thread computes one element of the block sub-matrix.
        for (int k = 0; k < Width; ++k)
            Pvalue += Md[Row * Width + k] * Nd[k * Width + Col];
        Pd[Row * Width + Col] = Pvalue;
    }
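Host code to launch this kernel might look like the following sketch; the value of TILE_WIDTH, the helper function, and the assumption that Width is a multiple of TILE_WIDTH (the kernel has no bounds check) are illustrative choices, not from the slides:

    #define TILE_WIDTH 16

    void MatrixMulOnDevice(const float *M, const float *N, float *P, int Width)
    {
        size_t size = Width * Width * sizeof(float);
        float *Md, *Nd, *Pd;

        // Allocate device memory and copy the input matrices over.
        cudaMalloc((void**)&Md, size);
        cudaMalloc((void**)&Nd, size);
        cudaMalloc((void**)&Pd, size);
        cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
        cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);

        // One thread per output element: a 2D grid of 2D blocks.
        dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
        dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH);
        MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

        // Copy the result back and release device memory.
        cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);
        cudaFree(Md); cudaFree(Nd); cudaFree(Pd);
    }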