CUDA More on Blocks/Threads

Debugging Using the Device Emulation Mode
An executable compiled in device emulation mode (nvcc -deviceemu) runs completely on the host using the CUDA runtime:
– No device or CUDA driver is needed
– Each device thread is emulated with a host thread
Running in device emulation mode, one can:
– Use host native debug support (breakpoints, inspection, etc.); compile with -g -G and debug with cuda-gdb
– Access any device-specific data from host code and vice versa
– Call any host function from device code (e.g. printf) and vice versa
– Detect deadlock situations caused by improper usage of __syncthreads
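For instance, a kernel containing a debug printf works here precisely because each emulated device thread is an ordinary host thread (a minimal sketch; the kernel and data names are made up for illustration):

#include <cstdio>

__global__ void debugKernel(const int *data)
{
    int i = threadIdx.x;
    // Under emulation this is an ordinary host printf call, one per emulated thread.
    printf("thread %d sees value %d\n", i, data[i]);
}

// Historical build lines:
//   nvcc -deviceemu debug.cu     (device emulation on the host)
//   nvcc -g -G debug.cu          (device debug build for cuda-gdb)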

Device Emulation Mode Pitfalls
– Emulated device threads execute sequentially, so simultaneous accesses of the same memory location by multiple threads could produce different results than on the device.
– Dereferencing device pointers on the host, or host pointers on the device, can produce correct results in device emulation mode but will generate an error in device execution mode.

Floating Point
Results of floating-point computations may differ slightly between emulation and the device because of:
– Different compiler outputs and instruction sets
– Use of extended precision for intermediate results on the host
There are various compiler options to force strict single precision on the host.

CUDA Thread Block
All threads in a block execute the same kernel program (SPMD). The programmer declares the block:
– Block size: 1 to 512 concurrent threads
– Block shape: 1D, 2D, or 3D
– Block dimensions in threads
Threads have thread id numbers within the block; the thread program uses the thread id to select work and address shared data. Threads in the same block share data and synchronize while doing their share of the work. Threads in different blocks cannot cooperate:
– Each block can execute in any order relative to other blocks!
– To enforce an order, end the kernel and go back to the host.
[Figure: a thread block with thread ids 0 … m, each running the thread program. Courtesy: John Nickolls, NVIDIA]
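As a minimal sketch of a thread program using its thread id to select work (the kernel and array names here are made up):

__global__ void scaleBlock(float *data, float factor)
{
    // Each thread in the block handles exactly one element,
    // selected by its thread id within the block.
    int i = threadIdx.x;
    data[i] = data[i] * factor;
}

// One block of 256 concurrent threads (within the 512-thread block limit):
// scaleBlock<<<1, 256>>>(dev_data, 2.0f);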

G80 CUDA Mode – A Review
– Processors execute computing threads
– New operating mode / HW interface for computing

Transparent Scalability
Hardware is free to assign blocks to any processor at any time: a kernel scales across any number of parallel processors. Each block can execute in any order relative to other blocks.
[Figure: the same eight-block kernel grid executed over time on a small device a few blocks at a time and on a larger device many blocks at a time.]

G80 Example: Executing Thread Blocks
Threads are assigned to Streaming Multiprocessors (SMs) at block granularity:
– Up to 8 blocks per SM, as resources allow
– An SM in G80 can take up to 768 threads: could be 256 threads/block * 3 blocks, or 128 threads/block * 6 blocks, etc.
Threads run concurrently:
– The SM maintains thread/block id #s
– The SM manages/schedules thread execution
[Figure: thread blocks (threads t0 t1 t2 … tm) assigned to SM 0 and SM 1, each SM with its MT issue unit, SPs, and shared memory.]

G80 Example: Thread Scheduling
Each block is executed as 32-thread warps:
– An implementation decision, not part of the CUDA programming model
– Warps are the scheduling units in an SM
If 3 blocks are assigned to an SM and each block has 256 threads, how many warps are there in the SM?
– Each block is divided into 256/32 = 8 warps
– There are 8 * 3 = 24 warps
[Figure: Block 1 and Block 2 warps (threads t0 t1 t2 … t31) feeding a Streaming Multiprocessor with instruction L1, instruction fetch/dispatch, SPs, SFUs, and shared memory.]

G80 Example: Thread Scheduling (Cont.)
The SM implements zero-overhead warp scheduling:
– At any time, only one of the warps is executed by the SM
– Warps whose next instruction has its operands ready for consumption are eligible for execution
– Eligible warps are selected for execution according to a prioritized scheduling policy
– All threads in a warp execute the same instruction when selected

G80 Block Granularity Considerations
For matrix multiplication using multiple blocks, should I use 8x8, 16x16, or 32x32 threads per block?
– For 8x8, we have 64 threads per block. Since each SM can take up to 768 threads, that would allow 12 blocks. However, each SM can only take up to 8 blocks, so only 8 * 64 = 512 threads will go into each SM!
– For 16x16, we have 256 threads per block. Since each SM can take up to 768 threads, it can take 3 blocks and reach full capacity, unless other resource considerations overrule.
– For 32x32, we have 1024 threads per block. Not even one block can fit into an SM!
Our earlier Julia fractal implementation was not as good as it could have been; why not?
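The block-size arithmetic above can be checked with a small host-side helper (a sketch hard-coding the G80 limits quoted on this slide; the function name is made up):

#include <cstdio>

// G80 limits from this slide: at most 8 resident blocks and 768 resident
// threads per SM, and at most 512 threads in any single block.
int residentThreadsPerSM(int threadsPerBlock)
{
    if (threadsPerBlock > 512) return 0;                      // block cannot fit at all
    int blocksByThreads = 768 / threadsPerBlock;              // thread-budget limit
    int blocks = blocksByThreads < 8 ? blocksByThreads : 8;   // block-budget limit
    return blocks * threadsPerBlock;
}

int main()
{
    printf("8x8:   %d threads per SM\n", residentThreadsPerSM(8 * 8));    // 512
    printf("16x16: %d threads per SM\n", residentThreadsPerSM(16 * 16));  // 768
    printf("32x32: %d threads per SM\n", residentThreadsPerSM(32 * 32));  // 0
    return 0;
}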

Sub-Blocks and Threads
[Figure: a 4x4 matrix P of elements P(0,0) … P(3,3), partitioned into four 2x2 tiles handled by Block(0,0), Block(1,0), Block(0,1), and Block(1,1), with TILE_WIDTH = 2.]
Mapping to row and column (here TILE_WIDTH equals blockDim.x):
Row in P = blockIdx.y * TILE_WIDTH + threadIdx.y
Column in P = blockIdx.x * TILE_WIDTH + threadIdx.x
Then map to the 1D array: P[Row * WIDTH + Column] = Value

Example Matrix Mul program:

#define DIM 4

__global__ void MatrixGenerate(int* M, int* N, int* P, int width)
{
    int row = blockIdx.y * blockDim.x + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    P[row * width + col] = (row + col);
}

dim3 blocks(DIM/2, DIM/2);
dim3 threads(DIM/2, DIM/2);
MatrixGenerate<<<blocks, threads>>>(dev_m, dev_n, dev_p, DIM);
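The host-side setup that this launch assumes might look roughly as follows (a sketch continuing the fragment above; host_p is an assumed host buffer and error checking is omitted):

#include <cstdlib>

int main()
{
    size_t size = DIM * DIM * sizeof(int);
    int *host_p = (int*)malloc(size);
    int *dev_m, *dev_n, *dev_p;

    cudaMalloc((void**)&dev_m, size);
    cudaMalloc((void**)&dev_n, size);
    cudaMalloc((void**)&dev_p, size);

    dim3 blocks(DIM/2, DIM/2);      // 2x2 grid of blocks for DIM = 4
    dim3 threads(DIM/2, DIM/2);     // 2x2 threads per block, 4x4 threads in total
    MatrixGenerate<<<blocks, threads>>>(dev_m, dev_n, dev_p, DIM);

    cudaMemcpy(host_p, dev_p, size, cudaMemcpyDeviceToHost);  // fetch the result

    cudaFree(dev_m); cudaFree(dev_n); cudaFree(dev_p);
    free(host_p);
    return 0;
}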

Improved Julia Fractal
Change the block/thread size to better utilize the thread support per SM:

#define DIM 3008   // 16*188

__global__ void kernel(char *ptr)
{
    int row = blockIdx.y * blockDim.x + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int offset = col + row * DIM;
    ptr[offset] = julia(row, col);
}

dim3 blocks(188, 188);
dim3 threads(16, 16);
kernel<<<blocks, threads>>>(dev_charmap);

Long Vectors
Using 1 block, we are limited to 512 threads, and there is a maximum of 65,535 blocks in a grid dimension. If you want to operate on something longer than a single block can handle, even if it is 1D, then we have to combine blocks and threads.
[Figure: Block 0, Block 1, Block 2, Block 3, …, each containing Thread 0 through Thread 4.]
array index = (blockIdx.x * blockDim.x) + threadIdx.x = 0 to 32000*5 + 4 = 160,004

Arbitrarily Long Vectors
The limit is 512 threads per block, so there is a failure if the vector is of size N and N/512 > 65535, i.e. N > 65535*512 = 33,553,920 elements. That is pretty big, but we could have capacity for up to 4 GB.
Solution: assign a range of data values to each thread instead of having each thread operate on only one value.
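One common way to give each thread a range of elements is a stride loop across the whole launch (a sketch; vecAdd and the pointer names are illustrative):

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Total number of launched threads, used as the stride.
    int stride = gridDim.x * blockDim.x;

    // Each thread starts at its global index and strides through the rest of
    // the vector, so any n is covered by a launch of bounded size.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        c[i] = a[i] + b[i];
}

// e.g. vecAdd<<<65535, 512>>>(dev_a, dev_b, dev_c, n);   // handles n > 65535*512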

Some Additional API Features

Language Extensions: Built-in Variables
dim3 gridDim; – dimensions of the grid in blocks (gridDim.z unused)
dim3 blockDim; – dimensions of the block in threads
uint3 blockIdx; – block index within the grid
uint3 threadIdx; – thread index within the block
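A small sketch of how these variables combine into a global 2D index (the kernel name is made up):

__global__ void whereAmI(int *out, int width)
{
    // Global coordinates of this thread within the whole grid.
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;

    // The launch covers (gridDim.x * blockDim.x) columns and
    // (gridDim.y * blockDim.y) rows of threads.
    out[row * width + col] = row * width + col;
}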

Common Runtime Component: Mathematical Functions
pow, sqrt, cbrt, hypot
exp, exp2, expm1
log, log2, log10, log1p
sin, cos, tan, asin, acos, atan, atan2
sinh, cosh, tanh, asinh, acosh, atanh
ceil, floor, trunc, round
etc.
– When executed on the host, a given function uses the C runtime implementation if available
– These functions are only supported for scalar types, not vector types
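For example, the same scalar calls compile for both host and device (a sketch; the kernel is illustrative):

#include <cmath>

__global__ void waveKernel(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = sinf(sqrtf((float)i));   // device math library on the GPU
}

// The identical expression in host code falls back to the C runtime:
// float y = sinf(sqrtf((float)i));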

Device Runtime Component: Mathematical Functions
Some single-precision mathematical functions (e.g. sinf(x)) have a less accurate but faster device-only intrinsic version (e.g. __sinf(x)):
– __powf
– __logf, __log2f, __log10f
– __expf
– __sinf, __cosf, __tanf
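A sketch of trading accuracy for speed with the single-precision intrinsics (the kernel is illustrative; nvcc's --use_fast_math option applies this substitution globally):

__global__ void fastWave(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        float x = 0.001f * i;
        // Accurate:   out[i] = sinf(x) * expf(-x);
        // Faster, less accurate device-only intrinsics:
        out[i] = __sinf(x) * __expf(-x);
    }
}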

Host Runtime Component
Provides functions to deal with:
– Device management (including multi-device systems)
– Memory management
– Error handling
The runtime initializes the first time a runtime function is called. A host thread can invoke device code on only one device, so multiple host threads are required to run on multiple devices.
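A sketch touching all three areas from a single host thread (minimal error handling, illustrative only):

#include <cstdio>

int main()
{
    // Device management: enumerate devices and bind this host thread to one.
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);
    cudaSetDevice(0);

    // Memory management: allocate and free device memory.
    float *dev_buf = 0;
    cudaError_t err = cudaMalloc((void**)&dev_buf, 1024 * sizeof(float));

    // Error handling: every runtime call returns a cudaError_t.
    if (err != cudaSuccess)
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));

    cudaFree(dev_buf);
    return 0;
}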

Device Runtime Component: Synchronization Function
void __syncthreads();
– Synchronizes all threads in a block
– Once all threads have reached this point, execution resumes normally
– Used to avoid RAW / WAR / WAW hazards when accessing shared or global memory
– Allowed in conditional constructs only if the conditional is uniform across the entire thread block
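A sketch of the typical pattern: stage data in shared memory, synchronize, then read what other threads wrote (block size and kernel name are assumptions):

#define BLOCK 256

__global__ void reverseInBlock(float *data)
{
    __shared__ float tile[BLOCK];

    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Phase 1: each thread writes one element into shared memory.
    tile[threadIdx.x] = data[i];

    // Barrier: without it, the read below could see stale values (RAW hazard).
    __syncthreads();

    // Phase 2: each thread reads an element written by another thread.
    data[i] = tile[blockDim.x - 1 - threadIdx.x];
}

// Launched as reverseInBlock<<<n / BLOCK, BLOCK>>>(dev_data); each block's
// segment of the array is reversed in place.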