1
Lecture 19: Revisiting Strides, CUDA Threads… Topics: strides through memory; practical performance considerations; readings. November 7, 2012. CSCE 513 Advanced Computer Architecture
2
– 2 – CSCE 513 Fall 2012 Overview
Last Time: intro to CUDA/GPU programming.
Readings for today: Stanford CS193G (iTunes) – http://code.google.com/p/stanford-cs193g-sp2010/ and http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule; book (online) – David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009, http://courses.engr.illinois.edu/ece498/al/Syllabus.html, Chapters 1-3.
New OpenMP examples – SC 2008 (link emailed Tuesday). Nvidia CUDA – example.
3
– 3 – CSCE 513 Fall 2012 Nvidia
NVIDIA Developer Zone – http://developer.nvidia.com/
CUDA Toolkit downloads – http://developer.nvidia.com/cuda-toolkit-41: C/C++ compiler, CUDA-GDB, Visual Profiler, CUDA Memcheck, GPU-accelerated libraries, other tools & documentation. Developer Drivers downloads. GPU Computing SDK downloads.
4
– 4 – CSCE 513 Fall 2012 Stanford CS 193G
http://code.google.com/p/stanford-cs193g-sp2010/wiki/TutorialPrerequisites
Vincent Natol, "Kudos for CUDA," HPC Wire (2010).
Patterson, David A.; Hennessy, John L. (2011). Computer Architecture: A Quantitative Approach (The Morgan Kaufmann Series in Computer Architecture and Design), Kindle edition, Kindle Locations 7530-7532. Elsevier Science.
5
– 5 – CSCE 513 Fall 2012 Lessons from Graphics Pipeline
Throughput is paramount: must paint every pixel within frame time; scalability. Create, run, & retire lots of threads very rapidly: measured 14.8 Gthread/s on an increment() kernel. Use multithreading to hide latency: 1 stalled thread is OK if 100 are ready to run.
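The 14.8 Gthread/s figure refers to a trivial increment() kernel; the benchmark code is not shown on the slide, so the following is only a sketch of the kind of kernel such a measurement uses (names are hypothetical):

__global__ void increment(int *data)
{
    // one completely independent increment per thread -- ideal for measuring
    // how fast the GPU can create, run, and retire threads
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    data[i] = data[i] + 1;
}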
6
– 6 – CSCE 513 Fall 2012 Why is this different from a CPU?
Different goals produce different designs: the GPU assumes the workload is highly parallel, while the CPU must be good at everything, parallel or not.
CPU: minimize latency experienced by 1 thread – big on-chip caches, sophisticated control logic.
GPU: maximize throughput of all threads – # threads in flight limited by resources => lots of resources (registers, bandwidth, etc.); multithreading can hide latency => skip the big caches; share control logic across many threads.
7
– 7 – CSCE 513 Fall 2012 NVIDIA GPU Architecture
(Diagram: Fermi GF100 – host interface, GigaThread scheduler, multiple DRAM interfaces, and L2 cache.)
8
– 8 – CSCE 513 Fall 2012 SM (Streaming Multiprocessor)
(Diagram: instruction cache, two schedulers with dispatch units, register file, 32 cores, 16 load/store units, 4 special function units, interconnect network, 64K configurable cache/shared memory, uniform cache.)
32 CUDA cores per SM (512 total); each core executes an identical instruction or sleeps; 24 active warps limit. 8x peak FP64 performance; 50% of peak FP32 performance. Direct load/store to memory: usual linear sequence of bytes, high bandwidth (hundreds of GB/sec). 64KB of fast, on-chip RAM: software or hardware-managed, shared amongst CUDA cores, enables thread communication.
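The 64KB of configurable on-chip RAM is split between L1 cache and shared memory. A minimal sketch, using the CUDA runtime API, of how a program states its preference for a particular kernel (my_kernel is a placeholder, not a kernel from these slides):

// Ask the runtime to favor shared memory for my_kernel
// (on Fermi: 48KB shared / 16KB L1; cudaFuncCachePreferL1 requests the opposite split).
cudaFuncSetCacheConfig(my_kernel, cudaFuncCachePreferShared);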
9
– 9 – CSCE 513 Fall 2012 Key Architectural Ideas
SIMT (Single Instruction Multiple Thread) execution: threads run in groups of 32 called warps; threads in a warp share an instruction unit (IU); HW automatically handles divergence.
Hardware multithreading: HW resource allocation & thread scheduling; HW relies on threads to hide latency.
Threads have all resources needed to run; any warp not waiting for something can run; context switching is (basically) free.
(SM diagram repeated from the previous slide.)
10
– 10 – CSCE 513 Fall 2012 C for CUDA
Philosophy: provide the minimal set of extensions necessary to expose power.

Function qualifiers:
__global__ void my_kernel() { }
__device__ float my_device_func() { }

Variable qualifiers:
__constant__ float my_constant_array[32];
__shared__ float my_shared_array[32];

Execution configuration:
dim3 grid_dim(100, 50);  // 5000 thread blocks
dim3 block_dim(4, 8, 8); // 256 threads per block
my_kernel<<< grid_dim, block_dim >>>(...); // Launch kernel

Built-in variables and functions valid in device code:
dim3 gridDim;   // Grid dimension
dim3 blockDim;  // Block dimension
dim3 blockIdx;  // Block index
dim3 threadIdx; // Thread index
void __syncthreads(); // Thread synchronization
11
– 11 – CSCE 513 Fall 2012 Example: vector_addition (Device Code)

// compute vector sum c = a + b
// each thread performs one pair-wise addition
__global__ void vector_add(float* A, float* B, float* C)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    // elided initialization code
    ...
    // Run N/256 blocks of 256 threads each
    vector_add<<< N/256, 256 >>>(d_A, d_B, d_C);
}
12
– 12 – CSCE 513 Fall 2012 Example: vector_addition (Host Code)

// compute vector sum c = a + b
// each thread performs one pair-wise addition
__global__ void vector_add(float* A, float* B, float* C)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    // elided initialization code
    ...
    // launch N/256 blocks of 256 threads each
    vector_add<<< N/256, 256 >>>(d_A, d_B, d_C);
}
13
– 13 – CSCE 513 Fall 2012 Example: Initialization code for vector_addition

// allocate and initialize host (CPU) memory
float *h_A = …, *h_B = …;

// allocate device (GPU) memory
float *d_A, *d_B, *d_C;
cudaMalloc( (void**) &d_A, N * sizeof(float) );
cudaMalloc( (void**) &d_B, N * sizeof(float) );
cudaMalloc( (void**) &d_C, N * sizeof(float) );

// copy host memory to device
cudaMemcpy( d_A, h_A, N * sizeof(float), cudaMemcpyHostToDevice );
cudaMemcpy( d_B, h_B, N * sizeof(float), cudaMemcpyHostToDevice );

// launch N/256 blocks of 256 threads each
vector_add<<< N/256, 256 >>>(d_A, d_B, d_C);
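The slide stops at the kernel launch. A sketch of the steps that typically follow, assuming a host result buffer h_C that the slide does not declare:

// copy the result back to the host and release memory (h_C is assumed)
cudaMemcpy( h_C, d_C, N * sizeof(float), cudaMemcpyDeviceToHost );
cudaFree( d_A );  cudaFree( d_B );  cudaFree( d_C );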
14
– 14 – CSCE 513 Fall 2012 CUDA Programming Model Parallel code (kernel) is launched and executed on a device by many threads Launches are hierarchical Threads are grouped into blocks Blocks are grouped into grids Familiar serial code is written for a thread Each thread is free to execute a unique code path Built-in thread and block ID variables
15
– 15 – CSCE 513 Fall 2012 DAXPY example in text
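The figure for this slide is not reproduced here. A sketch along the lines of the textbook's CUDA version of DAXPY (y = a*x + y), with a guard so extra threads in the last block do nothing; d_x and d_y are assumed device pointers:

__global__ void daxpy(int n, double a, double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Host side: launch enough 256-thread blocks to cover all n elements
// daxpy<<< (n + 255) / 256, 256 >>>(n, 2.0, d_x, d_y);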
16
– 16 – CSCE 513 Fall 2012
17
– 17 – CSCE 513 Fall 2012 High Level View
(Diagram: CPU and chipset connected to the GPU over PCIe; the GPU has global memory and per-SM shared memory, SMEM.)
18
– 18 – CSCE 513 Fall 2012 Blocks of threads run on an SM
(Diagram: a single thread with its registers and memory runs on a streaming processor; a thread block with per-block shared memory, SMEM, runs on a streaming multiprocessor.)
19
– 19 – CSCE 513 Fall 2012 Whole grid runs on GPU
(Diagram: many blocks of threads, each with its own SMEM, all sharing the GPU's global memory.)
20
– 20 – CSCE 513 Fall 2012 Thread Hierarchy Threads launched for a parallel section are partitioned into thread blocks Grid = all blocks for a given launch Thread block is a group of threads that can: Synchronize their execution Communicate via shared memory
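A minimal sketch of both capabilities: a hypothetical kernel that reverses each block's 256-element segment in place by staging it in shared memory and synchronizing before reading it back out:

__global__ void reverse_each_block(int *d)   // assumes blockDim.x == 256
{
    __shared__ int s[256];                    // visible to every thread in this block
    int t = threadIdx.x;
    int base = blockIdx.x * blockDim.x;
    s[t] = d[base + t];                       // each thread stages one element
    __syncthreads();                          // barrier: wait for the whole block
    d[base + t] = s[blockDim.x - 1 - t];      // now safe to read another thread's element
}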
21
– 21 – CSCE 513 Fall 2012 Memory Model
(Diagram: sequential kernels, Kernel 0 then Kernel 1, all reading and writing the same per-device global memory.)
22
– 22 – CSCE 513 Fall 2012 IDs and Dimensions
Threads: 3D IDs, unique within a block. Blocks: 2D IDs, unique within a grid. Dimensions are set at launch and can be unique for each grid. Built-in variables: threadIdx, blockIdx, blockDim, gridDim.
(Diagram: a device grid of 3x2 blocks; Block (1, 1) expanded into a 5x3 array of threads.)
23
– 23 – CSCE 513 Fall 2012 Kernel with 2D Indexing

__global__ void kernel( int *a, int dimx, int dimy )
{
    int ix = blockIdx.x*blockDim.x + threadIdx.x;
    int iy = blockIdx.y*blockDim.y + threadIdx.y;
    int idx = iy*dimx + ix;
    a[idx] = a[idx]+1;
}
24
– 24 – CSCE 513 Fall 2012

int main()
{
    int dimx = 16;
    int dimy = 16;
    int num_bytes = dimx*dimy*sizeof(int);

    int *d_a=0, *h_a=0; // device and host pointers
    h_a = (int*)malloc(num_bytes);
    cudaMalloc( (void**)&d_a, num_bytes );

    if( 0==h_a || 0==d_a ) {
        printf("couldn't allocate memory\n");
        return 1;
    }

    cudaMemset( d_a, 0, num_bytes );

    dim3 grid, block;
    block.x = 4;
    block.y = 4;
    grid.x = dimx / block.x;
    grid.y = dimy / block.y;

    kernel<<< grid, block >>>( d_a, dimx, dimy );

    cudaMemcpy( h_a, d_a, num_bytes, cudaMemcpyDeviceToHost );

    for(int row=0; row<dimy; row++) {
        for(int col=0; col<dimx; col++)
            printf("%d ", h_a[row*dimx+col] );
        printf("\n");
    }

    free( h_a );
    cudaFree( d_a );
    return 0;
}

// kernel( int *a, int dimx, int dimy ) with 2D indexing as defined on the previous slide
25
– 25 – CSCE 513 Fall 2012
26
– 26 – CSCE 513 Fall 2012 The Problem How do you do global communication? Finish a grid and start a new one http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
27
– 27 – CSCE 513 Fall 2012 Global Communication
Finish a kernel and start a new one. All writes from all threads complete before a kernel finishes.

step1<<<...>>>(...);
// The system ensures that all
// writes from step1 complete.
step2<<<...>>>(...);

http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
28
– 28 – CSCE 513 Fall 2012 Global Communication Would need to decompose kernels into before and after parts http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
29
– 29 – CSCE 513 Fall 2012 Race Conditions Or, write to a predefined memory location Race condition! Updates can be lost http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
30
– 30 – CSCE 513 Fall 2012 Race Conditions

// vector[0] was equal to 0
threadId:0              threadId:1917
vector[0] += 5;         vector[0] += 1;
...                     ...
a = vector[0];          a = vector[0];

What is the value of a in thread 0? What is the value of a in thread 1917?

http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
31
– 31 – CSCE 513 Fall 2012 Race Conditions Thread 0 could have finished execution before 1917 started Or the other way around Or both are executing at the same time Answer: not defined by the programming model, can be arbitrary http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
32
– 32 – CSCE 513 Fall 2012 Atomics CUDA provides atomic operations to deal with this problem http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
33
– 33 – CSCE 513 Fall 2012 Atomics
An atomic operation guarantees that only a single thread has access to a piece of memory while an operation completes. The name atomic comes from the fact that it is uninterruptible. No dropped data, but ordering is still arbitrary. Different types of atomic instructions: atomic{Add, Sub, Exch, Min, Max, Inc, Dec, CAS, And, Or, Xor}. More types in Fermi.
http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
34
– 34 – CSCE 513 Fall 2012 Example: Histogram

// Determine frequency of colors in a picture.
// Colors have already been converted into ints.
// Each thread looks at one pixel and increments
// a counter atomically.
__global__ void histogram(int* colors, int* buckets)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    int c = colors[i];
    atomicAdd(&buckets[c], 1);
}

http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
35
– 35 – CSCE 513 Fall 2012 Example: Workqueue

// For algorithms where the amount of work per item
// is highly non-uniform, it often makes sense for
// threads to continuously grab work from a queue.
__global__ void workq(int* work_q, unsigned int* q_counter,
                      int* output, unsigned int queue_max)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    unsigned int q_index = atomicInc(q_counter, queue_max);
    int result = do_work(work_q[q_index]);
    output[i] = result;
}
36
– 36 – CSCE 513 Fall 2012 Atomics Atomics are slower than normal load/store You can have the whole machine queuing on a single location in memory Atomics unavailable on G80! http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
37
– 37 – CSCE 513 Fall 2012 Example: Global Min/Max (Naive)

// If you require the maximum across all threads
// in a grid, you could do it with a single global
// maximum value, but it will be VERY slow.
__global__ void global_max(int* values, int* gl_max)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    int val = values[i];
    atomicMax(gl_max, val);
}

http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
38
– 38 – CSCE 513 Fall 2012 Example: Global Min/Max (Better)

// introduce intermediate maximum results, so that
// most threads do not try to update the global max
__global__ void global_max(int* values, int* max,
                           int* reg_max, int num_regions)
{
    // i and val as before …
    int region = i % num_regions;
    if(atomicMax(&reg_max[region], val) < val)
    {
        atomicMax(max, val);
    }
}

http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
39
– 39 – CSCE 513 Fall 2012 Global Min/Max Single value causes serial bottleneck Create hierarchy of values for more parallelism Performance will still be slow, so use judiciously See next lecture for even better version! http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
40
– 40 – CSCE 513 Fall 2012 Summary Can’t use normal load/store for inter-thread communication because of race conditions Use atomic instructions for sparse and/or unpredictable global communication See next lectures for shared memory and scan for other communication patterns Decompose data (very limited use of single global sum/max/min/etc.) for more parallelism http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
41
– 41 – CSCE 513 Fall 2012 How an SM executes threads Overview of how a Stream Multiprocessor works SIMT Execution Divergence http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
42
– 42 – CSCE 513 Fall 2012 Scheduling Blocks onto SMs
(Diagram: thread blocks 5, 27, 61, and 2001 waiting to be assigned to a streaming multiprocessor.)
HW schedules thread blocks onto available SMs. No guarantee of ordering among thread blocks. HW will schedule a thread block as soon as a previous thread block finishes.
http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
43
– 43 – CSCE 513 Fall 2012 Warps
(Diagram: eight control/ALU pairs.)
A warp = 32 threads launched together. Usually, they execute together as well.
http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
44
– 44 – CSCE 513 Fall 2012 Mapping of Thread Blocks
Each thread block is mapped to one or more warps. The hardware schedules each warp independently.
(Diagram: Thread Block N of 128 threads maps to four warps, TB N W1 through TB N W4.)
http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
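A small sketch (hypothetical kernel) showing how this mapping looks from device code: for a 1D block, the warp and lane of a thread are just arithmetic on threadIdx.x:

__global__ void record_warp_and_lane(int *warp_out, int *lane_out)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    warp_out[tid] = threadIdx.x / 32;   // e.g. 0..3 for a 128-thread block
    lane_out[tid] = threadIdx.x % 32;   // position of the thread within its warp
}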
45
– 45 – CSCE 513 Fall 2012 Thread Scheduling Example
SM implements zero-overhead warp scheduling. At any time, only one of the warps is executed by the SM. Warps whose next instruction has its inputs ready for consumption are eligible for execution. Eligible warps are selected for execution on a prioritized scheduling policy. All threads in a warp execute the same instruction when selected.
http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
46
– 46 – CSCE 513 Fall 2012 Control Flow Divergence What happens if you have the following code? if(foo(threadIdx.x)) { do_A(); } else { do_B(); }
47
– 47 – CSCE 513 Fall 2012 Control Flow Divergence
(Figure from Fung et al., MICRO '07: a branch splits the warp into Path A and Path B, which are serialized and then reconverge.)
http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
48
– 48 – CSCE 513 Fall 2012 Control Flow Divergence Nested branches are handled as well if(foo(threadIdx.x)) { if(bar(threadIdx.x)) do_A(); else do_B(); } else do_C();
49
– 49 – CSCE 513 Fall 2012 Control Flow Divergence
(Figure: the nested branches serialize the warp into Path A, Path B, and Path C.)
50
– 50 – CSCE 513 Fall 2012 Control Flow Divergence You don’t have to worry about divergence for correctness (*) You might have to think about it for performance Depends on your branch conditions http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
51
– 51 – CSCE 513 Fall 2012 Control Flow Divergence Performance drops off with the degree of divergence switch(threadIdx.x % N) { case 0:... case 1:... }
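One common mitigation, sketched here rather than taken from the slides: make the branch condition uniform within each warp, for example by branching on threadIdx.x / 32 instead of threadIdx.x, so all 32 threads of a warp take the same path and nothing is serialized:

__global__ void warp_uniform_branch(float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if ((threadIdx.x / 32) % 2 == 0) {   // same outcome for every thread in a warp
        data[i] *= 2.0f;                  // stands in for do_A()
    } else {
        data[i] += 1.0f;                  // stands in for do_B()
    }
}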
52
– 52 – CSCE 513 Fall 2012 Divergence http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
53
– 53 – CSCE 513 Fall 2012 Atomics atomicAdd returns the previous value at a certain address Useful for grabbing variable amounts of data from a list http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
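A sketch of that pattern with hypothetical names: each thread reserves n slots in a shared list by atomically advancing a counter; the old value returned by atomicAdd is the start of the range that thread now owns:

__global__ void grab_slots(int *list, int *list_counter, int n)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    int start = atomicAdd(list_counter, n);   // previous counter value = first index owned
    for (int k = 0; k < n; k++)
        list[start + k] = i;                  // fill the reserved slots; no other thread got them
}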
54
– 54 – CSCE 513 Fall 2012 Compare and Swap

// What a compare-and-swap does, expressed as ordinary (non-atomic) pseudocode:
int compare_and_swap(int* address, int oldval, int newval)
{
    int old_reg_val = *address;
    if(old_reg_val == oldval)
        *address = newval;
    return old_reg_val;
}

http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
55
– 55 – CSCE 513 Fall 2012 Compare and Swap Most general type of atomic Can emulate all others with CAS http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
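For example, an atomic operation the hardware does not provide directly, such as an atomic integer multiply, can be built from atomicCAS with the usual retry loop (a sketch, not a library function):

__device__ int atomicMul(int *address, int val)
{
    int old = *address, assumed;
    do {
        assumed = old;
        // install assumed*val only if the location still holds assumed;
        // atomicCAS returns whatever value was actually there
        old = atomicCAS(address, assumed, assumed * val);
    } while (assumed != old);   // another thread got in first: retry
    return old;                 // like the built-ins, return the previous value
}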
56
– 56 – CSCE 513 Fall 2012 Locks Use very judiciously Always include a max_iter in your spinloop! Decompose your data and your locks http://code.google.com/p/stanford-cs193g-sp2010/wiki/ClassSchedule
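A sketch of those rules, assuming a lock word in global memory (0 = free, 1 = held) and hypothetical helper names. Threads of the same warp contending for one lock can still livelock, which is one more reason to decompose your data and your locks:

__device__ bool try_lock(int *lock, int max_iter)
{
    for (int i = 0; i < max_iter; i++)
        if (atomicCAS(lock, 0, 1) == 0)   // saw 0, wrote 1: this thread now owns the lock
            return true;
    return false;                         // hit max_iter without acquiring; caller must cope
}

__device__ void unlock(int *lock)
{
    __threadfence();                      // make protected writes visible before releasing
    atomicExch(lock, 0);                  // release the lock
}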