
L8: Control Flow CS6963

Administrative
Next assignment on the website
– Description at end of class
– Due Wednesday, Feb. 17, 5PM (done?)
– Use handin program on CADE machines: “handin cs6963 lab2 ”
Mailing lists
– Please use for all questions suitable for the whole class. Feel free to answer your classmates' questions!
– Please use for questions to Protonu and me

Administrative
Grad lab, Linux machines:
– arctic.cs.utah.edu
– gbasin.cs.utah.edu
– redrock.cs.utah.edu
– gobi.cs.utah.edu
– sahara.cs.utah.edu
– mojave.cs.utah.edu

Outline
Recall SIMD Execution Model
– Impact of control flow
Improving Control Flow Performance
– Organize computation into warps with the same control flow path
– Avoid control flow by modifying computation
– Tests for aggregate behavior (warp voting)
Read (a little) about this: Kirk and Hwu, Ch. 5; NVIDIA Programming Guide, Appendix B

A Very Simple Execution Model
No branch prediction
– Just evaluate branch targets and wait for resolution
– But the wait is only a small number of cycles once data is loaded from global memory
No speculation
– Only execute useful instructions

SIMD Execution of Control Flow
Control flow example:
  if (threadIdx.x >= 2) { out[threadIdx.x] += 100; }
  else { out[threadIdx.x] += 10; }
[Figure: processors P0, P1, ..., PM-1, each with its own registers, share one instruction unit and memory; the instruction unit broadcasts “compare threadIdx.x, 2” to all processors]

SIMD Execution of Control Flow
Control flow example (true branch):
  if (threadIdx.x >= 2) { out[threadIdx.x] += 100; }
  else { out[threadIdx.x] += 10; }
The instruction unit broadcasts the true-branch instructions, guarded by the condition code set by the compare:
  /* condition code CC = true branch, set by predicated execution */
  (CC) LD R5, &(out+threadIdx.x)
  (CC) ADD R5, R5, 100
  (CC) ST R5, &(out+threadIdx.x)
[Figure: P0 and P1 are masked off (✗); P2 through PM-1 execute (✓)]

SIMD Execution of Control Flow
Control flow example (false branch):
  if (threadIdx.x >= 2) { out[threadIdx.x] += 100; }
  else { out[threadIdx.x] += 10; }
Then the else-branch instructions are issued, guarded by the complement of the condition code:
  /* possibly predicated using CC */
  (not CC) LD R5, &(out+threadIdx.x)
  (not CC) ADD R5, R5, 10
  (not CC) ST R5, &(out+threadIdx.x)
[Figure: P0 and P1 execute (✓); P2 through PM-1 are masked off (✗)]
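
For concreteness, here is a minimal, self-contained kernel corresponding to the example above. This is a hedged sketch: the kernel name and launch configuration are illustrative, not from the original slides.

  // Sketch: complete kernel for the divergent example above.
  // Threads 0 and 1 of the first warp take the else path; the rest take the
  // if path, so the warp executes both paths serially under a predicate mask.
  __global__ void branch_example(int *out) {
      if (threadIdx.x >= 2) {
          out[threadIdx.x] += 100;
      } else {
          out[threadIdx.x] += 10;
      }
  }
  // Example launch (one block of 32 threads, i.e., a single warp):
  //   branch_example<<<1, 32>>>(d_out);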

Terminology
Divergent paths
– Different threads within a warp take different control flow paths within a kernel function
– N divergent paths in a warp? An N-way divergent warp is serially issued over the N different paths, using a hardware stack and per-thread predication logic to write back results only from the threads taking each divergent path. Performance decreases by about a factor of N.
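
As an illustration (not from the slides), a branch keyed on the lane index modulo 4 produces a 4-way divergent warp, so the four cases are issued one after another:

  // Sketch: 4-way divergence within a warp (illustrative only).
  // Each warp contains lanes with every value of (threadIdx.x % 4),
  // so all four cases are serialized, roughly a 4x slowdown for this region.
  __global__ void four_way_divergence(int *out) {
      int t = threadIdx.x;
      switch (t % 4) {
          case 0:  out[t] += 1; break;
          case 1:  out[t] += 2; break;
          case 2:  out[t] += 4; break;
          default: out[t] += 8; break;
      }
  }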

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
How thread blocks are partitioned
Thread blocks are partitioned into warps
– Thread IDs within a warp are consecutive and increasing
– Warp 0 starts with Thread ID 0
Partitioning is always the same
– Thus you can use this knowledge in control flow
– However, the exact size of warps may change from generation to generation (covered next)
However, DO NOT rely on any ordering between warps
– If there are any dependences between threads, you must __syncthreads() to get correct results
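
A short sketch of how a kernel can use this partitioning. A warp size of 32 is assumed on current hardware; the built-in warpSize variable is the portable way to query it. The kernel and array names are hypothetical.

  // Sketch: deriving warp and lane indices from the thread ID (1D block assumed).
  __global__ void warp_lane_ids(int *warp_id, int *lane_id) {
      int t = threadIdx.x;
      warp_id[t] = t / warpSize;   // which warp within the block
      lane_id[t] = t % warpSize;   // position (lane) within the warp
  }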

First Level of Defense: Avoid Control Flow
Clever example from MPM: add a small constant to the mass so that the velocity calculation never divides by zero.
No need to test for a divide-by-0 error, and the slight delta does not impact results.
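
The MPM code itself is not shown on the slide; the sketch below only illustrates the idea, with hypothetical array names (momentum, mass, velocity) and an assumed epsilon value.

  // Sketch of the MPM trick: names and EPSILON are illustrative assumptions.
  // Instead of branching on mass == 0, add a tiny constant so the division
  // is always safe; nodes with (near-)zero mass get a negligible velocity.
  #define EPSILON 1.0e-10f
  __global__ void grid_velocity(float *velocity, const float *momentum,
                                const float *mass, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) {
          velocity[i] = momentum[i] / (mass[i] + EPSILON);  // no divide-by-zero test
      }
  }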

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Control Flow Instructions
A common case: avoid divergence when the branch condition is a function of thread ID
– Example with divergence: if (threadIdx.x > 2) { }
  This creates two different control paths for threads in a block.
  Branch granularity < warp size; threads 0, 1, and 2 follow a different path than the rest of the threads in the first warp.
– Example without divergence: if (threadIdx.x / WARP_SIZE > 2) { }
  Also creates two different control paths for threads in a block.
  Branch granularity is a whole multiple of warp size; all threads in any given warp follow the same path.
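
A minimal sketch contrasting the two cases; WARP_SIZE, the kernel names, and the work done in each branch are illustrative assumptions, not from the slides.

  // Sketch: branch granularity below vs. at warp granularity.
  #define WARP_SIZE 32

  // Divergent: lanes 0-2 of the first warp take the else path,
  // the remaining lanes take the if path, so that warp runs both paths.
  __global__ void divergent(float *data) {
      int t = threadIdx.x;
      if (t > 2) data[t] *= 2.0f;
      else       data[t] += 1.0f;
  }

  // Non-divergent: every thread of a given warp computes the same value
  // of t / WARP_SIZE, so whole warps take one path or the other.
  __global__ void non_divergent(float *data) {
      int t = threadIdx.x;
      if (t / WARP_SIZE > 2) data[t] *= 2.0f;
      else                   data[t] += 1.0f;
  }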

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
A Vector Parallel Reduction Example (related to the “count 6” example)
Assume an in-place reduction using shared memory
– The original vector is in device global memory
– The shared memory is used to hold a partial sum vector
– Each iteration brings the partial sum vector closer to the final sum
– The final solution will be in element 0

How to Accumulate Result in Shared Memory
In the original implementation (Lecture 1), we collected per-thread results into d_out[threadIdx.x]. In the updated implementation (Lecture 3), we collected per-block results into d_out[0] for a single block, thus serializing the accumulation computation on the GPU.
Suppose we want to exploit some parallelism in this accumulation part, which will be particularly important to performance as we scale the number of threads. A common idiom for reduction computations is to use a tree-structured results-gathering phase, where independent threads collect their results in parallel. Assume SIZE = 16 and BLOCKSIZE (elements computed per thread) = 4.
Your job is to write this version of the reduction in CUDA. You can start with the sample code, but adjust the problem size to be larger:
  #define SIZE 256
  #define BLOCKSIZE 32

Recall: Serialized Gathering of Results on GPU for “Count 6”
Per-thread results (Lecture 1):
  __global__ void compute(int *d_in, int *d_out) {
      d_out[threadIdx.x] = 0;
      for (int i = 0; i < SIZE/BLOCKSIZE; i++) {
          int val = d_in[i*BLOCKSIZE + threadIdx.x];
          d_out[threadIdx.x] += compare(val, 6);   // compare() from the course sample code
      }
  }
Serialized accumulation (Lecture 3):
  __global__ void compute(int *d_in, int *d_out, int *d_sum) {
      d_out[threadIdx.x] = 0;
      for (int i = 0; i < SIZE/BLOCKSIZE; i++) {
          int val = d_in[i*BLOCKSIZE + threadIdx.x];
          d_out[threadIdx.x] += compare(val, 6);
      }
      __syncthreads();
      if (threadIdx.x == 0) {
          for (int i = 0; i < BLOCKSIZE; i++)
              *d_sum += d_out[i];
      }
  }
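
The host-side setup is not on the slide; the fragment below is a hedged sketch of how the single-block serialized version might be launched, with allocation sizes and variable names assumed.

  // Sketch (host code, e.g., inside main(), after SIZE and BLOCKSIZE are defined):
  int *d_in, *d_out, *d_sum;
  cudaMalloc(&d_in,  SIZE * sizeof(int));
  cudaMalloc(&d_out, BLOCKSIZE * sizeof(int));
  cudaMalloc(&d_sum, sizeof(int));
  cudaMemset(d_sum, 0, sizeof(int));
  // ... copy input data into d_in with cudaMemcpy ...
  compute<<<1, BLOCKSIZE>>>(d_in, d_out, d_sum);   // one block of BLOCKSIZE threads
  int h_sum;
  cudaMemcpy(&h_sum, d_sum, sizeof(int), cudaMemcpyDeviceToHost);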

Tree-Structured Computation
[Figure: out[0], out[1], out[2], out[3]; first step: out[0] += out[1] and out[2] += out[3] in parallel; second step: out[0] += out[2]]
Tree-structured results-gathering phase, where independent threads collect their results in parallel. Assume SIZE = 16 and BLOCKSIZE (elements computed per thread) = 4.

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
A possible implementation for just the reduction
  unsigned int t = threadIdx.x;
  for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {
      __syncthreads();
      if (t % (2*stride) == 0)
          d_out[t] += d_out[t+stride];
  }

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Vector Reduction with Branch Divergence
[Figure: array elements across the top, iterations down the side; in each iteration only the even-strided threads (Thread 0, Thread 2, Thread 4, Thread 6, Thread 8, ...) perform an addition, so active threads are interleaved with idle ones within each warp]

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Some Observations
In each iteration, two control flow paths will be sequentially traversed for each warp
– Threads that perform addition and threads that do not
– Threads that do not perform addition may cost extra cycles depending on the implementation of divergence
No more than half of the threads will be executing at any time
– All odd-index threads are disabled right from the beginning!
– On average, fewer than ¼ of the threads will be activated for all warps over time.
– After the 5th iteration, entire warps in each block will be disabled: poor resource utilization, but no divergence. This can go on for a while, up to 4 more iterations (512/32 = 16 = 2^4), where each iteration has only one thread activated per warp, until all warps retire.

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
What’s Wrong?
  unsigned int t = threadIdx.x;
  for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {
      __syncthreads();
      if (t % (2*stride) == 0)
          d_out[t] += d_out[t+stride];
  }
BAD: Divergence due to interleaved branch decisions

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
A better implementation
  unsigned int t = threadIdx.x;
  for (unsigned int stride = blockDim.x >> 1; stride >= 1; stride >>= 1) {
      __syncthreads();
      if (t < stride)
          d_out[t] += d_out[t+stride];
  }

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
[Figure: the improved reduction pattern; contiguous threads 0, 1, 2, 3, … add elements from the upper half into the lower half in each iteration, so there is no divergence until fewer than 16 sub-sums remain]

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
A shared memory implementation
Assume we have already loaded the array into
  __shared__ float partialSum[];
  unsigned int t = threadIdx.x;
  for (unsigned int stride = blockDim.x >> 1; stride >= 1; stride >>= 1) {
      __syncthreads();
      if (t < stride)
          partialSum[t] += partialSum[t+stride];
  }
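
Putting the pieces together, one possible complete kernel follows. This is a hedged sketch: the kernel name, the fixed shared-memory size, the per-block output array, and the power-of-two block-size assumption are mine, not from the slides.

  // Sketch: full block-level sum reduction using shared memory.
  // Assumes blockDim.x == BLOCKDIM and a power-of-two block size.
  #define BLOCKDIM 256
  __global__ void block_sum(const float *d_in, float *d_block_sums) {
      __shared__ float partialSum[BLOCKDIM];
      unsigned int t = threadIdx.x;
      // Each thread loads one element of this block's slice into shared memory.
      partialSum[t] = d_in[blockIdx.x * blockDim.x + t];
      // Tree reduction: the contiguous lower-half threads stay active, so whole
      // warps drop out together and divergence occurs only in the last warp.
      for (unsigned int stride = blockDim.x >> 1; stride >= 1; stride >>= 1) {
          __syncthreads();
          if (t < stride)
              partialSum[t] += partialSum[t + stride];
      }
      // Thread 0 holds the block's sum after the final iteration.
      if (t == 0)
          d_block_sums[blockIdx.x] = partialSum[0];
  }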

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Some Observations About the New Implementation
Only the last 5 iterations will have divergence
Entire warps will be shut down as iterations progress
– For a 512-thread block, 4 iterations to shut down all but one warp in each block
– Better resource utilization; will likely retire warps, and thus blocks, faster
Recall: no bank conflicts either

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Predicated Execution Concept
  <p1> LDR r1, r2, 0
– If p1 is TRUE, the instruction executes normally
– If p1 is FALSE, the instruction is treated as a NOP

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Predication Example
  :
  if (x == 10)
      c = c + 1;
  :
becomes
  :
  LDR r5, X
  p1 <- r5 eq 10
  <p1> LDR r1 <- C
  <p1> ADD r1, r1, 1
  <p1> STR r1 -> C
  :

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Predication can be very helpful for if-else
[Figure: a control-flow graph where block A branches to B or C, which merge at D; with predication the straight-line sequence A, B, C, D is issued to all threads, with B and C predicated on opposite conditions]

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
If-else example
Before scheduling:
  :
  p1, p2 <- r5 eq 10
  <p1> inst 1 from B
  <p1> inst 2 from B
  :
  <p2> inst 1 from C
  <p2> inst 2 from C
  :
After scheduling (B and C interleaved):
  :
  p1, p2 <- r5 eq 10
  <p1> inst 1 from B
  <p2> inst 1 from C
  <p1> inst 2 from B
  <p2> inst 2 from C
  :
The cost is that extra instructions will be issued each time the code is executed. However, there is no branch divergence.

© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
Instruction Predication in G80
Comparison instructions set condition codes (CC)
Instructions can be predicated to write results only when the CC meets a criterion (CC != 0, CC >= 0, etc.)
Compiler tries to predict if a branch condition is likely to produce many divergent warps, and may replace branches with instruction predication
– If guaranteed not to diverge: only predicates if < 4 instructions
– If not guaranteed: only predicates if < 7 instructions
ALL predicated instructions take execution cycles
– Those with false conditions don’t write their output, or invoke memory loads and stores
– Saves branch instructions, so can be cheaper than serializing divergent paths (for a small # of instructions)

Warp Vote Functions (Compute Capability >= 1.2)
Can test whether a condition evaluates to the same value on all threads in a warp
– int __all(int predicate): evaluates predicate for all threads of a warp and returns non-zero iff predicate evaluates to non-zero for all of them.
– int __any(int predicate): evaluates predicate for all threads of a warp and returns non-zero iff predicate evaluates to non-zero for any of them.

Using Warp Vote Functions
Can tailor code for when none/all take a branch
– Eliminates the overhead of branching and predication
Particularly useful for codes where most threads will be the same
– Example 1: looking for something unusual in image data
– Example 2: dealing with boundary conditions
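
As an illustration of Example 2, here is a hedged sketch: the kernel, the stencil, and the boundary handling are hypothetical. The slide-era __all() intrinsic is used to match the text (newer CUDA releases prefer the synchronized form __all_sync).

  // Sketch: skip the boundary-handling branch for interior-only warps.
  // If __all() reports that every thread in the warp is interior, the whole
  // warp takes the fast path and never issues the boundary-check code.
  __global__ void stencil_like(float *out, const float *in, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      int interior = (i > 0) && (i < n - 1);
      if (__all(interior)) {
          // Fast path: no boundary checks needed for this warp.
          out[i] = 0.5f * (in[i - 1] + in[i + 1]);
      } else {
          // Slow path: only warps that actually touch the boundary run this.
          if (i < n) {
              float left  = (i > 0)     ? in[i - 1] : 0.0f;
              float right = (i < n - 1) ? in[i + 1] : 0.0f;
              out[i] = 0.5f * (left + right);
          }
      }
  }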

Summary of Lecture
Impact of control flow on performance
– Due to the SIMD execution model for threads
Execution model / code generated
– Stall based on CC value (for long instruction sequences)
– Predicated code (for short instruction sequences)
Strategies for avoiding control flow
– Eliminate divide-by-zero test (MPM)
– Warp vote functions
Group similar control flow paths together into warps
– Example: “tree” reduction

Next Time
Semester project description
Two assignments
– Next programming assignment
– Project proposal