L14: CUDA, cont. Execution Model and Memory Hierarchy October 27, 2011.

Programming Assignment 3, Due 11:59PM Nov. 7
Purpose:
- Synthesize the concepts you have learned so far: data parallelism, locality, and task parallelism
- Image processing computation adapted from a real application
Turn in using the handin program on CADE machines:
- handin cs4961 proj3
- Include code + README
Three parts:
1. Locality optimization (50%): Improve performance using locality optimizations only (no parallelism)
2. Data parallelism (20%): Improve performance using locality optimizations plus data parallel constructs in OpenMP
3. Task parallelism (30%): Code will not be faster by adding task parallelism, but you can improve time to first result. Use task parallelism in conjunction with data parallelism per my message from the previous assignment.

Here’s the code

  /* initialize */
  … th[i][j] = 0 …

  /* compute array convolution */
  for (m = 0; m < IMAGE_NROWS - TEMPLATE_NROWS + 1; m++) {
    for (n = 0; n < IMAGE_NCOLS - TEMPLATE_NCOLS + 1; n++) {
      for (i = 0; i < TEMPLATE_NROWS; i++) {
        for (j = 0; j < TEMPLATE_NCOLS; j++) {
          if (mask[i][j] != 0) {
            th[m][n] += image[i+m][j+n];
          }
        }
      }
    }
  }

  /* scale array with bright count and template bias */
  … th[i][j] = th[i][j] * bc - bias;
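For Part 2, a minimal sketch of the OpenMP data-parallel direction, assuming the same arrays and constants as the code above. The array sizes, element types, and the choice of parallelizing the outer (m) loop with a private clause are illustrative assumptions, not the required solution:

  #include <omp.h>

  #define IMAGE_NROWS    1024          /* illustrative sizes only */
  #define IMAGE_NCOLS    1024
  #define TEMPLATE_NROWS 16
  #define TEMPLATE_NCOLS 16

  void convolve(float image[IMAGE_NROWS][IMAGE_NCOLS],
                float mask[TEMPLATE_NROWS][TEMPLATE_NCOLS],
                float th[IMAGE_NROWS][IMAGE_NCOLS],
                float bc, float bias)
  {
    int m, n, i, j;

    /* each (m,n) output element is independent, so the outer loop can be
       divided across threads; n, i, and j must be private to each thread */
    #pragma omp parallel for private(n, i, j)
    for (m = 0; m < IMAGE_NROWS - TEMPLATE_NROWS + 1; m++) {
      for (n = 0; n < IMAGE_NCOLS - TEMPLATE_NCOLS + 1; n++) {
        float sum = 0.0f;
        for (i = 0; i < TEMPLATE_NROWS; i++) {
          for (j = 0; j < TEMPLATE_NCOLS; j++) {
            if (mask[i][j] != 0)
              sum += image[i+m][j+n];
          }
        }
        /* scale with bright count and template bias */
        th[m][n] = sum * bc - bias;
      }
    }
  }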

Things to think about
- Beyond the assigned work, how does parallelization affect the profitability of locality optimizations?
- What happens if you make the IMAGE SIZE larger (1024x1024 or even 2048x2048)? You'll need to use "unlimit stacksize" to run these.
- What happens if you repeat this experiment on a different architecture with a different memory hierarchy (in particular, a smaller L2 cache)?
- How do SSE or other multimedia extensions affect performance and optimization selection?

Reminder About Final Project
Purpose:
- A chance to dig in deeper into a parallel programming model and explore concepts
- Research experience: freedom to pick your own problem and solution, get feedback; thrill of victory, agony of defeat
- Communication skills: present results to communicate technical ideas
Write a non-trivial parallel program that combines two parallel programming languages/models. In some cases, just do two separate implementations.
- OpenMP + SSE
- OpenMP + CUDA (but need to do this in separate parts of the code)
- MPI + OpenMP
- MPI + SSE
- MPI + CUDA
Present results in a poster session on the last day of class.

Example Projects
Look in the textbook or look online:
- Chapter 6: N-body, tree search
- Chapters 3 and 5: sorting
- Image and signal processing algorithms
- Graphics algorithms
- Stencil computations
- FFT
- Graph algorithms
- Other domains…
Must change it up in some way from the text, e.g., a different language/strategy.

Details and Schedule
- 2-3 person projects; let me know if you need help finding a team
- OK to combine with a project for another class, but expectations will be higher and the professors will discuss
- Each group must talk to me about their project between now and November 10: before/after class, during office hours, or by appointment. Bring a written description of the project (slides are fine); it must include your plan and how the work will be shared across the team.
- I must sign off by November 22 (in writing)
- Dry run on December 6
- Poster presentation on December 8

Strategy
A lesson in research:
- Big vision is great, but make sure you have an evolutionary plan where success comes in stages
- Sometimes even the best students get too ambitious and struggle; parallel programming is hard
- Some of you will pick problems that don't speed up well and we'll need to figure out what to do
- There are many opportunities to recover if you have problems: I'll check in with you a few times and redirect if needed, and feel free to ask for help
- Optional final report can boost your grade, particularly if things are not working on the last day of classes

Shared Memory Architecture 2: Sun UltraSPARC T2 Niagara ("water")
[Block diagram: eight processor cores, each with an FPU, connected through a full crossbar interconnect to four memory controllers, each paired with two 512KB L2 cache banks.]

More on Niagara
- Target applications are server-class, business operations
- Characterization: Floating point? Array-based computation?
- Support for the VIS 2.0 SIMD instruction set
- 64-way multithreading (8-way per processor, 8 processors)

The 2nd fastest computer in the world
- What is its name? Tianhe-1A
- Where is it located? National Supercomputer Center in Tianjin
- How many processors does it have? ~21,000 processor chips (14,000 CPUs and 7,000 Tesla Fermis)
- What kind of processors? CPUs plus NVIDIA Fermi GPUs
- How fast is it? About 2.5 Petaflops/second, i.e., 2.5 quadrillion (2.5 x 10^15) operations per second

Outline
- Reminder of CUDA Architecture
- Execution Model (brief mention of control flow)
- Heterogeneous Memory Hierarchy
  - Locality through data placement
  - Maximizing bandwidth through global memory coalescing
  - Avoiding memory bank conflicts
- Tiling and its Applicability to CUDA Code Generation
This lecture includes slides provided by Wen-mei Hwu (UIUC), David Kirk (NVIDIA), and Austin Robison (NVIDIA).

Reading
- David Kirk and Wen-mei Hwu manuscript (in progress): …from-NVIDIA-and-Prof-Wen-mei-Hwu-pdf.html
- CUDA Manual, particularly Chapters 2 and 4 (download from nvidia.com/cudazone)
- Nice series from Dr. Dobbs Journal by Rob Farber (link on class website)

Hardware Implementation: A Set of SIMD Multiprocessors
- A device has a set of multiprocessors
- Each multiprocessor is a set of 32-bit processors with a Single Instruction Multiple Data architecture (shared instruction unit)
- At each clock cycle, a multiprocessor executes the same instruction on a group of threads called a warp
- The number of threads in a warp is the warp size
[Figure: a device containing Multiprocessors 1..N, each with Processors 1..M sharing a single Instruction Unit.]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007, ECE 498AL, University of Illinois, Urbana-Champaign

Hardware Execution Model
I. SIMD execution of warpsize = M threads (from a single block)
   - Result is a set of instruction streams roughly equal to the number of threads in a block divided by the warp size
II. Multithreaded execution across different instruction streams within a block
   - Also possibly across different blocks if there are more blocks than SMs
III. Each block mapped to a single SM
   - No direct interaction across SMs
[Figure: a device with Multiprocessors 1..N sharing device memory; each multiprocessor has Processors 1..M with per-processor registers, plus shared memory, an instruction unit, a constant cache, and a texture cache.]

Example SIMD Execution
"Count 6" kernel function:

  d_out[threadIdx.x] = 0;
  for (int i=0; i<SIZE/BLOCKSIZE; i++) {
    int val = d_in[i*BLOCKSIZE + threadIdx.x];
    d_out[threadIdx.x] += compare(val, 6);
  }

[Figure: processors P0..P(M-1), each with its own registers, sharing one instruction unit and memory.]

Example SIMD Execution ("Count 6" kernel function, continued)
First statement: d_out[threadIdx.x] = 0;

  LDC 0, &(d_out + threadIdx)

Each "core" initializes data from an address based on its own threadIdx.
[Figure: each processor computes &d_out plus its own threadIdx.]

Example SIMD Execution ("Count 6" kernel function, continued)
Loop initialization: /* int i=0; */

  LDC 0, R3

Each "core" initializes its own R3.

Example SIMD Execution ("Count 6" kernel function, continued)
Address computation: /* i*BLOCKSIZE + threadIdx */

  LDC BLOCKSIZE, R2
  MUL R1, R3, R2
  ADD R4, R1, R0

Each "core" performs the same operations on its own registers. Etc.
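For reference, a self-contained sketch of the "Count 6" kernel the preceding slides walk through. Only the device loop appears on the slides; the compare() helper, the SIZE/BLOCKSIZE values, and the host code below are illustrative assumptions:

  #include <cstdio>

  #define SIZE      4096
  #define BLOCKSIZE 64

  __device__ int compare(int a, int b) {
    return (a == b) ? 1 : 0;                 // 1 if this element equals the target value
  }

  __global__ void count6(int *d_in, int *d_out) {
    d_out[threadIdx.x] = 0;
    // each thread strides through the input, BLOCKSIZE elements apart
    for (int i = 0; i < SIZE/BLOCKSIZE; i++) {
      int val = d_in[i*BLOCKSIZE + threadIdx.x];
      d_out[threadIdx.x] += compare(val, 6);
    }
  }

  int main() {
    int h_in[SIZE], h_out[BLOCKSIZE], total = 0;
    for (int i = 0; i < SIZE; i++) h_in[i] = i % 10;

    int *d_in, *d_out;
    cudaMalloc(&d_in, SIZE * sizeof(int));
    cudaMalloc(&d_out, BLOCKSIZE * sizeof(int));
    cudaMemcpy(d_in, h_in, SIZE * sizeof(int), cudaMemcpyHostToDevice);

    count6<<<1, BLOCKSIZE>>>(d_in, d_out);   // one block of BLOCKSIZE threads

    cudaMemcpy(h_out, d_out, BLOCKSIZE * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < BLOCKSIZE; i++) total += h_out[i];   // final reduction on the host
    printf("count of 6s: %d\n", total);

    cudaFree(d_in); cudaFree(d_out);
    return 0;
  }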

SM Warp Scheduling
- SM hardware implements zero-overhead warp scheduling
  - Warps whose next instruction has its operands ready for consumption are eligible for execution
  - Eligible warps are selected for execution on a prioritized scheduling policy
  - All threads in a warp execute the same instruction when selected
- 4 clock cycles are needed to dispatch the same instruction for all threads in a warp on G80
  - If one global memory access is needed for every 4 instructions, a minimum of 13 warps is needed to fully tolerate 200-cycle memory latency (each warp covers 4 instructions x 4 cycles = 16 cycles of issue between accesses, and 200 / 16 ≈ 13)
[Figure: the SM multithreaded warp scheduler interleaving instructions from different warps over time, e.g., warp 1 instruction 42, warp 3 instruction 95, warp 8 instruction 11, warp 3 instruction 96, ...]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007, ECE 498AL, University of Illinois, Urbana-Champaign

SIMD Execution of Control Flow
Control flow example:

  if (threadIdx.x >= 2) {
    out[threadIdx.x] += 100;
  }
  else {
    out[threadIdx.x] += 10;
  }

First, every processor evaluates the branch condition:

  compare threadIdx, 2

[Figure: processors P0..P(M-1) all executing the compare from the shared instruction unit.]

SIMD Execution of Control Flow (continued)
Then the "if" path executes, possibly predicated using a condition code (CC), so only processors where the condition holds write results:

  (CC) LD R5, &(out + threadIdx.x)
  (CC) ADD R5, R5, 100
  (CC) ST R5, &(out + threadIdx.x)

[Figure: processors with threadIdx.x >= 2 are active (✔); the others are idle (X).]

SIMD Execution of Control Flow (continued)
Then the "else" path executes under the complementary predicate:

  (not CC) LD R5, &(out + threadIdx.x)
  (not CC) ADD R5, R5, 10
  (not CC) ST R5, &(out + threadIdx.x)

[Figure: now processors with threadIdx.x < 2 are active (✔) and the others are idle (X).]

Terminology
Divergent paths:
- Different threads within a warp take different control flow paths within a kernel function
- N divergent paths in a warp?
  - An N-way divergent warp is serially issued over the N different paths, using a hardware stack and per-thread predication logic to write back results only from the threads taking each divergent path
  - Performance decreases by about a factor of N
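A hedged illustration of the difference (not from the slides): in the first kernel the branch condition mixes both paths inside every warp, so each warp is 2-way divergent; in the second the condition is warp-aligned, so each warp takes a single path and no serialization occurs. WARP_SIZE = 32 is assumed here.

  #define WARP_SIZE 32

  __global__ void divergent(float *out) {
    if (threadIdx.x % 2 == 0)                 // even and odd lanes mix within each warp
      out[threadIdx.x] += 100.0f;
    else
      out[threadIdx.x] += 10.0f;
  }

  __global__ void warpAligned(float *out) {
    if ((threadIdx.x / WARP_SIZE) % 2 == 0)   // whole warps take the same path
      out[threadIdx.x] += 100.0f;
    else
      out[threadIdx.x] += 10.0f;
  }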

Hardware Implementation: Memory Architecture
- The local, global, constant, and texture spaces are regions of device memory (DRAM)
- Each multiprocessor has:
  - A set of 32-bit registers per processor
  - On-chip shared memory, where the shared memory space resides
  - A read-only constant cache, to speed up access to the constant memory space
  - A read-only texture cache, to speed up access to the texture memory space
  - A data cache (Fermi only)
- Global, constant, and texture memories
[Figure: the same device/multiprocessor diagram as before, with the Fermi-only data cache added.]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007, ECE 498AL, University of Illinois, Urbana-Champaign

Programmer's View: Memory Spaces
Each thread can:
- Read/write per-thread registers
- Read/write per-thread local memory
- Read/write per-block shared memory
- Read/write per-grid global memory
- Read only per-grid constant memory
- Read only per-grid texture memory
The host can read/write global, constant, and texture memory.
[Figure: a grid containing blocks (0,0) and (1,0), each with shared memory and threads holding their own registers and local memory; global, constant, and texture memory are shared by the whole grid and accessible from the host.]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007, ECE 498AL, University of Illinois, Urbana-Champaign
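A brief sketch (not from the slides) of how these spaces appear in CUDA source. The variable names, sizes, and the assumed launch shape (at most 256 threads total, blocks of at most 64 threads) are illustrative:

  __constant__ float c_coeffs[16];      // per-grid constant memory: read-only in kernels
  __device__   float g_result[256];     // per-grid global memory: read/write from any thread

  __global__ void spacesDemo(const float *d_in) {
    __shared__ float s_tile[64];        // per-block shared memory
    int gid = blockIdx.x * blockDim.x + threadIdx.x;

    s_tile[threadIdx.x] = d_in[gid];    // stage a value through shared memory
    __syncthreads();

    float v = s_tile[threadIdx.x] * c_coeffs[threadIdx.x % 16];  // v lives in a register
    g_result[gid] = v;                  // assumes the launch has at most 256 threads total
  }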

Targets of Memory Hierarchy Optimizations
- Reduce memory latency: the latency of a memory access is the time (usually in cycles) between a memory request and its completion
- Maximize memory bandwidth: bandwidth is the amount of useful data that can be retrieved over a time interval
- Manage overhead: the cost of performing an optimization (e.g., copying) should be less than the anticipated gain
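One common way to quantify the bandwidth target is effective bandwidth: bytes read plus bytes written, divided by elapsed time. A small sketch (assumed, not from the slides) using CUDA events; copyKernel and N are placeholders:

  __global__ void copyKernel(float *dst, const float *src, int N) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) dst[i] = src[i];
  }

  float measureBandwidthGB(float *d_dst, const float *d_src, int N) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    copyKernel<<<(N + 255) / 256, 256>>>(d_dst, d_src, N);   // reads N floats, writes N floats
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);                  // elapsed time in milliseconds

    double bytes = 2.0 * N * sizeof(float);                  // read + write traffic
    return (float)(bytes / (ms * 1.0e6));                    // GB/s (ms -> s and bytes -> GB)
  }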

Global Memory Accesses
- Each thread issues memory accesses to data types of varying sizes, perhaps as small as 1-byte entities
- Given an address to load or store, memory returns/updates "segments" of either 32 bytes, 64 bytes, or 128 bytes
- Maximizing bandwidth: operate on an entire 128-byte segment for each memory transfer
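A hedged illustration (not from the slides) of what this means in a kernel: in the first version a half-warp touches 16 consecutive floats, i.e. one 64-byte segment; in the second a stride spreads the same 16 accesses over many segments, wasting most of each transfer.

  __global__ void coalesced(float *out, const float *in) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;    // adjacent threads -> adjacent addresses
    out[i] = in[i];
  }

  __global__ void strided(float *out, const float *in, int stride) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;   // adjacent threads far apart
    out[i] = in[i];
  }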

Understanding Global Memory Accesses
Memory protocol for compute capability 1.2* (CUDA Manual):
1. Start with the memory request by the smallest-numbered thread. Find the memory segment that contains the address (32-, 64-, or 128-byte segment, depending on data type)
2. Find other active threads requesting addresses within that segment and coalesce
3. Reduce the transaction size if possible
4. Access memory and mark those threads as "inactive"
5. Repeat until all threads in the half-warp are serviced
*Includes Tesla and GTX platforms

Memory Layout of a Matrix in C
[Figure: a 4x4 matrix M laid out linearly in memory in row-major order; threads T1-T4 access elements over time periods 1 and 2 along the access direction in the kernel code, showing which layouts let a half-warp's simultaneous accesses fall within the same memory segment.]
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
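A hedged example (assumed, not from the slides) tying the row-major layout to coalescing. Md is a Width x Width matrix of floats in row-major order. In the first kernel, at each loop iteration the threads of a half-warp read consecutive elements of one row (consecutive addresses); in the second, each thread walks its own row, so simultaneous accesses are Width elements apart.

  __global__ void sumColumnsCoalesced(float *Md, float *sums, int Width) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float s = 0.0f;
    for (int row = 0; row < Width; row++)
      s += Md[row * Width + col];        // adjacent threads -> adjacent addresses
    sums[col] = s;
  }

  __global__ void sumRowsUncoalesced(float *Md, float *sums, int Width) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    float s = 0.0f;
    for (int col = 0; col < Width; col++)
      s += Md[row * Width + col];        // adjacent threads are Width floats apart
    sums[row] = s;
  }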

Now Let's Look at Shared Memory
Common programming pattern (5.1.2 of the CUDA manual):
- Load data into shared memory
- Synchronize (if necessary)
- Operate on data in shared memory
- Synchronize (if necessary)
- Write intermediate results to global memory
- Repeat until done
[Figure: shared memory staging data to and from global memory.]
Familiar concept???
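A minimal sketch of this pattern (assumed, not from the slides): each block stages a tile of the input into shared memory, operates on it there, and writes results back to global memory. The kernel reverses each tile, so threads read values loaded by other threads and the synchronization is genuinely needed. TILE and the assumed launch (blockDim.x == TILE) are illustrative.

  #define TILE 128

  __global__ void reverseTiles(float *d_out, const float *d_in) {
    __shared__ float tile[TILE];
    int gid = blockIdx.x * TILE + threadIdx.x;

    tile[threadIdx.x] = d_in[gid];               // load data into shared memory
    __syncthreads();                             // wait until the whole tile is loaded

    // operate: read an element loaded by a different thread,
    // then write the result back to global memory
    d_out[gid] = tile[TILE - 1 - threadIdx.x];
  }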

Mechanics of Using Shared Memory
- __shared__ type qualifier required
- Must be allocated from a __global__/__device__ function, or as "extern"
Examples:

  extern __shared__ float d_s_array[];   /* a form of dynamic allocation */

  /* MEMSIZE is the size of the per-block shared memory */
  __host__ void outerCompute() {
    compute<<<gs, bs, MEMSIZE>>>();      /* third launch parameter gives the dynamic shared memory size */
  }
  __global__ void compute() {
    d_s_array[i] = …;
  }

  __global__ void compute2() {
    __shared__ float d_s_array[M];
    /* create or copy from global memory */
    d_s_array[j] = …;
    /* write result back to global memory */
    d_g_array[j] = d_s_array[j];
  }

Bandwidth to Shared Memory: Parallel Memory Accesses
- Consider each thread accessing a different location in shared memory
- Bandwidth is maximized if each one is able to proceed in parallel
- Hardware to support this: banked memory, where each bank can support an access on every memory cycle

Bank Addressing Examples
- No bank conflicts: linear addressing, stride == 1 (thread i accesses bank i)
- No bank conflicts: random 1:1 permutation (each thread accesses a distinct bank)
[Figure: threads 0-15 mapped one-to-one onto banks 0-15 in both cases.]
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign

Bank Addressing Examples
- 2-way bank conflicts: linear addressing, stride == 2
- 8-way bank conflicts: linear addressing, stride == 8
[Figure: with stride 2, pairs of threads map to the same even-numbered bank; with stride 8, eight threads map to each of banks 0 and 8.]
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign

How Addresses Map to Banks on G80 (older technology)
- Each bank has a bandwidth of 32 bits per clock cycle
- Successive 32-bit words are assigned to successive banks
- G80 has 16 banks, so bank = address % 16
- 16 is also the size of a half-warp: no bank conflicts between different half-warps, only within a single half-warp
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
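A quick sketch (assumed, not from the slides) that prints which G80 bank each thread of a half-warp would hit for a few strides of 32-bit words, using bank = word index % 16 from above:

  #include <stdio.h>

  #define NUM_BANKS      16
  #define HALF_WARP_SIZE 16

  int main() {
    int strides[] = {1, 2, 8};
    for (int s = 0; s < 3; s++) {
      printf("stride %d:", strides[s]);
      for (int t = 0; t < HALF_WARP_SIZE; t++)
        printf(" %2d", (t * strides[s]) % NUM_BANKS);   /* bank hit by thread t */
      printf("\n");   /* stride 1 -> 16 distinct banks; stride 2 -> 2-way; stride 8 -> 8-way */
    }
    return 0;
  }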

Shared Memory Bank Conflicts
- Shared memory is as fast as registers if there are no bank conflicts
- The fast case:
  - If all threads of a half-warp access different banks, there is no bank conflict
  - If all threads of a half-warp access the identical address, there is no bank conflict (broadcast)
- The slow case:
  - Bank conflict: multiple threads in the same half-warp access the same bank
  - The accesses must be serialized
  - Cost = max # of simultaneous accesses to a single bank
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE 498AL, University of Illinois, Urbana-Champaign
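A hedged illustration (not from the slides) of a common fix for the slow case when walking columns of a 2D shared array: pad each row by one element so a column walk hits a different bank on every row. A 16-bank layout (as on G80) and the transpose use case are assumptions here.

  #define TILE 16

  __global__ void transposeTile(float *out, const float *in, int width) {
    /* TILE x TILE would put every element of a column in the same bank;
       the +1 padding shifts successive rows to successive banks */
    __shared__ float tile[TILE][TILE + 1];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;

    tile[threadIdx.y][threadIdx.x] = in[y * width + x];    /* row-wise store, conflict-free */
    __syncthreads();

    int tx = blockIdx.y * TILE + threadIdx.x;
    int ty = blockIdx.x * TILE + threadIdx.y;
    out[ty * width + tx] = tile[threadIdx.x][threadIdx.y]; /* column-wise read; no conflicts thanks to padding */
  }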

Summary of Lecture
A deeper probe of performance issues:
- Execution model
- Control flow
- Heterogeneous memory hierarchy
- Locality and bandwidth
- Tiling for CUDA code generation