CS6963 L1: Introduction to CS6963 and CUDA January 20, 2010.


Outline of Today's Lecture
- Introductory remarks
- A brief motivation for the course
- Course plans
- Introduction to CUDA
  - Motivation for the programming model
  - Presentation of syntax
  - Simple working example (also on the website)

Reading:
- CUDA 2.3 Manual, particularly Chapters 2 and 4
- Massively Multicore Programming book, Chapters 1 and 2

This lecture includes slides provided by Wen-mei Hwu (UIUC) and David Kirk (NVIDIA).

Last week: ACM Principles and Practice of Parallel Programming conference (Bangalore)
[Photo. Recognize this person?]

CS6963: Parallel Programming for GPUs, MW 10:45-12:05, MEB 3147
Website:
Mailing lists: one for open discussions, one for communicating with the instructors
Professor: Mary Hall, MEB 3466. Office hours: T 11:00-11:40 AM, W 12:20-1:00 PM (usually), or by appointment
Teaching Assistant: Protonu Basu, MEB 3419. Office hours: Th 10:00 AM-12:00 PM

Administrative
- Mailing lists: one for open discussions, one for communicating with the instructors
- First assignment due Wednesday, January 27, 5PM
  - Your assignment is simply to add and multiply two vectors, to get you started writing CUDA programs. A regression test is provided (in driver.c), the addition and multiplication are coded into functions, and a build file (CMakeLists.txt) compiles and links everything.
  - Use handin on the CADE machines for all assignments: "handin cs6963 lab1 "
  - The file should be a gzipped tar file of the CUDA program and output

Course Objectives
- Learn how to program "graphics" processors for general-purpose multi-core computing applications
  - Learn how to think in parallel and write correct parallel programs
  - Achieve performance and scalability through understanding of architecture and software mapping
- Significant hands-on programming experience
  - Develop real applications on real hardware
- Discuss the current parallel computing context
  - What are the drivers that make this course timely?
  - Contemporary programming models and architectures, and where the field is going

Outcomes from Last Year's Course
- Paper and poster at the Symposium on Application Accelerators for High-Performance Computing (late April/early May submission deadline)
  - Poster: "Assembling Large Mosaics of Electron Microscope Images using GPU," Kannan Venkataraju, Mark Kim, Dan Gerszewski, James R. Anderson, and Mary Hall
  - Paper: "GPU Acceleration of the Generalized Interpolation Material Point Method," Wei-Fan Chiang, Michael DeLisi, Todd Hummel, Tyler Prete, Kevin Tew, Mary Hall, Phil Wallstedt, and James Guilkey
- Poster at the NVIDIA Research Summit (Poster #47): "Solving Eikonal Equations on Triangulated Surface Mesh with CUDA," Zhisong Fu, University of Utah (United States)
- Posters at the Industrial Advisory Board meeting
- Integrated into Masters theses and PhD dissertations
- Jobs and internships

Grading Criteria
- Homeworks and mini-projects (4): 30%
- Midterm test: 15%
- Project proposal: 10%
- Project design review: 10%
- Project presentation/demo: 15%
- Project final report: 20%

Primary Grade: Team Projects
- Some logistical issues:
  - 2-3 person teams
  - Projects will start in late February
- Three parts: (1) proposal; (2) design review; (3) final report and demo
- Application code:
  - I will suggest a sample project in an area of future research interest.
  - Alternative applications must be approved by me (start early).

Collaboration Policy
- I encourage discussion and exchange of information between students, but the final work must be your own.
  - Do not copy code, tests, assignments or written reports.
  - Do not allow others to copy your code, tests, assignments or written reports.

Lab Information
- Primary: "lab5" and "lab6" in WEB (Windows machines). Accounts are supposed to be set up for all who are registered.
- Secondary: Tesla S1070 system in SCI (Linux) [soon]
- Tertiary: Until we get to timing experiments, assignments can be completed on any machine running CUDA 2.3 (Linux, Windows, Mac OS)

A Few Words About the Tesla System
- NVIDIA press release, July 31, 2008: "NVIDIA Recognizes University of Utah as a CUDA Center of Excellence. University of Utah is the latest in a growing list of exceptional schools demonstrating pioneering work in parallel…"
- NVIDIA Tesla system: 240 cores per chip, 960 cores per unit, 32 units. Over 30,000 cores!
- Hosts are Intel Nehalems
- PCI + MPI between units

Text and Notes
1. NVIDIA, CUDA Programming Guide, available from …da_develop.html for CUDA 2.3 and Windows, Linux or Mac OS.
2. [Recommended] Programming Massively Parallel Processors, Wen-mei Hwu and David Kirk, available from http://courses.ece.illinois.edu/ece498/al/Syllabus.html (to be available from Morgan Kaufmann in about 2 weeks!)
3. [Additional] A. Grama, A. Gupta, G. Karypis, and V. Kumar, Introduction to Parallel Computing, 2nd Ed. (Addison-Wesley, 2003).
4. Additional readings associated with lectures.

Why Massively Parallel Processors?
- A quiet revolution and potential build-up
  - Calculation: 367 GFLOPS vs. 32 GFLOPS
  - Memory bandwidth: 86.4 GB/s vs. 8.4 GB/s
  - Until last year, programmed through graphics API
  - GPU in every PC and workstation: massive volume and potential impact
[Figure: GFLOPS growth across GPU generations. G80 = GeForce 8800 GTX, G71 = GeForce 7900 GTX, G70 = GeForce 7800 GTX, NV40 = GeForce 6800 Ultra, NV35 = GeForce FX 5950 Ultra, NV30 = GeForce FX 5800]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007 ECE 498AL, University of Illinois, Urbana-Champaign

Concept of GPGPU (General-Purpose Computing on GPUs)
- Idea:
  - Potential for very high performance at low cost
  - Architecture well suited for certain kinds of parallel applications (data parallel)
  - Demonstrations of X speedup over CPU
- Early challenges:
  - Architectures very customized to graphics problems (e.g., vertex and fragment processors)
  - Programmed using graphics-specific programming models or libraries
- Recent trends:
  - Some convergence between commodity CPUs and GPUs and their associated parallel programming models

CUDA (Compute Unified Device Architecture)
- Data-parallel programming interface to the GPU
  - Data to be operated on is discretized into independent partitions of memory
  - Each thread performs roughly the same computation on a different partition of the data
  - When appropriate, this is easy to express and yields very efficient parallelization
- The programmer expresses
  - Thread programs to be launched on the GPU, and how to launch them
  - Data placement and movement between host and GPU
  - Synchronization, memory management, testing, …
- CUDA is one of the first programming models to support heterogeneous architectures (more later in the semester)
- CUDA environment
  - Compiler, run-time utilities, libraries, emulation, performance

Today's Lecture
- Goal is to enable writing CUDA programs right away
  - Not efficient ones: need to explain architecture and mapping for that (soon)
  - Not correct ones: need to discuss how to reason about correctness (also soon)
  - Limited discussion of why these constructs are used, or comparison with other programming models (more as the semester progresses)
  - Limited discussion of how to use the CUDA environment (more next week)
  - No discussion of how to debug; we'll cover that as best we can during the semester

What the Programmer Expresses in CUDA
- Computation partitioning (where does computation occur?)
  - Declarations on functions: __host__, __global__, __device__
  - Mapping of thread programs to the device: compute<<<gs, bs>>>(<args>)
- Data partitioning (where does data reside, who may access it and how?)
  - Declarations on data: __shared__, __device__, __constant__, …
- Data management and orchestration
  - Copying to/from host: e.g., cudaMemcpy(h_obj, d_obj, cudaMemcpyDeviceToHost)
- Concurrency management
  - E.g., __syncthreads()
[Figure: HOST (CPU) and DEVICE (GPU), each with a processor P and a memory M, joined by an interconnect between devices and memories]

Minimal Extensions to C + API
- Declspecs: global, device, shared, local, constant
- Keywords: threadIdx, blockIdx
- Intrinsics: __syncthreads
- Runtime API: memory, symbol, execution management
- Function launch

    __device__ float filter[N];

    __global__ void convolve(float *image) {
        __shared__ float region[M];
        ...
        region[threadIdx.x] = image[i];
        __syncthreads();
        ...
        image[j] = result;
    }

    // Allocate GPU memory
    void *myimage;
    cudaMalloc(&myimage, bytes);

    // 100 blocks, 10 threads per block
    convolve<<<100, 10>>>((float *) myimage);

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007 ECE 498AL, University of Illinois, Urbana-Champaign

NVCC Compiler's Role: Partition Code and Compile for Device

mycode.cu (host and device code mixed in one file):

    int main_data;
    __shared__ int sdata;

    __device__ dfunc() { int ddata; }
    __global__ gfunc() { int gdata; }

    main() { }
    __host__ hfunc() { int hdata; gfunc<<<g, b>>>(); }

nvcc partitions this file:
- Host only, compiled by the native compiler (gcc, icc, cc): int main_data; main() {} and __host__ hfunc(), including its kernel launch
- Device only, compiled by the nvcc compiler: __shared__ sdata; and __device__ dfunc()
- Interface between the two sides: __global__ gfunc()
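In practice this partitioning is invisible to the user: a single nvcc invocation compiles the whole file. A minimal sketch (the exact flags are illustrative, not from the slides; CUDA 2.3 also offered a device emulation mode):

    # nvcc separates host and device code, compiles the device code itself,
    # and hands the host code to the native C compiler
    nvcc -o mycode mycode.cu

    # device emulation mode, for machines without a CUDA-capable GPU
    nvcc -deviceemu -o mycode_emu mycode.cu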

CUDA Programming Model: A Highly Multithreaded Coprocessor
- The GPU is viewed as a compute device that:
  - Is a coprocessor to the CPU, or host
  - Has its own DRAM (device memory)
  - Runs many threads in parallel
- Data-parallel portions of an application are executed on the device as kernels, which run in parallel on many threads
- Differences between GPU and CPU threads:
  - GPU threads are extremely lightweight, with very little creation overhead
  - The GPU needs 1000s of threads for full efficiency; a multi-core CPU needs only a few

Thread Batching: Grids and Blocks
- A kernel is executed as a grid of thread blocks
  - All threads share the data memory space
- A thread block is a batch of threads that can cooperate with each other by:
  - Synchronizing their execution, for hazard-free shared memory accesses
  - Efficiently sharing data through a low-latency shared memory
- Two threads from two different blocks cannot cooperate
[Figure: the host launches Kernel 1 on Grid 1, a 3x2 array of blocks, and Kernel 2 on Grid 2; each block, e.g. Block (1, 1), contains a 5x3 array of threads. Courtesy: NVIDIA]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007 ECE 498AL, University of Illinois, Urbana-Champaign
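As an aside, grid and block shapes like those in the figure are specified at launch with dim3 values; a minimal sketch (the kernel name and argument are hypothetical, and the dimensions are chosen only to match the 3x2 grid of 5x3 blocks pictured):

    dim3 dimGrid(3, 2);     // grid of 3 x 2 thread blocks
    dim3 dimBlock(5, 3);    // each block holds 5 x 3 threads
    kernel<<<dimGrid, dimBlock>>>(d_data);   // hypothetical kernel and argument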

Block and Thread IDs
- Threads and blocks have IDs, so each thread can decide what data to work on
  - Block ID: 1D or 2D (blockIdx.x, blockIdx.y)
  - Thread ID: 1D, 2D, or 3D (threadIdx.{x,y,z})
- Simplifies memory addressing when processing multidimensional data
  - Image processing
  - Solving PDEs on volumes
  - …
[Figure: Grid 1 as a 3x2 array of blocks, with Block (1, 1) expanded into a 5x3 array of threads. Courtesy: NVIDIA]
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007 ECE 498AL, University of Illinois, Urbana-Champaign
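For example, a common idiom (a generic sketch, not code from this course) combines the block and thread IDs to give each thread a unique global element of a 1D array:

    __global__ void scale(float *d_data, int n) {
        // one unique element per thread across the whole grid
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                 // guard: the last block may be partial
            d_data[i] *= 2.0f;
    }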

Simple Working Code Example
- Goal for this example:
  - Really simple, but illustrative of key concepts
  - Fits in one file with a simple compile command
  - Can absorb during lecture
- What does it do?
  - Scans elements of an array of numbers (any of 0 to 9)
  - How many times does "6" appear?
  - Array of 16 elements, each thread examines 4 elements, 1 block in the grid, 1 grid
- Thread-to-data mapping, known as a cyclic data distribution (see the sketch below):
  - threadIdx.x = 0 examines in_array elements 0, 4, 8, 12
  - threadIdx.x = 1 examines in_array elements 1, 5, 9, 13
  - threadIdx.x = 2 examines in_array elements 2, 6, 10, 14
  - threadIdx.x = 3 examines in_array elements 3, 7, 11, 15
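The same loop structure can implement either of the two common 1D distributions; a sketch, assuming a hypothetical per-element process() function:

    // cyclic distribution (used in this example): thread t examines
    // elements t, t + BLOCKSIZE, t + 2*BLOCKSIZE, ...
    for (int i = 0; i < SIZE/BLOCKSIZE; i++)
        process(in_array[i*BLOCKSIZE + threadIdx.x]);

    // block distribution: thread t examines a contiguous chunk
    // of SIZE/BLOCKSIZE elements
    for (int i = 0; i < SIZE/BLOCKSIZE; i++)
        process(in_array[threadIdx.x*(SIZE/BLOCKSIZE) + i]);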

CUDA Pseudo-Code

MAIN PROGRAM:
- Initialization
- Allocate memory on host for input and output
- Assign random numbers to input array
- Call host function
- Calculate final output from per-thread output
- Print result

HOST FUNCTION:
- Allocate memory on device for copies of input and output
- Copy input to device
- Set up grid/block
- Call global function
- Copy device output to host

GLOBAL FUNCTION:
- Thread scans subset of array elements
- Call device function to compare with "6"
- Compute local result

DEVICE FUNCTION:
- Compare current element and "6"
- Return 1 if same, else 0

Main Program: Preliminaries

MAIN PROGRAM:
- Initialization
- Allocate memory on host for input and output
- Assign random numbers to input array
- Call host function
- Calculate final output from per-thread output
- Print result

    #include <stdio.h>
    #define SIZE 16
    #define BLOCKSIZE 4

    int main(int argc, char **argv) {
        int *in_array, *out_array;
        …
    }

Main Program: Invoke Global Function

MAIN PROGRAM (initialization omitted):
- Allocate memory on host for input and output
- Assign random numbers to input array
- Call host function
- Calculate final output from per-thread output
- Print result

    #include <stdio.h>
    #define SIZE 16
    #define BLOCKSIZE 4

    __host__ void outer_compute(int *in_arr, int *out_arr);

    int main(int argc, char **argv) {
        int *in_array, *out_array;
        /* initialization */ …
        outer_compute(in_array, out_array);
        …
    }

Main Program: Calculate Output & Print Result

MAIN PROGRAM (initialization omitted):
- Allocate memory on host for input and output
- Assign random numbers to input array
- Call host function
- Calculate final output from per-thread output
- Print result

    #include <stdio.h>
    #define SIZE 16
    #define BLOCKSIZE 4

    __host__ void outer_compute(int *in_arr, int *out_arr);

    int main(int argc, char **argv) {
        int *in_array, *out_array;
        int sum = 0;
        /* initialization */ …
        outer_compute(in_array, out_array);
        for (int i = 0; i < BLOCKSIZE; i++) {
            sum += out_array[i];
        }
        printf("Result = %d\n", sum);
    }

Host Function: Preliminaries & Allocation

HOST FUNCTION:
- Allocate memory on device for copies of input and output
- Copy input to device
- Set up grid/block
- Call global function
- Copy device output to host

    __host__ void outer_compute(int *h_in_array, int *h_out_array) {
        int *d_in_array, *d_out_array;

        cudaMalloc((void **) &d_in_array, SIZE*sizeof(int));
        cudaMalloc((void **) &d_out_array, BLOCKSIZE*sizeof(int));
        …
    }

Host Function: Copy Data To/From Host

HOST FUNCTION:
- Allocate memory on device for copies of input and output
- Copy input to device
- Set up grid/block
- Call global function
- Copy device output to host

    __host__ void outer_compute(int *h_in_array, int *h_out_array) {
        int *d_in_array, *d_out_array;

        cudaMalloc((void **) &d_in_array, SIZE*sizeof(int));
        cudaMalloc((void **) &d_out_array, BLOCKSIZE*sizeof(int));
        cudaMemcpy(d_in_array, h_in_array, SIZE*sizeof(int),
                   cudaMemcpyHostToDevice);
        … do computation …
        cudaMemcpy(h_out_array, d_out_array, BLOCKSIZE*sizeof(int),
                   cudaMemcpyDeviceToHost);
    }

Host Function: Setup & Call Global Function

HOST FUNCTION:
- Allocate memory on device for copies of input and output
- Copy input to device
- Set up grid/block
- Call global function
- Copy device output to host

    __host__ void outer_compute(int *h_in_array, int *h_out_array) {
        int *d_in_array, *d_out_array;

        cudaMalloc((void **) &d_in_array, SIZE*sizeof(int));
        cudaMalloc((void **) &d_out_array, BLOCKSIZE*sizeof(int));
        cudaMemcpy(d_in_array, h_in_array, SIZE*sizeof(int),
                   cudaMemcpyHostToDevice);
        // 1 block of BLOCKSIZE threads, matching the example above
        compute<<<1, BLOCKSIZE>>>(d_in_array, d_out_array);
        cudaMemcpy(h_out_array, d_out_array, BLOCKSIZE*sizeof(int),
                   cudaMemcpyDeviceToHost);
    }
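Kernel launches return no error status directly; as an optional aside (not on the original slide), a bad launch configuration can be caught right after the call:

    compute<<<1, BLOCKSIZE>>>(d_in_array, d_out_array);
    cudaError_t err = cudaGetLastError();    // reports launch-configuration errors
    if (err != cudaSuccess)
        printf("Launch failed: %s\n", cudaGetErrorString(err));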

Global Function

GLOBAL FUNCTION:
- Thread scans subset of array elements
- Call device function to compare with "6"
- Compute local result

    __global__ void compute(int *d_in, int *d_out) {
        d_out[threadIdx.x] = 0;
        for (int i = 0; i < SIZE/BLOCKSIZE; i++) {
            int val = d_in[i*BLOCKSIZE + threadIdx.x];
            d_out[threadIdx.x] += compare(val, 6);
        }
    }

Device Function

DEVICE FUNCTION:
- Compare current element and "6"
- Return 1 if same, else 0

    __device__ int compare(int a, int b) {
        if (a == b) return 1;
        return 0;
    }
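For reference, the pieces above assemble into a single compilable file. The sketch below fills in only what the slides elide; the initialization (random digits 0 to 9) and the memory cleanup are assumptions, not part of the original slides:

    #include <stdio.h>
    #include <stdlib.h>
    #define SIZE 16
    #define BLOCKSIZE 4

    __device__ int compare(int a, int b) {
        return (a == b) ? 1 : 0;             // 1 if same, else 0
    }

    __global__ void compute(int *d_in, int *d_out) {
        d_out[threadIdx.x] = 0;
        for (int i = 0; i < SIZE/BLOCKSIZE; i++) {
            int val = d_in[i*BLOCKSIZE + threadIdx.x];   // cyclic distribution
            d_out[threadIdx.x] += compare(val, 6);
        }
    }

    __host__ void outer_compute(int *h_in, int *h_out) {
        int *d_in, *d_out;
        cudaMalloc((void **) &d_in, SIZE*sizeof(int));
        cudaMalloc((void **) &d_out, BLOCKSIZE*sizeof(int));
        cudaMemcpy(d_in, h_in, SIZE*sizeof(int), cudaMemcpyHostToDevice);
        compute<<<1, BLOCKSIZE>>>(d_in, d_out);
        cudaMemcpy(h_out, d_out, BLOCKSIZE*sizeof(int), cudaMemcpyDeviceToHost);
        cudaFree(d_in);                      // cleanup: assumed, not on the slides
        cudaFree(d_out);
    }

    int main(int argc, char **argv) {
        int in_array[SIZE], out_array[BLOCKSIZE], sum = 0;
        for (int i = 0; i < SIZE; i++)
            in_array[i] = rand() % 10;       // assumed: random digits 0..9
        outer_compute(in_array, out_array);
        for (int i = 0; i < BLOCKSIZE; i++)
            sum += out_array[i];
        printf("Result = %d\n", sum);
        return 0;
    }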

Reductions
- This type of computation is called a parallel reduction
  - An operation is applied to a large data structure
  - The computed result represents the aggregate solution across the large data structure
  - Large data structure → computed result (perhaps a single number) [dimensionality reduced]
- Why might parallel reductions be well-suited to GPUs?
- What if we tried to compute the final sum on the GPU? (See the sketch below.)
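As a partial answer to the last question, here is a sketch of how one block could finish the sum on the GPU with a tree reduction in shared memory, a preview of a technique covered later in the course (it assumes BLOCKSIZE is a power of two):

    __global__ void reduce_sum(int *d_data) {
        __shared__ int scratch[BLOCKSIZE];
        scratch[threadIdx.x] = d_data[threadIdx.x];
        __syncthreads();
        // halve the number of active threads at each step
        for (int s = BLOCKSIZE/2; s > 0; s >>= 1) {
            if (threadIdx.x < s)
                scratch[threadIdx.x] += scratch[threadIdx.x + s];
            __syncthreads();
        }
        if (threadIdx.x == 0)
            d_data[0] = scratch[0];   // block's total lands in element 0
    }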

Standard Parallel Construct
- Sometimes called "embarrassingly parallel" or "pleasingly parallel"
- Each thread is completely independent of the others
- Final result copied to the CPU
- Another example, adding two matrices (sketched below):
  - A more careful examination of decomposing computation into grids and thread blocks
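A minimal sketch of the matrix-addition example (names and dimensions are illustrative; it assumes N x N matrices stored in row-major order, with one thread per element):

    __global__ void matrix_add(float *A, float *B, float *C, int N) {
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        if (row < N && col < N)                  // guard for partial blocks
            C[row*N + col] = A[row*N + col] + B[row*N + col];
    }

    // launch: 2D grid of 16 x 16-thread blocks covering the matrix
    dim3 dimBlock(16, 16);
    dim3 dimGrid((N + 15) / 16, (N + 15) / 16);
    matrix_add<<<dimGrid, dimBlock>>>(d_A, d_B, d_C, N);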

Summary of Lecture
- Introduction to CUDA: essentially, a few extensions to C plus an API supporting heterogeneous data-parallel CPU+GPU execution
  - Computation partitioning
  - Data partitioning (parts of this implied by decomposition into threads)
  - Data organization and management
  - Concurrency management
- The compiler nvcc takes as input a .cu program and produces
  - C code for the host processor (CPU), compiled by the native C compiler
  - Code for the device processor (GPU), compiled by the nvcc compiler
- Two examples
  - Parallel reduction
  - Embarrassingly/pleasingly parallel computation (your assignment)

Next Week
- Correctness: Race Conditions & Synchronization (first look)
- Hardware Execution Model