CS6235 L15: Design Review, Midterm Review and 6-Function MPI

Administrative
- Design Review: April 1
- Midterm: April 3, in class
- Organick Lectures this week: Peter Neumann, SRI International
  - Known for his work on Multics in the 1960s
  - "A Personal History of Layered Trustworthiness", Tue Mar 7:00 PM, 220 Skaggs Biology
  - "Clean-Slate Formally Motivated Hardware and Software for Highly Trustworthy Systems", Wed Mar 3:20 PM
  - Roundtable, Wed Mar 1:30, WEB 1248

Design Reviews
Goal is to see a solid plan for each project and make sure projects are on track:
- Plan to evolve the project so that results are guaranteed
- Show that at least one thing is working
- Explain how work is being divided among team members
Major suggestions from proposals:
- Project complexity: break it down into smaller chunks with an evolutionary strategy
- Add references: what has been done before? Known algorithm? GPU implementation?

Design Reviews
Oral, 10-minute Q&A session (April 1 in class, plus office hours if needed)
- Each team member presents one part
- Team should identify a "lead" to present the plan
Three major parts:
I. Overview: define the computation and its high-level mapping to the GPU
II. Project Plan: the pieces and who is doing what; what is done so far? (Make sure something is working by the design review)
III. Related Work: prior sequential or parallel algorithms/implementations; prior GPU implementations (or similar computations)
Submit slides and a written document revising the proposal that covers these and cleans up anything missing from the proposal.

Design Review
- Title / Team
- Overview
- Project Plan
- Related Work
- Implementation Status
- Visual interest
- Oral Presentation

Final Project Presentation
- Dry run on April 22
  - Easels, tape and poster board provided
  - Tape a set of PowerPoint slides to a standard 2'x3' poster, or bring your own poster.
- Poster session during class on April 24
  - Invite your friends, profs who helped you, etc.
- Final Report on Projects due May 1
  - Submit code
  - And a written document, roughly 10 pages, based on the earlier submission.
  - In addition to the original proposal, include:
    - Project Plan and How Decomposed (from DR)
    - Description of CUDA implementation
    - Performance Measurement
    - Related Work (from DR)

Let's Talk about Demos
- For some of you, with very visual projects, I encourage you to think about demos for the poster session
- This is not a requirement, just something that would enhance the poster session
- Realistic?
  - I know everyone's laptops are slow ...
  - ... and don't have enough memory to solve very large problems
- Creative suggestions?
  - Movies captured from a run on a larger system

Message Passing and MPI
Message passing is the principal alternative to shared memory parallel programming, and the predominant programming model for supercomputers and clusters.
- Portable
- Low-level, but universal and matches earlier hardware execution models
What it is:
- A library used within conventional sequential languages (Fortran, C, C++)
- Based on Single Program, Multiple Data (SPMD)
- Isolation of separate address spaces
  + no data races, but communication errors possible
  + exposes the execution model and forces the programmer to think about locality, both good for performance
  - complexity and code growth!
Like OpenMP, MPI arose as a standard to replace a large number of proprietary message passing libraries.

Message Passing Library Features
All communication and synchronization require subroutine calls.
- No shared variables
- The program runs on a single processor just like any uniprocessor program, except for calls to the message passing library
Subroutines for:
- Communication
  - Pairwise or point-to-point: a message is sent from a specific sending process (point a) to a specific receiving process (point b)
  - Collectives involving multiple processors
    - Move data: Broadcast, Scatter/Gather
    - Compute and move: Reduce, AllReduce
- Synchronization
  - Barrier
  - No locks, because there are no shared variables to protect
- Queries
  - How many processes? Which one am I? Any messages waiting?
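A small sketch, not from the slides, showing one call from each category above: two queries, a data-moving collective, and a barrier. The broadcast payload is an arbitrary value chosen for the example.

    /* Sketch (not from the slides): one query, one data-moving collective, and
       one synchronization call. The broadcast value 42 is arbitrary. */
    #include "mpi.h"
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* query: which one am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* query: how many processes? */

        if (rank == 0) value = 42;              /* arbitrary payload */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* move data */
        MPI_Barrier(MPI_COMM_WORLD);            /* synchronization */

        printf("Process %d of %d saw value %d\n", rank, size, value);
        MPI_Finalize();
        return 0;
    }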

MPI References
- The Standard itself: all MPI official releases, in both PostScript and HTML
- Other information on the Web: pointers to lots of stuff, including other talks and tutorials, a FAQ, other MPI pages
Slide source: Bill Gropp

Compilation

    mpicc -g -Wall -o mpi_hello mpi_hello.c

- mpicc: wrapper script to compile
- -g: produce debugging information
- -Wall: turns on all warnings
- -o mpi_hello: create this executable file name (as opposed to default a.out)
- mpi_hello.c: source file

Copyright © 2010, Elsevier Inc. All rights Reserved

Execution

    mpiexec -n <number of processes> <executable>

    mpiexec -n 1 ./mpi_hello     (run with 1 process)
    mpiexec -n 4 ./mpi_hello     (run with 4 processes)

Copyright © 2010, Elsevier Inc. All rights Reserved

Hello (C)

    #include "mpi.h"
    #include <stdio.h>

    int main( int argc, char *argv[] )
    {
        int rank, size;
        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Comm_size( MPI_COMM_WORLD, &size );
        printf( "Greetings from process %d of %d\n", rank, size );
        MPI_Finalize();
        return 0;
    }

Slide source: Bill Gropp

Hello (C++)

    #include "mpi.h"
    #include <iostream>

    int main( int argc, char *argv[] )
    {
        int rank, size;
        MPI::Init(argc, argv);
        rank = MPI::COMM_WORLD.Get_rank();
        size = MPI::COMM_WORLD.Get_size();
        std::cout << "Greetings from process " << rank
                  << " of " << size << "\n";
        MPI::Finalize();
        return 0;
    }

Slide source: Bill Gropp

Execution

    mpiexec -n 1 ./mpi_hello
    Greetings from process 0 of 1!

    mpiexec -n 4 ./mpi_hello
    Greetings from process 0 of 4!
    Greetings from process 1 of 4!
    Greetings from process 2 of 4!
    Greetings from process 3 of 4!

Copyright © 2010, Elsevier Inc. All rights Reserved

MPI Components
- MPI_Init: tells MPI to do all the necessary setup.
- MPI_Finalize: tells MPI we're done, so clean up anything allocated for this program.
Copyright © 2010, Elsevier Inc. All rights Reserved

Basic Outline
Copyright © 2010, Elsevier Inc. All rights Reserved
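A minimal sketch of the basic outline, assuming only what the neighboring slides state: MPI_Init comes before any other MPI call, and MPI_Finalize comes after the last one.

    /* Minimal MPI program skeleton (a sketch, not the slide's figure). */
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        /* No MPI calls before this point */
        MPI_Init(&argc, &argv);

        /* ... communication and computation using MPI ... */

        MPI_Finalize();
        /* No MPI calls after this point */
        return 0;
    }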

MPI Basic (Blocking) Send

    MPI_SEND(start, count, datatype, dest, tag, comm)

- The message buffer is described by (start, count, datatype).
- The target process is specified by dest, which is the rank of the target process in the communicator specified by comm.
- When this function returns, the data has been delivered to the system and the buffer can be reused. The message may not have been received by the target process.

Example (process 0 sends array A, process 1 receives into array B):

    A(10)                                   B(20)
    MPI_Send( A, 10, MPI_DOUBLE, 1, ... )   MPI_Recv( B, 20, MPI_DOUBLE, 0, ... )

Slide source: Bill Gropp

MPI Basic (Blocking) Receive

    MPI_RECV(start, count, datatype, source, tag, comm, status)

- Waits until a matching (both source and tag) message is received from the system, and the buffer can be used.
- source is the rank in the communicator specified by comm, or MPI_ANY_SOURCE.
- tag is a tag to be matched, or MPI_ANY_TAG.
- Receiving fewer than count occurrences of datatype is OK, but receiving more is an error.
- status contains further information (e.g., size of message).

    A(10)                                   B(20)
    MPI_Send( A, 10, MPI_DOUBLE, 1, ... )   MPI_Recv( B, 20, MPI_DOUBLE, 0, ... )

Slide source: Bill Gropp
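A sketch, not from the slides, of how the status argument can be used: the standard MPI_Get_count call reports how many items actually arrived, and status.MPI_SOURCE / status.MPI_TAG identify the sender and tag. The two-process exchange below is invented for illustration.

    /* Sketch (not from the slides): using the status argument and MPI_Get_count
       to find how much data actually arrived. Run with at least 2 processes. */
    #include "mpi.h"
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, received;
        double A[10] = {0}, B[20];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(A, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* count = 20 is only an upper bound; the sender supplies 10 */
            MPI_Recv(B, 20, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_DOUBLE, &received);
            printf("Received %d doubles from rank %d, tag %d\n",
                   received, status.MPI_SOURCE, status.MPI_TAG);
        }

        MPI_Finalize();
        return 0;
    }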

MPI Datatypes
The data in a message to send or receive is described by a triple (address, count, datatype), where an MPI datatype is recursively defined as:
- predefined, corresponding to a data type from the language (e.g., MPI_INT, MPI_DOUBLE)
- a contiguous array of MPI datatypes
- a strided block of datatypes
- an indexed array of blocks of datatypes
- an arbitrary structure of datatypes
There are MPI functions to construct custom datatypes, in particular ones for subarrays.
Slide source: Bill Gropp
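As an example (a sketch, not from the slides), the standard MPI_Type_vector constructor describes a strided block such as one column of a row-major matrix. The matrix size N and the message tag are placeholder choices for the example.

    /* Sketch (not from the slides): a derived datatype for one column of a
       row-major N x N matrix of doubles. Run with at least 2 processes. */
    #include "mpi.h"
    #include <stdio.h>
    #define N 8

    int main(int argc, char *argv[])
    {
        int rank, i, j;
        double a[N][N], col[N];
        MPI_Datatype column_type;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* N blocks of 1 double each, successive blocks N doubles apart */
        MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column_type);
        MPI_Type_commit(&column_type);

        if (rank == 0) {
            for (i = 0; i < N; i++)
                for (j = 0; j < N; j++)
                    a[i][j] = i * N + j;
            /* send column 3 of a as one unit of the derived type */
            MPI_Send(&a[0][3], 1, column_type, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(col, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Received column: col[0]=%g ... col[%d]=%g\n",
                   col[0], N - 1, col[N - 1]);
        }

        MPI_Type_free(&column_type);
        MPI_Finalize();
        return 0;
    }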

A Simple MPI Program

    #include "mpi.h"
    #include <stdio.h>

    int main( int argc, char *argv[])
    {
        int rank, buf;
        MPI_Status status;
        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        /* Process 0 sends and Process 1 receives */
        if (rank == 0) {
            buf = 123456;   /* arbitrary value to send */
            MPI_Send( &buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
        }
        else if (rank == 1) {
            MPI_Recv( &buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
            printf( "Received %d\n", buf );
        }

        MPI_Finalize();
        return 0;
    }

Slide source: Bill Gropp

Six-Function MPI
Most commonly used constructs. A decade or more ago, almost all supercomputer programs only used these:
- MPI_Init
- MPI_Finalize
- MPI_Comm_size
- MPI_Comm_rank
- MPI_Send
- MPI_Recv
Also very useful: MPI_Reduce and other collectives.
Other features of MPI:
- Task parallel constructs
- Optimized communication: non-blocking, one-sided

MPI_Reduce
Copyright © 2010, Elsevier Inc. All rights Reserved
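A sketch, not taken from the slide, of how MPI_Reduce is typically called: summing one value per process onto rank 0, with each process contributing its own rank as an arbitrary local value.

    /* Sketch (not from the slides): MPI_Reduce sums one int per process onto
       rank 0. Each process contributes its own rank, an arbitrary choice. */
    #include "mpi.h"
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, local, global = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = rank;
        /* MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm) */
        MPI_Reduce(&local, &global, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of ranks 0..%d is %d\n", size - 1, global);

        MPI_Finalize();
        return 0;
    }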

Questions from Previous Exams
Short answer questions from last year:
- Describe one mechanism we discussed for eliminating shared memory bank conflicts in code that exhibits these bank conflicts. Since the occurrence of bank conflicts depends on the data access patterns, please explain any assumptions you are making about the original code with bank conflicts.
- Give one example of a synchronization mechanism that is control-based, meaning it controls thread execution, and one that is memory-based, meaning that it protects race conditions on memory locations.
- What happens if two blocks assigned to the same streaming multiprocessor each use more than half of either registers or shared memory? How does this affect scheduling of warps? By comparison, what if the total register and shared memory usage fits within the capacity of these resources?

Questions from Previous Exams
Short answer questions from 2011:
- Describe how you can exploit spatial reuse in optimizing for memory bandwidth on a GPU. (Partial credit: what are the memory bandwidth optimizations we studied?)
- Given examples we have seen of control flow in GPU kernels, describe ONE way to reduce divergent branches for ONE of the following: tree-structured reductions, even-odd computations, or boundary conditions.
- What happens if two threads assigned to different blocks write to the same memory location in global memory?

Questions from Previous Exams
Short answer questions from 2009 and 2010:
- Select ONE of the following aspects of NVIDIA architecture warp execution and describe briefly (in a couple of sentences) how it works:
  - selecting a warp to be scheduled for execution;
  - executing a branch operation in a warp;
  - scheduling global memory accesses for a warp;
  - allocating registers to a thread.
- List a specific constraint on either parallelism (threads, blocks, dimensionality of each) or memory capacity (for one specific part of the GPU memory hierarchy), and in one sentence describe how this impacts your GPU program as compared to a sequential CPU program.

Questions from Previous Exams
Problem Solving: Data placement in the memory hierarchy
Given the following CUDA code, for each data structure in the thread program, what is the most appropriate portion of the memory hierarchy in which to place that data (constant, global, shared, or registers), and why? Feel free to give multiple answers for some data in cases where there are multiple possibilities that are all appropriate. In each case, explain how you would modify the CUDA code to use that level of the memory hierarchy.

    #define N 512
    float a[N], b[N], c[N][N], d[N];
    int e[N];

    __global__ void compute(float *a, float *b, float *c, float *d, int *e)
    {
        float temp;
        int index = blockIdx.x*blockDim.x + threadIdx.x;
        for (int j = 0; j < N; j++) {
            temp = (c[index][j] + b[j]) * d[e[index]];
            a[index] = a[index] - temp;
        }
    }

Questions from Previous Exams
Given the following CUDA code, add synchronization to derive a correct implementation that has no race conditions. (Hint: You should be able to simply insert __syncthreads() calls.)

    __global__ void compute(float *a, float *b, int BLOCKSIZE)
    {
        __shared__ float s_a[128], s_b[128];

        /* copy portion of input data into shared memory */
        s_a[threadIdx.x] = a[blockIdx.x*BLOCKSIZE + threadIdx.x];

        /* Time step loop */
        for (int t = 0; t < MAX_TIME; t++) {
            int boundary = min((blockIdx.x+1)*BLOCKSIZE-1,
                               blockDim.x*BLOCKSIZE-1, threadIdx.x+2);

            /* alternate inputs and outputs on even/odd time steps */
            if (t % 2 == 0) {
                s_b[threadIdx.x] = s_a[threadIdx.x] + s_a[boundary];
            }
            else /* (t%2 == 1) */ {
                s_a[threadIdx.x] = s_b[threadIdx.x] + s_b[boundary];
            }

            /* Result is in s_b, and must be copied to b */
            b[blockIdx.x*BLOCKSIZE + threadIdx.x] = s_b[threadIdx.x];
        }
    }

Without writing out the CUDA code, consider a CUDA mapping of the LU Decomposition sequential code below. The answer should be in three parts, providing opportunities for partial credit: (i) where are the data dependences in this computation? (ii) how would you partition the computation across threads and blocks? (iii) how would you add synchronization to avoid race conditions?

    float a[1024][1024];
    int i, j, k;
    for (k = 0; k < 1023; k++) {
        for (i = k+1; i < 1024; i++)
            a[i][k] = a[i][k] / a[k][k];
        for (i = k+1; i < 1024; i++)
            for (j = k+1; j < 1024; j++)
                a[i][j] = a[i][j] - a[i][k]*a[k][j];
    }

Given a sparse matrix-vector multiplication, consider two different GPU implementations: (a) one in which the sparse matrix is stored in compressed sparse row format (see the code below); and (b) one which uses an ELL format for the sparse matrix because the upper limit on the number of nonzeros per row is fixed. For (a), describe the mapping of this code to GPUs. How is it decomposed into threads, and where are t, data, indices and x placed in the memory hierarchy? For (b), how does the ELL representation affect your solution? Would you consider a different thread decomposition and data placement?

    // Sequential sparse matrix-vector multiplication using the compressed sparse
    // row (CSR) representation of the sparse matrix. "data" holds the nonzeros
    // in the sparse matrix.
    for (j = 0; j < nr; j++) {
        for (k = ptr[j]; k < ptr[j+1]-1; k++)
            t[j] = t[j] + data[k] * x[indices[k]];
    }

Questions from Previous Exams
Example essay questions:
- Describe the features of computations that are likely to obtain high speedup on a GPU as compared to a sequential CPU.
- Consider the architecture of current GPUs and its impact on programmability. If you could change one aspect of the architecture to simplify programming, what would it be and why? (You don't have to propose an alternative architecture.)
- We talked about sparse matrix computations with respect to linear algebra, graph coloring and program analysis. Describe a sparse matrix representation that is appropriate for a GPU implementation of one of these applications and explain why it is well suited.
- Describe three optimizations that were performed for the MRI application case study.