Using MPI - the Fundamentals University of North Carolina - Chapel Hill ITS - Research Computing Instructor: Mark Reed

its.unc.edu 2 What is MPI? Technically, it is the definition of a standard, portable, message-passing library, the Message Passing Interface. The standard defines both syntax and semantics. Colloquially, it usually refers to a specific implementation of this library, which will include the process and communication control layers in addition to the MPI library itself.

its.unc.edu 3 MP implementations consist of: a) Configuration Control - the ability to determine the machine’s type, accessibility, file control, etc. b) Message Passing Library - the syntax and semantics of calls to move messages around the system c) Process Control - the ability to spawn processes and communicate with these processes

its.unc.edu 4 MPI Forum  First message-passing interface standard.  Sixty people from forty different organizations began work in 1992.  Users and vendors were represented, from the US and Europe.  Two-year process of proposals, meetings and review.  The Message Passing Interface document, MPI-1, was produced in May 1994.

its.unc.edu 6 Goals and Scope of MPI  MPI's prime goals are: To provide source-code portability. To allow efficient implementation.  It also offers: A great deal of functionality. Support for heterogeneous parallel architectures.

its.unc.edu 7 What’s included in MPI?  point-to-point (PTP) communication  collective operations  process groups  communication domains  process topologies  environmental management and inquiry  profiling interface  bindings for Fortran 77 and C

its.unc.edu 8 What is not included in MPI?  support for task management  I/O functions  explicit shared memory operations  explicit support for threads  program construction tools  debugging facilities

its.unc.edu 9 Seven function version of MPI:  MPI_Init  MPI_Comm_size  MPI_Comm_rank  MPI_Send  MPI_Recv  MPI_Barrier  MPI_Finalize
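
The sketch below is not from the slides; it is a minimal, hedged example of how these seven calls fit together in one C program. It assumes the job is launched with at least two processes (e.g., mpirun -np 2 ./a.out) so that the send from rank 0 has a matching receive on rank 1.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, size, token = 42;
   MPI_Status status;

   MPI_Init(&argc, &argv);                  /* start MPI            */
   MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes?  */
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which one am I?      */

   if (rank == 0)
      MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
   else if (rank == 1)
      MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

   MPI_Barrier(MPI_COMM_WORLD);             /* everyone syncs here  */
   printf("rank %d of %d done\n", rank, size);
   MPI_Finalize();                          /* clean up MPI state   */
   return 0;
}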

its.unc.edu 10 MPI is Little? No, MPI is Big!  There are many additional functions, at least 133 at (my :) last count.  These functions add flexibility, robustness, efficiency, modularity, or convenience.  You should definitely check them out! MPI is big and little

its.unc.edu 11 MPI Communicators Every message passing system must have a means of specifying where a message is to go; MPI uses communicators for this.

its.unc.edu 12 MPI Communicators  A communicator has two distinguishing characteristics: Group - a collection of processes with a unique name; each process in the group has a Rank, a unique integer id within that group. Context - the context defines the communication domain and is allocated by the system at run time.

its.unc.edu 13 MPI Communicators - Rank  Rank - an integer id assigned to the process by the system  ranks are contiguous and start from 0 within each group  Thus group and rank serve to uniquely identify each process
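
The slides introduce only MPI_COMM_WORLD; as a hedged illustration of how a process can belong to more than one group, the sketch below uses MPI_Comm_split (not covered in this deck) to divide MPI_COMM_WORLD into even-rank and odd-rank sub-communicators, so the same process has a different rank in each group.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int world_rank, sub_rank;
   MPI_Comm subcomm;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

   /* color picks the new group (even vs odd); key orders ranks in it */
   MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);
   MPI_Comm_rank(subcomm, &sub_rank);

   printf("world rank %d has rank %d in its sub-communicator\n",
          world_rank, sub_rank);

   MPI_Comm_free(&subcomm);   /* release the new communicator */
   MPI_Finalize();
   return 0;
}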

its.unc.edu 14 Syntax: C vs Fortran  The syntax is generally the same, except that C returns the error code as the value of the function call, while Fortran has an additional integer argument at the end to return this error code.  All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.  Disclaimer: henceforth we will usually omit the Fortran syntax, as it should be clear from the C call.
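
As a small sketch (an illustration, not from the slides) of the C convention just described, the error code returned by an MPI call can be compared against MPI_SUCCESS. Note that by default most implementations abort on error, so explicit checking like this is mainly illustrative.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, ierr;

   MPI_Init(&argc, &argv);

   /* in C the error code is the return value of the call itself */
   ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   if (ierr != MPI_SUCCESS) {
      fprintf(stderr, "MPI_Comm_rank failed with code %d\n", ierr);
      MPI_Abort(MPI_COMM_WORLD, ierr);   /* give up on the whole job */
   }

   MPI_Finalize();
   return 0;
}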

its.unc.edu 15 int MPI_Init (int *argc, char ***argv)
This must be called once and only once by every MPI program. It should be the first MPI call (exception: MPI_Initialized). argc and argv are the standard arguments to a C program and can be empty if none are required.
argc - argument count
argv - pointers to the arguments
Fortran syntax: CALL MPI_Init(ierr)

its.unc.edu 16 int MPI_Comm_size (MPI_Comm comm, int *size)
Generally the second function called, this returns the size of the group associated with the MPI communicator comm. The communicator MPI_COMM_WORLD is predefined to include all processes.
comm - communicator
size - upon return, set to the size of the group

its.unc.edu 17 int MPI_Comm_rank (MPI_Comm comm, int *rank)
Generally the third function called, this returns the rank of the calling process within the group associated with the communicator comm.
comm - communicator
rank - upon return, set to the rank of the process within the group

its.unc.edu 18 What happens on a send?  Ideally, the data buffer is transmitted to the receiver; however, what if the receiver is not ready?  3 scenarios: Wait for the receiver to be ready (blocking) Copy the message to an internal buffer (on the sender, receiver, or elsewhere) and then return from the send call (buffered) Fail  Which is best?
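
As a hedged illustration of why the answer matters: if two processes both called a blocking MPI_Send first, each could wait forever for a receive that is never posted (unless the library happens to buffer the message). The sketch below, which assumes exactly two processes, orders the calls by rank so one side sends first while the other receives first, the same pattern the tokenring example uses later.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, other, sendval, recvval;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   other   = 1 - rank;        /* partner rank: 0 <-> 1 */
   sendval = 100 + rank;

   if (rank == 0) {           /* rank 0 sends first, then receives */
      MPI_Send(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
      MPI_Recv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &status);
   } else {                   /* rank 1 receives first, then sends */
      MPI_Recv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &status);
      MPI_Send(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
   }

   printf("rank %d got %d\n", rank, recvval);
   MPI_Finalize();
   return 0;
}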

its.unc.edu 19 Sending a message: 2 scenarios [Diagram: CPU 0 with its DRAM sends an array to CPU 1 and its DRAM, either directly into the receiver's memory (Direct) or via an intermediate buffer (Buffered).]

its.unc.edu 20 int MPI_Send (void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
This is the basic send call; note it is potentially a blocking send.
buf - starting address of the send buffer
count - number of elements
datatype - MPI datatype of the data; typically it is the language datatype with "MPI_" prepended to it, e.g., MPI_INT and MPI_FLOAT in C or MPI_INTEGER and MPI_REAL in Fortran
dest - rank of the destination process
tag - message tag
comm - communicator

its.unc.edu 21 int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
This is the basic receive call; note it is a blocking receive.
buf - starting address of the receive buffer
count - maximum number of elements to receive; the message received may contain fewer, but not more, elements than this
datatype - MPI datatype of the data, see MPI_Send
source - rank of sender, or wildcard (MPI_ANY_SOURCE)
tag - message tag of sender, or wildcard (MPI_ANY_TAG)
comm - communicator
status - structure containing a minimum (implementation dependent) of three entries, specifying the source, tag, and error code of the received message

its.unc.edu 22 Receive status  In C, the source, tag and error code are given by the structure fields MPI_SOURCE, MPI_TAG, and MPI_ERROR.  In Fortran, the status argument is an integer array of size MPI_STATUS_SIZE; the three predefined indices are MPI_SOURCE, MPI_TAG, and MPI_ERROR.
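
The sketch below (an illustration, not from the slides) shows the status fields in use: rank 0 receives with the MPI_ANY_SOURCE and MPI_ANY_TAG wildcards, then reads the actual source and tag from the status structure and uses MPI_Get_count to find how many elements arrived. It assumes at least two processes.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, nrecv;
   int buf[10];
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   if (rank == 0) {
      MPI_Recv(buf, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
               MPI_COMM_WORLD, &status);
      MPI_Get_count(&status, MPI_INT, &nrecv);   /* actual count received */
      printf("got %d ints from rank %d with tag %d\n",
             nrecv, status.MPI_SOURCE, status.MPI_TAG);
   }
   else if (rank == 1) {
      int msg[3] = {1, 2, 3};
      MPI_Send(msg, 3, MPI_INT, 0, 7, MPI_COMM_WORLD);
   }

   MPI_Finalize();
   return 0;
}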

its.unc.edu 23 int MPI_Barrier(MPI_Comm comm)
Blocks until all members of the group of comm have made this call. This synchronizes all processes.
comm - the communicator

its.unc.edu 24 int MPI_Finalize(void)  Cleans up all MPI state.  Should be called by all processes.  The user must ensure that all pending communications are complete before calling this routine.

its.unc.edu 25 tokenring example [Diagram: six processes, Proc 0 through Proc 5, arranged in a ring; the token travels around the ring from one process to the next.]

its.unc.edu 26 Simple Example:

      program tokenring
      implicit none
      include "mpif.h"
c this is a sample test program to pass a token around the ring
c each pe will increment the token by its pe number
      integer itoken,npts,npes,mype,msgtag,iprev,inext
      integer i,istat,irecvstat(MPI_STATUS_SIZE),len

c initialize mpi, set the npes and mype variables
      call MPI_Init(istat)
      if (istat.ne.0) stop "ERROR: can't initialize mpi"
      call MPI_Comm_size(MPI_COMM_WORLD,npes,istat)
      call MPI_Comm_rank(MPI_COMM_WORLD,mype,istat)

c initialize variables on all pe's
      itoken = 0
      npts = 1
      msgtag = 101
      iprev = mod(mype+npes-1,npes)
      inext = mod(mype+npes+1,npes)

its.unc.edu 27 Simple Example Cont.

c now send and receive the token
      if (mype.eq.0) then
         itoken = itoken + mype
         call MPI_Send (itoken,npts,MPI_INTEGER,inext,msgtag,
     &                  MPI_COMM_WORLD,istat)
         call MPI_Recv (itoken,npts,MPI_INTEGER,iprev,msgtag,
     &                  MPI_COMM_WORLD,irecvstat,istat)
      else
         call MPI_Recv (itoken,npts,MPI_INTEGER,iprev,msgtag,
     &                  MPI_COMM_WORLD,irecvstat,istat)
         itoken = itoken + mype
         call MPI_Send (itoken,npts,MPI_INTEGER,inext,msgtag,
     &                  MPI_COMM_WORLD,istat)
      endif

      print *, "mype = ",mype," received token = ",itoken

c call barrier and exit
      call mpi_barrier(MPI_COMM_WORLD,istat)
      call mpi_finalize (istat)

its.unc.edu 28 Sample tokenring output on 6 processors
 mype = 5 of 6 procs and has token = 15
 mype = 5 my name = baobab-n40.isis.unc.edu
 mype = 1 of 6 procs and has token = 1
 mype = 1 my name = baobab-n47.isis.unc.edu
 mype = 4 of 6 procs and has token = 10
 mype = 4 my name = baobab-n40.isis.unc.edu
 mype = 2 of 6 procs and has token = 3
 mype = 2 my name = baobab-n41.isis.unc.edu
 mype = 3 of 6 procs and has token = 6
 mype = 3 my name = baobab-n41.isis.unc.edu
 mype = 0 of 6 procs and has token = 15
 mype = 0 my name = baobab-n47.isis.unc.edu

its.unc.edu 29 Same example in C

/* program tokenring                                              */
/* this is a sample test program to pass a token around the ring */
/* each pe will increment the token by its pe number             */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[])
{
   int itoken,npts,npes,mype,msgtag,iprev,inext;
   int len;
   char name[MPI_MAX_PROCESSOR_NAME];
   MPI_Status irecvstat;

   /* initialize mpi, set the npes and mype variables */
   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD,&npes);
   MPI_Comm_rank(MPI_COMM_WORLD,&mype);

its.unc.edu 30 Tokenring Cont.

   /* initialize variables on all pe's */
   itoken = 0;
   npts = 1;
   msgtag = 101;
   iprev = (mype+npes-1)%npes;
   inext = (mype+npes+1)%npes;

   /* now send and receive the token */
   if (mype==0) {
      MPI_Send (&itoken,npts,MPI_INT,inext,msgtag,MPI_COMM_WORLD);
      MPI_Recv (&itoken,npts,MPI_INT,iprev,msgtag,MPI_COMM_WORLD,&irecvstat);
   }
   else {
      MPI_Recv (&itoken,npts,MPI_INT,iprev,msgtag,MPI_COMM_WORLD,&irecvstat);
      itoken += mype;
      MPI_Send (&itoken,npts,MPI_INT,inext,msgtag,MPI_COMM_WORLD);
   }

its.unc.edu 31 Tokenring Cont.

   printf ("mype = %d received token = %d \n",mype,itoken);
   MPI_Get_processor_name(name, &len);
   printf ("mype = %d my name = %s\n",mype,name);

   /* barrier and exit */
   MPI_Barrier(MPI_COMM_WORLD);
   MPI_Finalize ();
   return 0;
}

its.unc.edu 32 Timing  MPI_Wtime returns the wall clock (elapsed) time in seconds, measured from some arbitrary initial point  MPI_Wtick returns the resolution of MPI_Wtime in seconds  Both functions return double (C) or double precision (Fortran) values and take no arguments
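
A minimal timing sketch (not from the slides), assuming nothing beyond the two routines above: MPI_Wtime brackets the work being measured and MPI_Wtick reports the clock resolution.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int    rank, i;
   double t0, t1, sum = 0.0;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   t0 = MPI_Wtime();                  /* start the clock      */
   for (i = 0; i < 1000000; i++)      /* the work being timed */
      sum += (double)i;
   t1 = MPI_Wtime();                  /* stop the clock       */

   if (rank == 0)
      printf("loop took %g s (sum = %g, clock resolution %g s)\n",
             t1 - t0, sum, MPI_Wtick());

   MPI_Finalize();
   return 0;
}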

its.unc.edu 33 References  “Using MPI” by Gropp, Lusk, and Skjellum  “MPI: The Complete Reference” by Snir, Otto, Huss-Lederman, Walker, and Dongarra  Course notes from the Edinburgh Parallel Computing Centre by MacDonald, Minty, Antonioletti, Malard, Harding, and Brown  Materials from the Maui High Performance Computing Center

its.unc.edu 34 References Cont.  “Parallel Programming with MPI” by Peter Pacheco  List of tutorials at: