Message Passing Interface

Message Passing Interface (MPI)
Message Passing Interface (MPI) is a specification designed for parallel applications. The goal of MPI is to provide a widely used standard for writing message-passing programs. The interface attempts to be:
– practical
– portable
– efficient
– flexible
A message-passing library specification is:
– a message-passing model
– not a compiler specification
– not a specific product

Message Passing Interface
MPI is designed to provide access to advanced parallel hardware for:
– end users
– library writers
– tool developers
Message passing is now mature as a programming paradigm:
– well understood
– efficient match to hardware
– many applications
Interface specifications have been defined for C and Fortran programs.

History of MPI
April 1992:
– Workshop on Standards for Message Passing in a Distributed Memory Environment, sponsored by the Center for Research on Parallel Computing, Williamsburg, Virginia.
– The basic features essential to a standard message-passing interface were discussed, and a working group was established to continue the standardization process. A preliminary draft proposal was developed subsequently.
November 1992:
– The working group met in Minneapolis. Oak Ridge National Laboratory (ORNL) presented an MPI draft proposal (MPI1).
– The group adopted procedures and an organization to form the MPI Forum. The MPI Forum eventually comprised about 175 individuals from 40 organizations, including parallel computer vendors, software writers, academia, and application scientists.

History of MPI
November 1993: the draft MPI standard was presented at the Supercomputing '93 conference. The final version of the draft was released in May 1994 and made available on the WWW.
MPI-2 picked up where the first MPI specification left off and addresses topics that go beyond the first MPI specification.

Reasons for Using MPI
Standardization – MPI is the only message-passing library which can be considered a standard. It is supported on virtually all HPC platforms.
Portability – There is no need to modify your source code when you port your application to a different platform that supports (and is compliant with) the MPI standard.
Performance opportunities – Vendor implementations should be able to exploit native hardware features to optimize performance.
Functionality – Over 115 routines are defined.
Availability – A variety of implementations are available, both vendor and public domain.

Types of Parallel Programming
Flynn's taxonomy (hardware oriented):
– SISD: Single Instruction, Single Data
– SIMD: Single Instruction, Multiple Data
– MISD: Multiple Instruction, Single Data
– MIMD: Multiple Instruction, Multiple Data
A programmer-oriented taxonomy:
– Data-parallel: same operations on different data (SIMD)
– Task-parallel: different programs, different data
– MIMD: different programs, different data
– SPMD: same program, different data
– Dataflow: pipelined parallelism
MPI is for SPMD/MIMD.

MPI Programming
Writing MPI programs
Compiling and linking
Running MPI programs
More information:
– Using MPI by William Gropp, Ewing Lusk, and Anthony Skjellum
– Designing and Building Parallel Programs by Ian Foster
– A Tutorial/User's Guide for MPI by Peter Pacheco (ftp://math.usfca.edu/pub/MPI/mpi.guide.ps)
– The MPI standard and other information are available on the WWW, which is also the source for several implementations.

Simple MPI C Program (hello.c)

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    printf("Hello world\n");
    MPI_Finalize();
    return 0;
}

Compile hello.c:  mpicc -o hello hello.c

The #include "mpi.h" line provides basic MPI definitions and types.
MPI_Init starts MPI.
MPI_Finalize exits MPI.
Note that all non-MPI routines are local; thus the printf runs on each process.

Simple MPI C++ Program (hello.cc)

#include <iostream>
#include "mpi.h"
using namespace std;

int main(int argc, char *argv[])
{
    int rank, size;
    MPI::Init(argc, argv);
    cout << "Hello world" << endl;
    MPI::Finalize();
    return 0;
}

Compile hello.cc:  mpiCC -o hello hello.cc

Starting and Exiting MPI
Starting MPI:
– int MPI_Init(int *argc, char ***argv)
– void MPI::Init(int& argc, char**& argv)
Exiting MPI:
– int MPI_Finalize(void)
– void MPI::Finalize()

Running MPI Programs
On many platforms MPI programs can be started with 'mpirun':
    mpirun -np <number of processes> hello
'mpirun' is not part of the standard, but some version of it is common to several MPI implementations.
Two of the first questions asked in a parallel program are:
1. "How many processes are there?"
2. "Who am I?"
"How many" is answered with MPI_COMM_SIZE; "Who am I" is answered with MPI_COMM_RANK. The rank is a number between zero and (SIZE - 1).

Second MPI C Program

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world! I am %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Second MPI C++ Program

#include <iostream>
#include "mpi.h"
using namespace std;

int main(int argc, char **argv)
{
    MPI::Init(argc, argv);
    int rank = MPI::COMM_WORLD.Get_rank();
    int size = MPI::COMM_WORLD.Get_size();
    cout << "Hello world! I am " << rank << " of " << size << endl;
    MPI::Finalize();
    return 0;
}

MPI_COMM_WORLD
Communication in MPI takes place with respect to communicators (more about communicators later). The MPI_COMM_WORLD communicator is created when MPI is started and contains all MPI processes. MPI_COMM_WORLD is a useful default communicator – many applications do not need to use any other.

MPI C Data Types

MPI C data type        C data type
MPI_CHAR               signed char
MPI_SHORT              signed short int
MPI_INT                signed int
MPI_LONG               signed long int
MPI_UNSIGNED_CHAR      unsigned char
MPI_UNSIGNED_SHORT     unsigned short int
MPI_UNSIGNED           unsigned int
MPI_UNSIGNED_LONG      unsigned long int
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_LONG_DOUBLE        long double
MPI_BYTE
MPI_PACKED

MPI C++ Data Types

MPI C++ data type      C++ data type
MPI::CHAR              signed char
MPI::SHORT             signed short int
MPI::INT               signed int
MPI::LONG              signed long int
MPI::UNSIGNED_CHAR     unsigned char
MPI::UNSIGNED_SHORT    unsigned short int
MPI::UNSIGNED          unsigned int
MPI::UNSIGNED_LONG     unsigned long int
MPI::FLOAT             float
MPI::DOUBLE            double
MPI::LONG_DOUBLE       long double
MPI::BYTE
MPI::PACKED

MPI Basic Functions
MPI_Init – set up an MPI program.
MPI_Comm_size – get the number of processes participating in the program.
MPI_Comm_rank – determine which of those processes corresponds to the one calling the function.
MPI_Send – send messages.
MPI_Recv – receive messages.
MPI_Finalize – stop participating in a parallel program.

MPI Basic Functions
MPI_Init(int *argc, char ***argv)
– Takes the command line arguments to a program,
– checks for any MPI options, and
– passes remaining command line arguments to the main program.
MPI_Comm_size(MPI_Comm comm, int *size)
– Determines the size of a given MPI communicator.
– A communicator is a set of processes that work together.
– For typical programs this is the default MPI_COMM_WORLD, which is the communicator for all processes available to an MPI program.

MPI Basic Functions
MPI_Comm_rank(MPI_Comm comm, int *rank)
– Determines the rank of the current process within a communicator.
– Typically, if an MPI program is being run on N processes, the communicator would be MPI_COMM_WORLD, and the rank would be an integer from 0 to N-1.
MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
– Sends the contents of buf, which contains count elements of type datatype, to the process of rank dest in the communicator comm, flagged with the message tag. Typically, the communicator is MPI_COMM_WORLD.

MPI Basic Functions
MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
– Reads into buf, which can hold count elements of type datatype, a message sent from process source in communicator comm and flagged with tag. Also receives information about the transfer into status.
MPI_Finalize()
– Handles anything that the current MPI implementation needs to do before exiting a program.
– Typically should be the final or near-final line of a program.
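
A minimal sketch (not from the original slides) tying the six basic functions together: rank 0 sends one integer to rank 1, which receives and prints it. The variable names, the tag value 0, and the value 42 are illustrative; run with at least two processes.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                          /* data to send (illustrative) */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d from process 0\n", value);
    }

    MPI_Finalize();
    return 0;
}

Compile and run as before, for example mpicc -o sendrecv sendrecv.c followed by mpirun -np 2 sendrecv (file name illustrative).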

MPI Functions
MPI_Bcast – Broadcasts a message from the process with rank "root" to all other processes of the group.
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
Input/output parameters:
– buffer: starting address of buffer (choice)
– count: number of entries in buffer (integer)
– datatype: data type of buffer (handle)
– root: rank of broadcast root (integer)
– comm: communicator (handle)
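
A small usage sketch, assumed rather than taken from the slides: the root (rank 0) sets a value and MPI_Bcast copies it into the same variable on every process in MPI_COMM_WORLD. The variable n and the value 100 are illustrative.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;                      /* only the root has the value initially */

    /* after this call, every process has n == 100 */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d has n = %d\n", rank, n);

    MPI_Finalize();
    return 0;
}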

MPI Functions
MPI_Reduce – Reduces values on all processes to a single value.
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
Input/output parameters:
– sendbuf: address of send buffer (choice)
– recvbuf: address of receive buffer, significant only at root (choice)
– count: number of elements in send buffer (integer)
– datatype: data type of elements of send buffer (handle)
– op: reduce operation (handle)
– root: rank of root process (integer)
– comm: communicator (handle)
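
A hedged usage sketch (illustrative, not from the slides): every process contributes its rank, and the sum arrives only in the receive buffer on the root (rank 0).

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, size, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each process contributes its rank; the result is defined only on root 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, sum);

    MPI_Finalize();
    return 0;
}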

MPI Reduce Operations

MPI name       Operation
MPI_MAX        Maximum
MPI_MIN        Minimum
MPI_PROD       Product
MPI_SUM        Sum
MPI_LAND       Logical and
MPI_LOR        Logical or
MPI_LXOR       Logical exclusive or (xor)
MPI_BAND       Bitwise and
MPI_BOR        Bitwise or
MPI_BXOR       Bitwise xor
MPI_MAXLOC     Maximum value and location
MPI_MINLOC     Minimum value and location

MPI Example Programs
Master/worker (greeting.c, master-worker.c, ring.c) – A master process waits for the messages sent by the other (worker) processes.
Integration (integrate.c) – Evaluates the trapezoidal rule estimate for an integral of F(x).
π calculation (pi.c) – Each process calculates part of π, and the partial results are summed in the master process (a sketch of this pattern follows below).
Matrix multiplication (matrix-multiply.c, matrix.c) – Each worker process calculates some rows of the matrix product and sends the result back to the master process.
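
As a sketch of the π pattern mentioned above, here is one common way such a program is structured; the actual pi.c may differ. Each process evaluates 4/(1 + x^2) at every size-th midpoint of [0, 1], and the partial sums are combined on the master with MPI_Reduce. The interval count n is illustrative.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, size, i, n = 1000000;          /* number of intervals (illustrative) */
    double h, x, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / (double) n;
    /* midpoint rule: each process handles every size-th interval */
    for (i = rank; i < n; i += size) {
        x = h * ((double) i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* sum the partial results on the master process (rank 0) */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}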

MPI Example Programs
Exponential (exponent.c) – Evaluates e for a series of cases. This is essentially the sum of products of (1 - exp()).
Statistics (statistics.c) – Tabulates statistics of a set of test scores.
Tridiagonal system (tridiagonal.c) – Solves the tridiagonal system Tx = y.