Reference: / MPI Program Structure

Topics
This chapter introduces the basic structure of an MPI program. After sketching this structure using generic pseudo-code, specific program elements are described in detail for C. These include:
–Header files
–MPI naming conventions
–MPI routines and return values
–MPI handles
–MPI data types
–Initializing and terminating MPI
–Communicators
–Getting communicator information: rank and size

Reference: / Generic MPI Program

A Generic MPI Program
All MPI programs have the following general structure:

    include MPI header file
    variable declarations
    initialize the MPI environment
    ...do computation and MPI communication calls...
    close MPI communications

General MPI Program Structure

#include <mpi.h>                            /* MPI include file */

int main (int argc, char *argv[])
{
    int np, rank, ierr;                     /* variable declarations */

    ierr = MPI_Init(&argc, &argv);          /* initialize the MPI environment */

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    /* do work and make message passing calls */

    ierr = MPI_Finalize();                  /* terminate the MPI environment */
    return 0;
}

A Generic MPI Program
The MPI header file contains MPI-specific definitions and function prototypes. Then, following the variable declarations, each process calls an MPI routine that initializes the message passing environment. All calls to MPI communication routines must come after this initialization. Finally, before the program ends, each process must call a routine that terminates MPI. No MPI routines may be called after the termination routine is called. Note that if any process does not reach this point during execution, the program will appear to hang.

Reference: / MPI Header Files

MPI header files contain the prototypes for MPI functions/subroutines, as well as definitions of macros, special constants, and data types used by MPI. An appropriate "include" statement must appear in any source file that contains MPI function calls or constants:

#include <mpi.h>

Reference: / MPI Naming Conventions

The names of all MPI entities (routines, constants, types, etc.) begin with MPI_ to avoid conflicts.
–C function names have mixed case: MPI_Xxxxx(parameter, ...). Example: MPI_Init(&argc, &argv).
–The names of MPI constants are all upper case in both C and Fortran, for example, MPI_COMM_WORLD, MPI_REAL, ...
–In C, specially defined types correspond to many MPI entities. (In Fortran these are all integers.) Type names follow the C function naming convention above; for example, MPI_Comm is the type corresponding to an MPI "communicator".
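The short fragment below simply places these conventions side by side: a mixed-case routine name, an upper-case constant, and a typedef'd MPI type. It uses only names introduced in this chapter; the variable name comm is arbitrary.

#include <mpi.h>

int main (int argc, char *argv[])
{
    MPI_Comm comm;                  /* MPI_Comm: a specially defined type      */

    MPI_Init(&argc, &argv);         /* MPI_Init: mixed-case routine name       */
    comm = MPI_COMM_WORLD;          /* MPI_COMM_WORLD: upper-case constant     */

    MPI_Finalize();
    return 0;
}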

Reference: / MPI Routines and Return Values

MPI routines are implemented as functions in C. Generally, an error code is returned, enabling you to test for the successful operation of the routine. In C, MPI functions return an int, which indicates the exit status of the call.

int ierr;
...
ierr = MPI_Init(&argc, &argv);
...

MPI Routines and Return Values
The error code returned is MPI_SUCCESS if the routine ran successfully (that is, the integer returned is equal to the pre-defined integer constant MPI_SUCCESS). Thus, you can test for successful operation with

if (ierr == MPI_SUCCESS) {
    ...routine ran correctly...
}

If an error occurred, then the integer returned has an implementation-dependent value indicating the specific error.
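As a minimal sketch of this pattern (assuming the program simply aborts if initialization fails; the message text is illustrative only):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main (int argc, char *argv[])
{
    int ierr = MPI_Init(&argc, &argv);
    if (ierr != MPI_SUCCESS) {
        /* the specific value of ierr is implementation-dependent */
        fprintf(stderr, "MPI_Init failed with error code %d\n", ierr);
        exit(1);
    }

    /* ...computation and MPI communication calls... */

    ierr = MPI_Finalize();
    return (ierr == MPI_SUCCESS) ? 0 : 1;
}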

Reference: / MPI Handles

MPI defines and maintains its own internal data structures related to communication, etc. You reference these data structures through handles. Handles are returned by various MPI calls and may be used as arguments in other MPI calls.
In C, handles are pointers to specially defined datatypes (created via the C typedef mechanism). Arrays are indexed starting at 0.
Examples:
–MPI_SUCCESS - An integer. Used to test error codes.
–MPI_COMM_WORLD - In C, an object of type MPI_Comm (a "communicator"); it represents a pre-defined communicator consisting of all processors.
Handles may be copied using the standard assignment operation.
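As an illustration of copying a handle with ordinary assignment (the variable name my_comm is just a placeholder):

#include <mpi.h>

int main (int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);

    MPI_Comm my_comm = MPI_COMM_WORLD;   /* copy a pre-defined handle by assignment   */
    MPI_Comm_rank(my_comm, &rank);       /* the copy refers to the same communicator  */

    MPI_Finalize();
    return 0;
}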

Reference: / MPI Datatypes

MPI provides its own reference data types corresponding to the various elementary data types in C. MPI allows automatic translation between representations in a heterogeneous environment. As a general rule, the MPI datatype given in a receive must match the MPI datatype specified in the send. In addition, MPI allows you to define arbitrary data types built from the basic types.
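As a sketch of the matching rule, both the send and the receive below describe the buffer with MPI_INT. (MPI_Send and MPI_Recv themselves are covered in a later chapter; this example assumes at least two processes.)

#include <mpi.h>

int main (int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* send one int, described as MPI_INT, to rank 1 with tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the receive must describe the buffer with the same MPI datatype */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}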

Reference: / Basic MPI Datatypes - C

Basic MPI Data Types

MPI Datatype           C Type
MPI_CHAR               signed char
MPI_SHORT              signed short int
MPI_INT                signed int
MPI_LONG               signed long int
MPI_UNSIGNED_CHAR      unsigned char
MPI_UNSIGNED_SHORT     unsigned short int

Basic MPI Data Types (continued)

MPI Datatype           C Type
MPI_UNSIGNED           unsigned int
MPI_UNSIGNED_LONG      unsigned long int
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_LONG_DOUBLE        long double
MPI_BYTE               (none)
MPI_PACKED             (none)

Reference: / Special MPI Datatypes (C)

In C, MPI provides several special datatypes (structures). Examples include
–MPI_Comm - a communicator
–MPI_Status - a structure containing several pieces of status information for MPI calls
–MPI_Datatype - an MPI datatype
These are used in variable declarations, for example,

MPI_Comm some_comm;

declares a variable called some_comm, which is of type MPI_Comm (i.e. a communicator).
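For illustration, each of these special types appears in an ordinary C declaration below; the variable names are arbitrary, and MPI_Status values are only filled in by calls such as MPI_Recv (covered later).

#include <mpi.h>

int main (int argc, char *argv[])
{
    MPI_Comm     comm;       /* a communicator handle                      */
    MPI_Status   status;     /* status information, filled in by receives  */
    MPI_Datatype type;       /* describes the type of message data         */

    MPI_Init(&argc, &argv);

    comm = MPI_COMM_WORLD;   /* start from the pre-defined communicator    */
    type = MPI_INT;          /* one of the basic MPI datatypes             */
    (void) status;           /* unused in this sketch                      */

    MPI_Finalize();
    return 0;
}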

Reference: / Initializing MPI

The first MPI routine called in any MPI program must be the initialization routine MPI_INIT. This routine establishes the MPI environment, returning an error code if there is a problem.

int ierr;
...
ierr = MPI_Init(&argc, &argv);

Note that the arguments to MPI_Init are the addresses of argc and argv, the variables that contain the command-line arguments for the program.

Reference: / Communicators

Communicators
A communicator is a handle representing a group of processors that can communicate with one another.
The communicator name is required as an argument to all point-to-point and collective operations.
–The communicator specified in the send and receive calls must agree for communication to take place.
–Processors can communicate only if they share a communicator.
[Figure: a communicator containing a source processor and a destination processor]

Communicators
There can be many communicators, and a given processor can be a member of a number of different communicators. Within each communicator, processors are numbered consecutively (starting at 0). This identifying number is known as the rank of the processor in that communicator.
–The rank is also used to specify the source and destination in send and receive calls.
–If a processor belongs to more than one communicator, its rank in each can (and usually will) be different!

Communicators
MPI automatically provides a basic communicator called MPI_COMM_WORLD. It is the communicator consisting of all processors. Using MPI_COMM_WORLD, every processor can communicate with every other processor.
You can also define additional communicators consisting of subsets of the available processors (a sketch follows the figure below).
[Figure: MPI_COMM_WORLD containing all 7 processors, with Comm1 (4 processors) and Comm2 (3 processors) defined over subsets of them]
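Defining new communicators is beyond the scope of this chapter, but as a rough sketch, one common approach is MPI_Comm_split, which partitions an existing communicator according to a "color" value:

#include <mpi.h>

int main (int argc, char *argv[])
{
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* processes that pass the same color end up in the same new communicator;
       here the even and odd ranks form two separate groups */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    MPI_Comm_rank(sub_comm, &sub_rank);   /* this rank is relative to the new communicator */

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}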

Getting Communicator Information: Rank
A processor can determine its rank in a communicator with a call to MPI_COMM_RANK.
–Remember: ranks are consecutive and start with 0.
–A given processor may have different ranks in the various communicators to which it belongs.

int MPI_Comm_rank(MPI_Comm comm, int *rank);

–The argument comm is a variable of type MPI_Comm, a communicator. For example, you could use MPI_COMM_WORLD here. Alternatively, you could pass the name of another communicator you have defined elsewhere. Such a variable would be declared as MPI_Comm some_comm;
–Note that the second argument is the address of the integer variable rank.

Getting Communicator Information: Size
A processor can also determine the size, or number of processors, of any communicator to which it belongs with a call to MPI_COMM_SIZE.

int MPI_Comm_size(MPI_Comm comm, int *size);

–The argument comm is of type MPI_Comm, a communicator.
–Note that the second argument is the address of the integer variable size.
For the communicators pictured in the earlier figure:

MPI_Comm_size(MPI_COMM_WORLD, &size);   /* size  = 7 */
MPI_Comm_size(Comm1, &size1);           /* size1 = 4 */
MPI_Comm_size(Comm2, &size2);           /* size2 = 3 */

Reference: / Terminating MPI

The last MPI routine called should be MPI_FINALIZE, which
–cleans up all MPI data structures, cancels operations that never completed, etc.
–must be called by all processes; if any one process does not reach this statement, the program will appear to hang.
Once MPI_FINALIZE has been called, no other MPI routines (including MPI_INIT) may be called.

int ierr;
...
ierr = MPI_Finalize();

Reference: / Hello World!

Sample Program: Hello World!
In this modified version of the "Hello World" program, each processor prints its rank as well as the total number of processors in the communicator MPI_COMM_WORLD.
Notes:
–Makes use of the pre-defined communicator MPI_COMM_WORLD.
–Does not test the error status of routines!

Sample Program: Hello World!

#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[])
{
    int myrank, size;

    MPI_Init(&argc, &argv);                    /* Initialize MPI */

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);    /* Get my rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* Get the total number of processors */

    printf("Processor %d of %d: Hello World!\n", myrank, size);

    MPI_Finalize();                            /* Terminate MPI */
    return 0;
}

Reference: / Sample Program Output

Sample Program: Output
Running this code on four processors will produce a result like:

Processor 2 of 4: Hello World!
Processor 1 of 4: Hello World!
Processor 3 of 4: Hello World!
Processor 0 of 4: Hello World!

Each processor executes the same code, including probing for its rank and size and printing the string. The order of the printed lines is essentially random!
–There is no intrinsic synchronization of operations on different processors.
–Each time the code is run, the order of the output lines may change.

Reference: / END