1
An Introduction to MPI (message passing interface)
2
Organization
In general, grid apps can be organized as:
  Peer-to-peer
  Manager-worker (one manager, many workers)
We will focus on manager-worker.
3
Concepts
MPI size = # of processes in the grid app
MPI rank = individual process number in the executing grid app: 0..size-1
In the manager-worker framework, let the manager rank = 0 and the worker ranks be 1..size-1.
Each individual process can determine its rank.
4
More concepts
Blocking vs. nonblocking:
  Blocking = the calling process waits (blocks) until the operation completes.
  Nonblocking = the calling process does not wait (block); it initiates the operation but does not wait for completion.
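As an illustration (a sketch, not from these slides), nonblocking sends pair MPI_Isend with a later MPI_Wait; the payload, tag, and two-process setup below are assumptions:

    //minimal nonblocking-send sketch (assumes at least 2 processes)
    #include <mpi.h>
    #include <stdio.h>

    int main ( int argc, char* argv[] ) {
        int rank, value = 42;  //hypothetical payload
        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        if (rank == 0) {
            MPI_Request request;
            //initiate the send and return immediately (nonblocking)
            MPI_Isend( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request );
            //... other useful work could overlap with the send here ...
            //block only when completion actually matters
            MPI_Wait( &request, MPI_STATUS_IGNORE );
        } else if (rank == 1) {
            //ordinary blocking receive on the other side
            MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
            printf( "received %d. \n", value );
        }
        MPI_Finalize();
        return 0;
    }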
5
Compiling MPI grid apps (on scott)
Don’t use g++ directly!
Use: ~ggrevera/lammpi/bin/mpic++
Ex.
  mpic++ -g  -o mpiExample2.exe mpiExample2.cpp   # debug version
  mpic++ -O3 -o mpiExample2.exe mpiExample2.cpp   # optimized version
6
Starting, running, and stopping grid apps
Before we can run our grid apps, we must first start lam mpi. Enter the command:
  lamboot -v
An optional lamhosts file may be specified to indicate the host computers (along with CPU configurations) that participate in the grid (see the sketch below).
To run our grid app (called mpiExample1.exe), use:
  mpirun -np 4 ./mpiExample1.exe
This creates and runs a 4 process grid app.
When you are finished, stop lam mpi via:
  lamhalt
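For reference, a LAM boot schema (lamhosts) file lists one host per line with an optional CPU count; the host names below are purely hypothetical:

    # hypothetical lamhosts file (host names are made up)
    scott.sju.edu cpu=2
    node1.sju.edu cpu=4
    node2.sju.edu

It would then be passed to lamboot as: lamboot -v lamhosts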
7
Getting started
#include <mpi.h>  //do this once for mpi definitions

int MPI_Init ( int *pargc, char ***pargv );

INPUT PARAMETERS
  pargc - Pointer to the number of arguments
  pargv - Pointer to the argument vector
8
Finish up
int MPI_Finalize ( void );
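Putting MPI_Init and MPI_Finalize together, a minimal MPI program is roughly the following sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main ( int argc, char* argv[] ) {
        MPI_Init( &argc, &argv );  //must precede any other MPI call
        puts( "hello from an MPI process." );
        MPI_Finalize();            //the last MPI call before exit
        return 0;
    }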
9
Other useful MPI functions
int MPI_Comm_rank ( MPI_Comm comm, int *rank );

INPUT PARAMETERS
  comm - communicator (handle)
OUTPUT PARAMETER
  rank - rank of the calling process in group of comm (integer)
10
Other useful MPI functions
int MPI_Comm_size ( MPI_Comm comm, int *psize );

INPUT PARAMETER
  comm - communicator (handle - must be intracommunicator)
OUTPUT PARAMETER
  psize - number of processes in the group of comm (integer)
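A short sketch of how these two calls are typically used together (MPI_COMM_WORLD is the predefined communicator that contains every process in the grid app):

    //inside main, after MPI_Init has been called:
    int rank = 0, size = 0;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );  //this process's number (0..size-1)
    MPI_Comm_size( MPI_COMM_WORLD, &size );  //total number of processes
    printf( "i am process %d of %d. \n", rank, size );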
11
Other useful non MPI functions
#include <unistd.h>

int gethostname ( char *name, size_t len );
12
Other useful non MPI functions
#include <sys/types.h>
#include <unistd.h>

pid_t getpid ( void );
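A small sketch combining both calls, the way Example 1 uses them:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main ( void ) {
        char name[ 1024 ];
        gethostname( name, sizeof( name ) );  //which computer are we on?
        printf( "host=%s, pid=%d. \n", name, (int)getpid() );
        return 0;
    }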
13
Example 1
This program is a skeleton of a parallel MPI application using the one manager/many workers framework.
http://www.sju.edu/~ggrevera/software/csc4035/mpiExample1.cpp
14
Example 1
/** \file    mpiExample1.cpp
    \brief   MPI programming example #1.
    \author  george j. grevera, ph.d.

    This program is a skeleton of a parallel MPI application using the
    one manager/many workers framework.
    <pre>
    compile: mpic++ -g  -o mpiExample1.exe mpiExample1.cpp   # debug version
             mpic++ -O3 -o mpiExample1.exe mpiExample1.cpp   # optimized version
    run    : lamboot -v                       # to start lam mpi
             mpirun -np 4 ./mpiExample1.exe   # run in parallel w/ 4 processes
             lamhalt                          # to stop lam mpi
    </pre>
*/
#include <mpi.h>
#include <stdio.h>
15
Example 1
static char mpiName[ 1024 ];  ///< host computer name
static int  mpiRank;          ///< number of this process (0..n-1)
static int  mpiSize;          ///< total number of processes (n)
static int  myPID;            ///< process id
//----------------------------------------------------------------------
16
Example 1
//----------------------------------------------------------------------
/** \brief main program entry point for example 1. execution begins here.
    \param argc count of command line arguments.
    \param argv array of command line arguments.
    \returns 0 is always returned.
*/
int main ( int argc, char* argv[] ) {  //not const because MPI_Init may change
    if (MPI_Init( &argc, &argv ) != MPI_SUCCESS) {
        //actually, we'll never get here but it is a good idea to check.
        // if MPI_Init fails, mpi will exit with an error message.
        puts( "mpi init failed." );
        return 0;
    }
    //get the name of this computer
    gethostname( mpiName, sizeof( mpiName ) );
    //determine rank
    MPI_Comm_rank( MPI_COMM_WORLD, &mpiRank );
    //determine the total number of processes
    MPI_Comm_size( MPI_COMM_WORLD, &mpiSize );
    //get the process id
    myPID = getpid();
17
printf( "mpi initialized. my rank=%d, size=%d, pid=%d. \n", printf( "mpi initialized. my rank=%d, size=%d, pid=%d. \n", mpiRank, mpiSize, myPID ); mpiRank, mpiSize, myPID ); if (mpiSize<2) { if (mpiSize<2) { puts("this example requires at least 1 manager and 1 worker process."); puts("this example requires at least 1 manager and 1 worker process."); MPI_Finalize(); MPI_Finalize(); return 0; return 0; } if (mpiRank==0)manager(); if (mpiRank==0)manager(); elseworker(); elseworker(); MPI_Finalize(); MPI_Finalize(); return 0; return 0;}//---------------------------------------------------------------------- Example 1
18
//----------------------------------------------------------------------
/** \brief manager code for example 1 */
static void manager ( void ) {
    printf( "manager: my rank=%d, size=%d, pid=%d. \n",
            mpiRank, mpiSize, myPID );
    /** \todo insert manager code here. */
}
//----------------------------------------------------------------------
19
Example 1
//----------------------------------------------------------------------
/** \brief worker code for example 1 */
static void worker ( void ) {
    printf( "worker: my rank=%d, size=%d, pid=%d. \n",
            mpiRank, mpiSize, myPID );
    /** \todo insert worker code here. */
}
//----------------------------------------------------------------------
20
More useful MPI functions
int MPI_Send ( void *buf, int count, MPI_Datatype dtype,
               int dest, int tag, MPI_Comm comm );

INPUT PARAMETERS
  buf   - initial address of send buffer (choice)
  count - number of elements in send buffer (nonnegative integer)
  dtype - datatype of each send buffer element (handle)
  dest  - rank of destination (integer)
  tag   - message tag (integer)
  comm  - communicator (handle)
21
More useful MPI functions
int MPI_Recv ( void *buf, int count, MPI_Datatype dtype,
               int src, int tag, MPI_Comm comm, MPI_Status *stat );

INPUT PARAMETERS
  count - maximum number of elements in receive buffer (integer)
  dtype - datatype of each receive buffer element (handle)
  src   - rank of source (integer)
  tag   - message tag (integer)
  comm  - communicator (handle)
OUTPUT PARAMETERS
  buf   - initial address of receive buffer (choice)
  stat  - status object (Status), which can be the MPI constant
          MPI_STATUS_IGNORE if the return status is not desired
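A minimal round-trip sketch pairing the two calls (assumes at least 2 processes; the payload value and tags are made up): rank 0 sends a value to rank 1, which doubles it and replies, and the status output parameter records the actual source.

    #include <mpi.h>
    #include <stdio.h>

    int main ( int argc, char* argv[] ) {
        int rank, value;
        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        if (rank == 0) {
            value = 17;  //hypothetical payload
            MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
            MPI_Status status;
            MPI_Recv( &value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                      MPI_COMM_WORLD, &status );
            //status records who actually sent the reply and with what tag
            printf( "got %d back from rank %d. \n", value, status.MPI_SOURCE );
        } else if (rank == 1) {
            MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
            value *= 2;
            MPI_Send( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD );
        }
        MPI_Finalize();
        return 0;
    }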
22
Defining messages
struct Message {
    enum {
        OP_WORK,   ///< manager to worker - here's your work assignment
        OP_EXIT,   ///< manager to worker - time to exit
        OP_RESULT  ///< worker to manager - here's the result
    };
    int operation;  ///< one of the above
    /** \todo define operation specific parameters here. */
};

C enums assign successive integers to the given constants/symbols. C structs are like Java or C++ objects with only the data members and without the methods/functions.
23
Example 2
This program is a skeleton of a parallel MPI application using the one manager/many workers framework. The process with an MPI rank of 0 is considered to be the manager; processes with MPI ranks of 1..mpiSize-1 are workers. Messages are defined and are sent from the manager to the workers.
http://www.sju.edu/~ggrevera/software/csc4035/mpiExample2.cpp
24
Example 2
//----------------------------------------------------------------------
/** \brief manager code for example 2. */
static void manager ( void ) {
    printf( "manager: my rank=%d, size=%d, pid=%d. \n",
            mpiRank, mpiSize, myPID );
    /** \todo insert manager code here. */

    //as an example, send an empty work message to each worker
    struct Message m;
    m.operation = m.OP_WORK;
    assert( mpiSize>3 );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 1,
              m.operation, MPI_COMM_WORLD );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 2,
              m.operation, MPI_COMM_WORLD );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 3,
              m.operation, MPI_COMM_WORLD );
}
//----------------------------------------------------------------------
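The three explicit MPI_Send calls above assume exactly three workers. A sketch of a more general version (not in the original example) replaces them with a loop over all worker ranks:

    //hypothetical generalization: one work message per worker (ranks 1..mpiSize-1)
    for (int dest = 1; dest < mpiSize; dest++) {
        MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, dest,
                  m.operation, MPI_COMM_WORLD );
    }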
25
Example 2
//----------------------------------------------------------------------
/** \brief worker code for example 2. */
static void worker ( void ) {
    printf( "worker: my rank=%d, size=%d, pid=%d. \n",
            mpiRank, mpiSize, myPID );
    /** \todo insert worker code here. */

    //as an example, receive a message
    MPI_Status status;
    struct Message m;
    MPI_Recv( &m, sizeof( m ), MPI_UNSIGNED_CHAR,
              MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status );
    printf( "worker %d (%d): received message. \n", mpiRank, myPID );
}
//----------------------------------------------------------------------
26
More useful MPI functions
MPI_Barrier - Blocks until all processes have reached this routine.

int MPI_Barrier ( MPI_Comm comm );

INPUT PARAMETERS
  comm - communicator (handle)
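A minimal sketch of a barrier separating two phases of work (the phases here are just print statements):

    #include <mpi.h>
    #include <stdio.h>

    int main ( int argc, char* argv[] ) {
        int rank;
        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        printf( "process %d: phase 1 done. \n", rank );
        //no process proceeds past this point until all have reached it
        MPI_Barrier( MPI_COMM_WORLD );
        printf( "process %d: starting phase 2. \n", rank );
        MPI_Finalize();
        return 0;
    }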