1
Distributed Memory Programming with MPI
2
What is MPI?
Message Passing Interface (MPI) is an industry-standard message-passing system designed to be both flexible and portable.
MPI is a library
o All operations are performed with routine calls found in mpi.h
MPI jobs consist of running copies of the same program in multiple processes.
MPI can be difficult
o Deadlock
o Data race
o Non-determinism
3
Gathering Essential Information
MPI_Comm_size
o Reports the number of processes participating in this job.
o A positive integer; it will not change for the duration of the MPI job.
MPI_Comm_rank
o Reports the rank of the calling process. This will be used as an identifier later on.
o In the range 0 to size-1.
4
Hello World!
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("I am %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
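Compiled with mpicc (see slide 5) and launched with, for example, mpirun -np 4 ./hello, each of the four processes prints its own "I am ... of 4" line; the order of the lines is not guaranteed, a first taste of the non-determinism mentioned on slide 2.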
5
Running an MPI job
SSH into the head node (via flip).
o ssh <username>@sonic-1.eecs.oregonstate.edu -p 908
Generate a public/private keypair.
o ssh-keygen -t rsa
  set path to /home/users/<username>/.ssh/id_rsa
  leave the password blank
o cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
Create a list of the names of all the hosts in the cluster (separated by newlines) and write it to a file called hostnames.
o adam01
o adam02
o ...
o adam14
Execute a normal command across all hosts with mpirun.
o mpirun -hostfile hostnames -np 14 hostname
o Each host should return its name.
o The first time you run this you will have to accept each connection by typing "yes".
Compile with 'mpicc' instead of 'gcc'.
6
MPI Datatypes
MPI_CHAR     signed char
MPI_INT      signed int
MPI_FLOAT    float
MPI_DOUBLE   double
MPI_PACKED   data packed or unpacked with MPI_Pack()/MPI_Unpack()
Messages are described by address, count, and datatype.
There are MPI functions to pack non-contiguous data.
o Try to avoid them.
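A minimal sketch of the (address, count, datatype) description, with illustrative values not taken from the slides: rank 0 ships four contiguous doubles to rank 1.

#include <mpi.h>
#include <stdio.h>

/* Sketch: a message is described by (address, count, datatype).
   Run with at least 2 processes; extra ranks simply do nothing. */
int main(int argc, char **argv) {
    int rank;
    double v[4] = { 1.0, 2.0, 3.0, 4.0 };
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(v, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(v, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Received %g %g %g %g\n", v[0], v[1], v[2], v[3]);
    }
    MPI_Finalize();
    return 0;
}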
7
MPI Tags
Messages are sent with an accompanying user-defined integer tag, which helps the receiving end identify them.
A receiver can screen messages by specifying a specific tag, or it can specify MPI_ANY_TAG to disregard the tag field.
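As a sketch (a fragment assuming the usual MPI_Init/MPI_Comm_rank setup from slide 4; the tag values 10 and 20 are illustrative): the receiver accepts any tag, then inspects which one arrived via the status object.

/* Rank 0 sends two differently-tagged messages; rank 1 accepts either. */
if (rank == 0) {
    int a = 1, b = 2;
    MPI_Send(&a, 1, MPI_INT, 1, 10, MPI_COMM_WORLD);  /* tag 10 */
    MPI_Send(&b, 1, MPI_INT, 1, 20, MPI_COMM_WORLD);  /* tag 20 */
} else if (rank == 1) {
    int buf;
    MPI_Status status;
    MPI_Recv(&buf, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    printf("Got %d with tag %d\n", buf, status.MPI_TAG);  /* tag that matched */
}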
8
(Blocking) Point-to-Point Messages
MPI_Send(buffer, count, datatype, dest, tag, comm)
o Target process is specified by dest (rank within comm).
o When the function returns, the buffer can be reused, but the message may not have been received by the target process.
MPI_Recv(buffer, count, datatype, source, tag, comm, status)
o Blocks until a matching message is received.
o Receiving fewer than count elements is OK, but receiving more is an error.
o status contains additional information, such as the size of the message.
9
A Basic Example
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, buf;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Process 0 sends and Process 1 receives */
    if (rank == 0) {
        buf = 1234;
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Received %d\n", buf);
    }

    MPI_Finalize();
    return 0;
}
10
Collective Operations
Called by all processes involved in the MPI job.
MPI_Bcast
o Distributes data from one process to all others.
MPI_Reduce
o Combines data from all processes with a reducing operation and returns the result to one process.
o Operations include: MPI_MAX, MPI_PROD, MPI_SUM, ...
In some situations Send/Recv can be replaced by collective operations, improving simplicity as well as efficiency.
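A minimal sketch of both calls (the broadcast value 42 and the choice of summing ranks are illustrative): rank 0 broadcasts a parameter to everyone, then a sum over all ranks is reduced back to rank 0.

#include <mpi.h>
#include <stdio.h>

/* Sketch: every process calls the same collectives, in the same order. */
int main(int argc, char **argv) {
    int rank, n = 0, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) n = 42;                        /* only the root sets n  */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD); /* now all ranks see 42  */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("n = %d, sum of ranks = %d\n", n, sum);
    MPI_Finalize();
    return 0;
}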
11
Buffers & Deadlocking
Sent messages are stored in a system buffer on the receiving end.
o If there is insufficient storage at the destination, the sender must wait for the user to provide the memory space (i.e., post a matching receive).
MPI does not guarantee buffering; memory is a finite resource. The sketch below shows how this can deadlock.
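A sketch of the classic failure mode (a and b stand for buffers of N doubles; the fragment assumes the usual rank setup): if two ranks each call MPI_Send to the other before either posts a receive, and the messages are too large for the system to buffer, both block forever.

/* Potential deadlock: both ranks send first, then receive. */
if (rank == 0) {
    MPI_Send(a, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    MPI_Recv(b, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else if (rank == 1) {
    MPI_Send(a, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    MPI_Recv(b, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
/* One fix: reverse the order on one rank (recv first, then send).
   Another: the non-blocking operations on the next slide. */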
12
One Solution
Non-blocking operations return immediately and provide a request variable which can be tested and waited on.

MPI_Request request;
MPI_Status status;

MPI_Isend(start, count, datatype, dest, tag, comm, &request);
MPI_Irecv(start, count, datatype, source, tag, comm, &request);

MPI_Wait(&request, &status);
or
MPI_Test(&request, &flag, &status);
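A minimal runnable sketch of the pattern (buffer contents and tag are illustrative): each rank posts its receive, starts a non-blocking send, and then waits for both requests, so the exchange from the previous slide cannot deadlock.

#include <mpi.h>
#include <stdio.h>

/* Sketch: two ranks exchange one int each without deadlocking. */
int main(int argc, char **argv) {
    int rank, size, sendbuf, recvbuf;
    MPI_Request req[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size == 2) {                  /* pattern shown for exactly 2 ranks */
        int other = 1 - rank;
        sendbuf = rank;
        MPI_Irecv(&recvbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
        printf("Rank %d received %d\n", rank, recvbuf);
    }
    MPI_Finalize();
    return 0;
}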
13
Questions?
email: strassek@onid.orst.edu

References
http://www.open-mpi.org/
http://www.mpi-forum.org
http://www.mcs.anl.gov/research/projects/mpi/