MA471 Fall 2002, Lecture 5
More Point-to-Point Communications in MPI
Note: so far we have covered
– MPI_Init, MPI_Finalize
– MPI_Comm_size, MPI_Comm_rank
– MPI_Send, MPI_Recv
– MPI_Barrier
Only MPI_Send and MPI_Recv truly communicate messages. These are "point-to-point" communications, i.e. process-to-process communication.
MPI_Isend
Unlike MPI_Send, MPI_Isend does not wait for the output buffer to be free for further use before returning. This mode of operation is known as "non-blocking".
http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Isend.html
MPI_Isend details
MPI_Isend: Begins a nonblocking send

Synopsis:
  int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest,
                int tag, MPI_Comm comm, MPI_Request *request)

Input parameters:
  buf       initial address of send buffer (choice)
  count     number of elements in send buffer (integer)
  datatype  datatype of each send buffer element (handle)
  dest      rank of destination (integer)
  tag       message tag (integer)
  comm      communicator (handle)

Output parameter:
  request   communication request (handle)

http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Isend.html
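As a minimal sketch of posting a nonblocking send (the buffer, count, destination and tag values below are made up for illustration, not taken from the lecture code):

  MPI_Request request;
  double work[256];          /* data to ship out (illustrative) */
  int dest = 0, tag = 101;   /* destination rank and message tag (illustrative) */

  /* returns immediately; the send is only started here, not completed */
  MPI_Isend(work, 256, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &request);

  /* 'work' must not be reused until MPI_Wait reports the send complete */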
MPI_Isend analogy
Analogy time… Isend is like calling the mailperson to take a letter away and receiving a tracking number. You don't know if the letter is gone until you check your mailbox (i.e. check online with the tracking number). When you know the letter is gone you can use the letterbox again… (a strained analogy).
MPI_Irecv
Posts a non-blocking receive request. This routine exits without necessarily completing the message receive. We use MPI_Wait to see if the requested message is in.
MPI_Irecv details
MPI_Irecv: Begins a nonblocking receive

Synopsis:
  int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source,
                int tag, MPI_Comm comm, MPI_Request *request)

Input parameters:
  buf       initial address of receive buffer (choice)
  count     number of elements in receive buffer (integer)
  datatype  datatype of each receive buffer element (handle)
  source    rank of source (integer)
  tag       message tag (integer)
  comm      communicator (handle)

Output parameter:
  request   communication request (handle)

http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Irecv.html
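A minimal sketch of posting a receive ahead of the matching send (again, the buffer, source and tag values are illustrative only):

  MPI_Request request;
  int inbox[256];            /* receive buffer; must stay available until the wait */
  int source = 1, tag = 101; /* expected sender and tag (illustrative) */

  /* post the receive now; it will be matched whenever the message arrives */
  MPI_Irecv(inbox, 256, MPI_INT, source, tag, MPI_COMM_WORLD, &request);

  /* ... later: MPI_Wait(&request, &status) before reading inbox ... */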
MPI_Irecv analogy
Analogy time… Irecv is like telling the mailbox to anticipate the delivery of a letter. You don't know if the letter has arrived until you check your mailbox (i.e. check online with the tracking number). When you know the letter is here you can open it and read it.
MPI_Wait
Waits for a requested MPI operation to complete.
MPI_Wait details
MPI_Wait: Waits for an MPI send or receive to complete

Synopsis:
  int MPI_Wait(MPI_Request *request, MPI_Status *status)

Input parameter:
  request   request (handle)

Output parameter:
  status    status object (Status); may be MPI_STATUS_NULL.

http://www-unix.mcs.anl.gov/mpi/www/www3/MPI_Wait.html
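A small sketch of completing a nonblocking receive with MPI_Wait and then inspecting the returned status; the buffer and variable names here are illustrative:

  MPI_Request request;
  MPI_Status  status;
  char recvbuf[64];
  int  nreceived;

  MPI_Irecv(recvbuf, 64, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &request);

  /* ... other work ... */

  /* blocks until the message is actually in recvbuf */
  MPI_Wait(&request, &status);

  /* the status records who sent it, the tag used, and (via MPI_Get_count) its length */
  MPI_Get_count(&status, MPI_CHAR, &nreceived);
  fprintf(stdout, "got %d chars from process %d (tag %d)\n",
          nreceived, status.MPI_SOURCE, status.MPI_TAG);

MPI_ANY_SOURCE and MPI_ANY_TAG are used here only to show why the status is useful; with a specific source and tag you often already know these values.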
Example: Isend, Irecv, Wait Sequence
MPI_Request ISENDrequest;
MPI_Status  ISENDstatus;
MPI_Request IRECVrequest;
MPI_Status  IRECVstatus;

char *bufout = strdup("Hello");
int bufoutlength = strlen(bufout);
int bufinlength  = bufoutlength; /* for convenience */
char *bufin = (char*) calloc(bufinlength, sizeof(char));

int TOprocess   = (Nprocs-1) - procID;
int TOtag       = 10000*procID + TOprocess;
int FROMprocess = (Nprocs-1) - procID;
int FROMtag     = 10000*FROMprocess + procID;

fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

/* post the nonblocking send and receive; both calls return immediately */
info = MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag, MPI_COMM_WORLD, &ISENDrequest);
info = MPI_Irecv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag, MPI_COMM_WORLD, &IRECVrequest);

fprintf(stdout, "Process %d just about to wait for requests to finish\n", procID);

/* wait for both operations to complete */
MPI_Wait(&IRECVrequest, &IRECVstatus);
MPI_Wait(&ISENDrequest, &ISENDstatus);

fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);

The wait on the send request is a courtesy: it makes sure the message has gone out before we proceed to finalize.
Profiling Your Code Using Upshot
With these parallel codes it can be difficult to foresee every way the code can behave. In the following we will see Upshot in action. Upshot is part of the MPI release (for the most part).
Example 1: Profiling MPI_Send and MPI_Recv
Instructions For Using Upshot
Add -mpilog to the compile flags.
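With the MPE-enabled MPICH compiler wrappers this usually amounts to a compile line along the lines of mpicc -mpilog -o myprog myprog.c (program name illustrative); the -mpilog option links in the MPE logging library so that a run writes out a .clog log file. The Makefile.mpeBB used in the steps below is assumed to take care of this.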
Clean and Recompile
ON BLACKBEAR (BB):
1) cp -r ~cs471aa/MA471Lec5 ~/
2) cd ~/MA471Lec5
3) make -f Makefile.mpeBB clean
4) make -f Makefile.mpeBB
5) qsub MPIcommuning
6) % use 'qstat' to make sure the run has finished
7) clog2alog MPIcommuning
8) % make sure that a file MPIcommuning.alog has been created
9) % set up an X server on your current PC
10) upshot MPIcommuning.alog
/* initiate MPI */
int info = MPI_Init(&argc, &argv);

/* NOW we can do stuff */
int Nprocs, procID;

/* find the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &Nprocs);

/* find the unique identity of this process */
MPI_Comm_rank(MPI_COMM_WORLD, &procID);

/* insist that all processes have to go through this routine before the next commands */
info = MPI_Barrier(MPI_COMM_WORLD);

/* test a send and recv pair of operations */
{
  MPI_Status recvSTATUS;

  char *bufout = strdup("Hello");
  int bufoutlength = strlen(bufout);
  int bufinlength  = bufoutlength; /* for convenience */
  char *bufin = (char*) calloc(bufinlength, sizeof(char));

  int TOprocess   = (Nprocs-1) - procID;
  int TOtag       = 10000*procID + TOprocess;
  int FROMprocess = (Nprocs-1) - procID;
  int FROMtag     = 10000*FROMprocess + procID;

  fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

  info = MPI_Send(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag, MPI_COMM_WORLD);
  info = MPI_Recv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag, MPI_COMM_WORLD, &recvSTATUS);

  fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);
}

info = MPI_Finalize();
Results Viewed In Upshot
Click on "Setup".
The Main Upshot Viewer
This should appear after pressing "Setup".
Time History
The horizontal axis is physical time, running left to right.
Time History
Each MPI call is color-coded on each process.
Zoom in on Profile
(1) Process 1 sends message to process 4
(2) Process 4 receives message from process 1
Zoom in on Profile
(1) Process 2 sends message to process 3
(2) Process 3 receives message from process 2
Observations
Example 2: Profiling MPI_Isend and MPI_Irecv
MPI_Request ISENDrequest;
MPI_Status  ISENDstatus;
MPI_Request IRECVrequest;
MPI_Status  IRECVstatus;

char *bufout = strdup("Hello");
int bufoutlength = strlen(bufout);
int bufinlength  = bufoutlength; /* for convenience */
char *bufin = (char*) calloc(bufinlength, sizeof(char));

int TOprocess   = (Nprocs-1) - procID;
int TOtag       = 10000*procID + TOprocess;
int FROMprocess = (Nprocs-1) - procID;
int FROMtag     = 10000*FROMprocess + procID;

fprintf(stdout, "Sending: %s To process %d \n", bufout, TOprocess);

/* post the nonblocking send and receive; both calls return immediately */
info = MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess, TOtag, MPI_COMM_WORLD, &ISENDrequest);
info = MPI_Irecv(bufin, bufinlength, MPI_CHAR, FROMprocess, FROMtag, MPI_COMM_WORLD, &IRECVrequest);

fprintf(stdout, "Process %d just about to wait for requests to finish\n", procID);

/* wait for both operations to complete */
MPI_Wait(&IRECVrequest, &IRECVstatus);
MPI_Wait(&ISENDrequest, &ISENDstatus);

fprintf(stdout, "Received: %s\n From process: %d\n", bufin, FROMprocess);
Profile for the Isend, Irecv, Wait sequence
Notice: before I called Wait the process could have done a bunch of operations, i.e. it could avoid all that wasted compute time while the message is in transit!
Notice that not much time is spent in Irecv.
With Work Between (Isend, Irecv) and Wait
The neat point here is that while the message was in transit the process could get on and do some computations…
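A minimal sketch of this pattern, reusing the buffers and tags from the earlier listing; the do_local_work() call is a stand-in (not part of the lecture code) for whatever computation the process can perform while the message is in flight:

  MPI_Request sendrequest, recvrequest;
  MPI_Status  sendstatus,  recvstatus;

  /* post the communication first; both calls return immediately */
  MPI_Isend(bufout, bufoutlength, MPI_CHAR, TOprocess,   TOtag,   MPI_COMM_WORLD, &sendrequest);
  MPI_Irecv(bufin,  bufinlength,  MPI_CHAR, FROMprocess, FROMtag, MPI_COMM_WORLD, &recvrequest);

  /* do useful work that does not touch bufout or bufin while the message is in transit */
  do_local_work();

  /* only now block until the communication has completed */
  MPI_Wait(&recvrequest, &recvstatus);
  MPI_Wait(&sendrequest, &sendstatus);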
Close up of the Isends and Irecvs in the profile.
Lab Activity
We will continue with the parallel implementation of your card games. Use Upshot to profile your code's parallel activity and include this in your presentations and reports. Anyone ready to report yet?
Next Lecture
– Global MPI communication routines
– Building a simple finite element solver for Poisson's equation
– Making the Poisson solver parallel
– …