
1 MPI: the last episode By: Camilo A. Silva

2 Topics
Modularity
Data Types
Buffer issues + Performance issues
Compilation using MPICH2
Other topics: MPI objects, tools for evaluating programs, and multiple program connection

3 Modularity What is a modular design? -The basic idea underlying modular design is to organize a complex system (such as a large program, an electronic circuit, or a mechanical device) as a set of distinct components that can be developed independently and then plugged together.

4 Why is it important?
Programs may need to incorporate multiple parallel algorithms
Large programs can be controlled by using modular designs
Modular design increases reliability and reduces costs

5 Modular design principles
Provide simple interfaces
Ensure that modules hide information
Use appropriate tools

6 Modular design checklist The following design checklist can be used to evaluate the success of a modular design. As usual, each question should be answered in the affirmative.
1. Does the design identify clearly defined modules?
2. Does each module have a clearly defined purpose? (Can you summarize it in one sentence?)
3. Is each module's interface sufficiently abstract that you do not need to think about its implementation in order to understand it? Does it hide its implementation details from other modules?
4. Have you subdivided modules as far as usefully possible?
5. Have you verified that different modules do not replicate functionality?
6. Have you isolated those aspects of the design that are most hardware specific, complex, or otherwise likely to change?

7 Applying modularity in parallel programs Three general forms of modular composition exist in parallel programs: sequential, parallel, and concurrent.

8 Applying modularity using MPI MPI supports modular programming:
–Provides information hiding
–Encapsulates internal communication
Every MPI communication operation specifies a communicator, which:
–Identifies the process group
–Identifies the context

9 Implementing flexibility in communicators In the previous discussions, all communication operations have used the default communicator MPI_COMM_WORLD, which incorporates all processes involved in an MPI computation and defines a default context. There are other functions that add flexibility to the communicator and its context:
–MPI_COMM_DUP
–MPI_COMM_SPLIT
–MPI_INTERCOMM_CREATE
–MPI_COMM_FREE

10 Details of functions
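(The table of function details is an image in the original slide and is not reproduced here. For reference, the standard C prototypes of the four routines named above are listed below; they come from the MPI specification, not from the slide itself.)
   #include <mpi.h>

   int MPI_Comm_dup( MPI_Comm comm, MPI_Comm *newcomm );          /* same group, new context      */
   int MPI_Comm_split( MPI_Comm comm, int color, int key,
                       MPI_Comm *newcomm );                       /* partition a group by color   */
   int MPI_Intercomm_create( MPI_Comm local_comm, int local_leader,
                             MPI_Comm peer_comm, int remote_leader,
                             int tag, MPI_Comm *newintercomm );   /* connect two disjoint groups  */
   int MPI_Comm_free( MPI_Comm *comm );                           /* release a communicator       */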

11 Creating communicators A call of the form MPI_COMM_DUP(comm, newcomm) creates a new communicator newcomm comprising the same processes as comm but with a new context.
   integer comm, newcomm, ierr             ! Handles are integers
   ...
   call MPI_COMM_DUP(comm, newcomm, ierr)  ! Create new context
   call transpose(newcomm, A)              ! Pass to library
   call MPI_COMM_FREE(newcomm, ierr)       ! Free new context

12 Partitioning processes The term parallel composition is used to denote the parallel execution of two or more program components on disjoint sets of processors.
Program 1:
   MPI_Comm comm, newcomm;
   int myid, color;
   MPI_Comm_rank(comm, &myid);
   color = myid%3;
   MPI_Comm_split(comm, color, myid, &newcomm);
Program 2:
   MPI_Comm comm, newcomm;
   int myid, color;
   MPI_Comm_rank(comm, &myid);
   if (myid < 8)       /* Select first 8 processes */
      color = 1;
   else                /* Others are not in group */
      color = MPI_UNDEFINED;
   MPI_Comm_split(comm, color, myid, &newcomm);

13 Communicating between groups
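(The original slide is a figure and is not reproduced here. Below is a minimal sketch, not taken from the slide, of how two disjoint groups can be connected with MPI_INTERCOMM_CREATE; the group sizes, leader ranks, and the tag value 99 are all assumptions made for illustration.)
   /* Sketch: split 8 processes into two groups of 4, then link the groups. */
   MPI_Comm localcomm, intercomm;
   int myid;

   MPI_Comm_rank( MPI_COMM_WORLD, &myid );
   MPI_Comm_split( MPI_COMM_WORLD, myid / 4, myid, &localcomm );

   /* Leaders: world rank 0 in the first group, world rank 4 in the second. */
   MPI_Intercomm_create( localcomm, 0,          /* local comm, local leader    */
                         MPI_COMM_WORLD,        /* peer communicator           */
                         (myid < 4) ? 4 : 0,    /* remote leader's rank        */
                         99, &intercomm );      /* tag, new intercommunicator  */

   /* Sends and receives on intercomm address ranks in the *remote* group. */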

14 Datatypes
CODE 1
   call MPI_TYPE_CONTIGUOUS(10, MPI_REAL, tenrealtype, ierr)
   call MPI_TYPE_COMMIT(tenrealtype, ierr)
   call MPI_SEND(data, 1, tenrealtype, dest, tag,
  $               MPI_COMM_WORLD, ierr)
   call MPI_TYPE_FREE(tenrealtype, ierr)
CODE 2
   float data[1024];
   MPI_Datatype floattype;
   MPI_Type_vector(10, 1, 32, MPI_FLOAT, &floattype);
   MPI_Type_commit(&floattype);
   MPI_Send(data, 1, floattype, dest, tag, MPI_COMM_WORLD);
   MPI_Type_free(&floattype);

15 Heterogeneity MPI datatypes have two main purposes:
–Heterogeneity: communication between processors with different data representations
–Noncontiguous data: structures, vectors with non-unit stride, etc.
Basic datatypes, corresponding to those of the underlying language, are predefined. The user can construct new datatypes at run time; these are called derived datatypes.

16 Datatypes
Elementary:
–Language-defined types (e.g., MPI_INT or MPI_DOUBLE_PRECISION)
Vector:
–Separated by constant "stride"
Contiguous:
–Vector with stride of one
Hvector:
–Vector, with stride in bytes
Indexed:
–Array of indices (for scatter/gather)
Hindexed:
–Indexed, with indices in bytes
Struct:
–General mixed types (for C structs etc.)

17 Vectors To specify one column of a 5x7 array stored in C (row-major) order, we can use
   MPI_Type_vector( count, blocklen, stride, oldtype, &newtype );
   MPI_Type_commit( &newtype );
The exact code for this is
   MPI_Type_vector( 5, 1, 7, MPI_DOUBLE, &newtype );
   MPI_Type_commit( &newtype );
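(As a concrete illustration, not from the original slide: sending one column of a hypothetical 5x7 C array with this datatype could look as follows; dest and tag are assumed to be defined.)
   double A[5][7];
   MPI_Datatype coltype;

   /* 5 blocks of 1 double, 7 doubles apart = one column of a 5x7 row-major array */
   MPI_Type_vector( 5, 1, 7, MPI_DOUBLE, &coltype );
   MPI_Type_commit( &coltype );
   MPI_Send( &A[0][2], 1, coltype, dest, tag, MPI_COMM_WORLD );   /* send column 2 */
   MPI_Type_free( &coltype );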

18 Structures Structures are described by arrays giving, for each block:
–the number of elements (array_of_len)
–the displacement or location (array_of_displs)
–the datatype (array_of_types)
   MPI_Type_struct( count, array_of_len, array_of_displs, array_of_types, &newtype );

19 Structure example
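(The example on this slide is an image and is not reproduced. The fragment below is a substitute sketch describing a small C struct with MPI_Type_struct; the struct and its field names are invented for illustration. In MPI-2 the routine is also available as MPI_Type_create_struct.)
   #include <stddef.h>   /* offsetof */
   #include <mpi.h>

   typedef struct {
       int    kind;       /* hypothetical fields, not from the slide */
       double coord[3];
   } Particle;

   int          blocklens[2] = { 1, 3 };
   MPI_Aint     displs[2]    = { offsetof(Particle, kind),
                                 offsetof(Particle, coord) };
   MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
   MPI_Datatype particletype;

   MPI_Type_struct( 2, blocklens, displs, types, &particletype );
   MPI_Type_commit( &particletype );
   /* ... use particletype in sends/receives, then MPI_Type_free( &particletype ); */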

20 Buffering Issues Where does data go when you send it? One possibility is that it passes through intermediate system buffers on both the sending and the receiving process before it reaches the destination buffer.

21 Better buffering This is not very efficient: there are three copies in addition to the exchange of data between processes. We would prefer the data to move directly from the send buffer into the receive buffer, with no intermediate copies. But this requires either that MPI_Send not return until the data has been delivered, or that we allow a send operation to return before completing the transfer. In the latter case, we need to test for completion later.

22 Blocking + Non-blocking communication So far we have used blocking communication:
–MPI_Send does not complete until the buffer is empty (available for reuse).
–MPI_Recv does not complete until the buffer is full (available for use).
Simple, but can be "unsafe": completion depends in general on the size of the message and the amount of system buffering.

23 Solutions to the "unsafe" problem
–Order the operations more carefully
–Supply the receive buffer at the same time as the send, with MPI_Sendrecv (see the sketch below)
–Use non-blocking operations
–Use MPI_Bsend
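(A minimal sketch of the MPI_Sendrecv variant, not from the original slides; the buffers, count, and the neighbour ranks left and right are assumed to be defined.)
   MPI_Status status;

   /* Send to the right neighbour and receive from the left one in a single
      call, so the library can order the exchange without deadlock. */
   MPI_Sendrecv( sendbuf, count, MPI_DOUBLE, right, 0,
                 recvbuf, count, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, &status );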

24 Non-blocking operations Non-blocking operations return (immediately) "request handles" that can be waited on and queried:
   MPI_Isend(start, count, datatype, dest, tag, comm, request)
   MPI_Irecv(start, count, datatype, source, tag, comm, request)
   MPI_Wait(request, status)
One can also test without waiting:
   MPI_Test(request, flag, status)
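(For illustration, a typical pairing of these calls; the buffers, count, and peer rank are assumptions, not from the slide.)
   MPI_Request send_req, recv_req;
   MPI_Status  status;

   /* Post the receive first, then the send, then overlap with computation. */
   MPI_Irecv( recvbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &recv_req );
   MPI_Isend( sendbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &send_req );
   /* ... work that does not touch sendbuf or recvbuf ... */
   MPI_Wait( &send_req, &status );
   MPI_Wait( &recv_req, &status );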

25 Multiple completions It is often desirable to wait on multiple requests. An example is a master/slave program, where the master waits for one or more slaves to send it a message.
   MPI_Waitall(count, array_of_requests, array_of_statuses)
   MPI_Waitany(count, array_of_requests, index, status)
   MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses)
There are corresponding versions of test for each of these.
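(A sketch of the master side of such a program using MPI_Waitany; the number of slaves and the message layout are assumptions made for illustration.)
   enum { NSLAVES = 4 };              /* assumed number of slaves */
   MPI_Request requests[NSLAVES];
   MPI_Status  status;
   double      results[NSLAVES];
   int         i, index;

   /* Post one receive per slave, then service whichever completes first. */
   for (i = 0; i < NSLAVES; i++)
       MPI_Irecv( &results[i], 1, MPI_DOUBLE, i + 1, 0, MPI_COMM_WORLD, &requests[i] );

   MPI_Waitany( NSLAVES, requests, &index, &status );
   /* requests[index] has completed; results[index] is now valid. */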

26 Fairness

27 A parallel algorithm is fair if no process is effectively ignored. In the preceding program, processes with low rank (like process zero) may be the only ones whose messages are received. MPI makes no guarantees about fairness, but it does make it possible to write efficient, fair programs.
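(The program referred to above is not included in this transcript. The fairness issue typically arises in a receive loop like the following sketch, where MPI_ANY_SOURCE allows the implementation to keep matching the same eager sender.)
   int msg;
   MPI_Status status;

   /* Sketch only: nothing forces the implementation to rotate among senders. */
   for (;;) {
       MPI_Recv( &msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status );
       /* handle the message from status.MPI_SOURCE ... */
   }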

28 Communication Modes MPI provides multiple modes for sending messages:
Synchronous mode (MPI_Ssend): the send does not complete until a matching receive has begun. (Unsafe programs become incorrect and usually deadlock within an MPI_Ssend.)
Buffered mode (MPI_Bsend): the user supplies a buffer to the system for its use. (The user supplies enough memory to make an unsafe program safe.)
Ready mode (MPI_Rsend): the user guarantees that a matching receive has been posted.
–allows access to fast protocols
–undefined behavior if the matching receive is not posted
Non-blocking versions: MPI_Issend, MPI_Irsend, MPI_Ibsend
Note that an MPI_Recv may receive messages sent with any send mode.

29 Buffered Send MPI provides a send routine that may be used when MPI_Isend is awkward to use (e.g., lots of small messages). MPI_Bsend makes use of a user-provided buffer to save any messages that cannot be immediately sent.
   int bufsize;                       /* must be large enough, including MPI_BSEND_OVERHEAD per message */
   char *buf = malloc(bufsize);
   MPI_Buffer_attach( buf, bufsize );
   ...
   MPI_Bsend( ... same as MPI_Send ... );
   ...
   MPI_Buffer_detach( &buf, &bufsize );
The MPI_Buffer_detach call does not complete until all messages are sent.

30 Performance Issues

31

32 MPICH2 MPICH2 is an all-new implementation of the MPI Standard, designed to implement all of the MPI-2 additions to MPI (dynamic process management, one-sided operations, parallel I/O, and other extensions) and to apply the lessons learned in implementing MPICH1 to make MPICH2 more robust, efficient, and convenient to use.

33 MPICH2: MPI compilation basic info
1. mpiexec -n 32 a.out (runs a.out on 32 processes)
2. mpiexec -n 1 -host loginnode master : -n 32 -host smp slave (MPMD launch: one master process on host loginnode and 32 slave processes on host smp)
3. mpdtrace (lists the hosts in the running mpd ring)

34 Other topics: MPI Objects MPI has a variety of objects (communicators, groups, datatypes, etc.) that can be created and destroyed

35 MPI Objects
MPI_Request
–Handle for nonblocking communication, normally freed by MPI in a test or wait
MPI_Datatype
–MPI datatype. Free with MPI_Type_free.
MPI_Op
–User-defined operation. Free with MPI_Op_free.
MPI_Comm
–Communicator. Free with MPI_Comm_free.
MPI_Group
–Group of processes. Free with MPI_Group_free.
MPI_Errhandler
–MPI error handler. Free with MPI_Errhandler_free.

36 Freeing objects
   /* Builds a datatype selecting ly x lz elements (a y-z plane at fixed x) of
      a 3-D array stored with x varying fastest.  The intermediate type newx1
      can be freed as soon as it has been used to build newx: freeing it does
      not invalidate datatypes derived from it. */
   MPI_Type_vector( ly, 1, nx, MPI_DOUBLE, &newx1 );
   MPI_Type_hvector( lz, 1, nx*ny*sizeof(double), newx1, &newx );
   MPI_Type_free( &newx1 );
   MPI_Type_commit( &newx );

37 Other topics: tools for evaluating programs MPI provides some tools for evaluating the performance of parallel programs. These are: –Timer –Profiling interface

38 MPI Timer The elapsed (wall-clock) time between two points in an MPI program can be computed using MPI_Wtime:
   double t1, t2;
   t1 = MPI_Wtime();
   ...
   t2 = MPI_Wtime();
   printf( "Elapsed time is %f\n", t2 - t1 );
The value returned by a single call to MPI_Wtime has little meaning on its own; only the difference between two calls is useful.

39 MPI Profiling Mechanisms All routines have two entry points: MPI_... and PMPI_.... This makes it easy to provide a single level of low-overhead routines that intercept MPI calls without any source code modifications, and it is used to provide "automatic" generation of trace files.
   static int nsend = 0;
   int MPI_Send( void *start, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm )
   {
       nsend++;
       return PMPI_Send( start, count, datatype, dest, tag, comm );
   }

40 Profiling routines

41 Log Files

42 Creating Log Files This is very easy with the MPICH implementation of MPI. Simply replace -lmpi with -llmpi -lpmpi -lm in the link line for your program and relink; you do not need to recompile. On some systems, you can get a real-time animation by using the libraries -lampi -lmpe -lm -lX11 -lpmpi. Alternatively, you can use the -mpilog or -mpianim options to the mpicc or mpif77 commands.

43 Other topics: connecting several programs together MPI provides support for connecting separate message-passing programs together through the use of intercommunicators.

44 Exchanging data between programs
Form intercommunicator (MPI_INTERCOMM_CREATE)
Send data:
   MPI_Send( ..., 0, intercomm );
   MPI_Recv( buf, ..., 0, intercomm );
   MPI_Bcast( buf, ..., localcomm );
More complex point-to-point operations can also be used.

45 Collective operations Use MPI_INTERCOMM_MERGE to create an intracommunicator from an intercommunicator; ordinary collective operations can then be used on the result.
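(A minimal sketch, not from the slide: merging the intercommunicator and then broadcasting over the merged group; buf, count, and the root rank are assumptions.)
   MPI_Comm intracomm;

   /* The second argument ("high") only controls how the two groups are
      ordered in the merged communicator. */
   MPI_Intercomm_merge( intercomm, 0, &intracomm );
   MPI_Bcast( buf, count, MPI_DOUBLE, 0, intracomm );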

46 Conclusion So we learned:
P2P, Collective, and asynchronous communications
Modular programming techniques
Data types
MPICH2 basic compilation info
Important and handy tools

47 References
http://www-unix.mcs.anl.gov/dbpp/text/node1.html
http://www-unix.mcs.anl.gov/mpi/tutorial/gropp/talk.html#Node0

