Parallel Programming with MPI By Santosh K Jena.
Contents –Introduction –Architecture –Design of Parallel Algorithms –Communicators –P2P Communication –Global Communication –Datatypes –Debugging Strategies –Conclusion
Introduction To Parallel Programming
Shared Memory Architecture
Designing Parallel Algorithms Stages for designing parallel algorithms –Partitioning –Communication –Agglomeration –Mapping
Why do we use MPI? The Message Passing Interface (MPI) is an open standard, freely available in the public domain. It is designed for portability, flexibility, consistency, and performance. No extra hardware is required, and it is supported on all commercial computers.
Basic Terminologies: Message Passing, Communicator, Rank, Process / Task, Message Passing Library, Send / Receive, Synchronous / Asynchronous, Application Buffer, System Buffer.
Communicators A collection of processes that can send messages to each other. Allow one to divide up processes so that they may perform (relatively) independent work. Default Communicator - MPI_COMM_WORLD A communicator specifies a communication domain made up of a group of processes.
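As a hedged illustration of dividing processes into groups that do (relatively) independent work, the sketch below splits MPI_COMM_WORLD into two sub-communicators with MPI_Comm_split; the even/odd colouring rule and the variable names are assumptions made for this example, not something prescribed by the slide.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv) {
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes with the same color end up in the same new communicator;
       here even and odd ranks form two independent groups (illustrative choice). */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    MPI_Comm_rank(sub_comm, &sub_rank);
    printf("World rank %d has rank %d in sub-communicator %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}

Each process then has a rank both in MPI_COMM_WORLD and in its sub-communicator, and collective calls on sub_comm involve only that group.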
General MPI Program Structure
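The structure diagram itself is not reproduced in this transcript; the following minimal C sketch shows the shape such a program usually takes (header, MPI_Init, rank and size queries, the parallel work, MPI_Finalize). The printf body is just a placeholder.

#include <stdio.h>
#include "mpi.h"               /* MPI header */

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* who am I?                      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes?            */

    /* ... parallel work and message passing go here ... */
    printf("Process %d of %d\n", rank, size);

    MPI_Finalize();                        /* terminate the MPI environment  */
    return 0;
}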
Example
Grouping Data for Communication
Derived Types

typedef struct {
    float a;
    float b;
    int   n;
} INDATA_TYPE;

INDATA_TYPE indata;
MPI_Bcast(&indata, 1, INDATA_TYPE, 0, MPI_COMM_WORLD);
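As written, the call passes the C typedef INDATA_TYPE where an MPI_Datatype handle is expected, so it will not compile as-is. A minimal sketch of building a matching derived datatype with MPI_Type_create_struct is given below; the helper name build_indata_type and the handle name indata_mpi_t are introduced only for this example.

#include <stddef.h>   /* offsetof */
#include "mpi.h"

typedef struct {
    float a;
    float b;
    int   n;
} INDATA_TYPE;

/* Build an MPI datatype matching the memory layout of INDATA_TYPE. */
static MPI_Datatype build_indata_type(void)
{
    MPI_Datatype indata_mpi_t;
    int          blocklens[3] = {1, 1, 1};
    MPI_Aint     displs[3]    = { offsetof(INDATA_TYPE, a),
                                  offsetof(INDATA_TYPE, b),
                                  offsetof(INDATA_TYPE, n) };
    MPI_Datatype types[3]     = {MPI_FLOAT, MPI_FLOAT, MPI_INT};

    MPI_Type_create_struct(3, blocklens, displs, types, &indata_mpi_t);
    MPI_Type_commit(&indata_mpi_t);
    return indata_mpi_t;
}

/* Usage (assuming MPI_Init has already been called):
       INDATA_TYPE indata;
       MPI_Datatype indata_mpi_t = build_indata_type();
       MPI_Bcast(&indata, 1, indata_mpi_t, 0, MPI_COMM_WORLD);
       MPI_Type_free(&indata_mpi_t);
*/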
P2P Communication

int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
P2P Communication Send and receive functions allow the communication of typed data with an associated tag. Typing is required for portability; tagging enables a form of message selection or identification. A receive will only return after the message data is stored in the receive buffer.
P2P Communication

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char **argv) {
    int myrank;        // rank of process
    int p;             // number of processes
    int source;        // rank of sender
    int dest;          // rank of receiver
    int tag = 50;      // tag for messages
    char message[100];
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (myrank != 0) {
        sprintf(message, "Hello from %d!", myrank);
        dest = 0;
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    } else {
        for (source = 1; source < p; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    }
    MPI_Finalize();
    return 0;
}
Hello World - Output

mpicc -o helloworld helloworld.c
mpirun -np 4 ./helloworld
Hello from 1!
Hello from 2!
Hello from 3!
Deadlock

Will always deadlock:

if (rank == 0) {
    MPI_Recv(...);
    MPI_Send(...);
} else if (rank == 1) {
    MPI_Recv(...);
    MPI_Send(...);
}
Always succeeds:

if (rank == 0) {
    MPI_Send(...);
    MPI_Recv(...);
} else if (rank == 1) {
    MPI_Recv(...);
    MPI_Send(...);
}
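A third option is MPI_Sendrecv, which pairs the send and receive in one call so neither of the orderings above has to be hand-crafted. The sketch below assumes exactly two processes exchanging a single int; the payload values are arbitrary.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv) {
    int rank, sendval, recvval, partner;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    partner = 1 - rank;        /* assumes exactly two processes: ranks 0 and 1 */
    sendval = rank * 100;      /* arbitrary payload for the example            */

    /* Combined send+receive avoids the Recv-before-Send deadlock. */
    MPI_Sendrecv(&sendval, 1, MPI_INT, partner, 0,
                 &recvval, 1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, &status);

    printf("Rank %d received %d from rank %d\n", rank, recvval, partner);
    MPI_Finalize();
    return 0;
}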
MPI: Global Communications No tag is provided; messages are matched by the order of execution within the group. Intercommunicators are not allowed. You cannot match these calls with P2P receives.
[Figure: data movement patterns across processes for the collective operations broadcast, scatter, gather, allgather, and alltoall]
Collective Communication: Broadcast and Barrier

int MPI_Bcast(void* buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
int MPI_Barrier(MPI_Comm comm)

MPI_Bcast broadcasts a message from the process with rank root in comm to all other processes in comm. [Figure: the value A on P0 is broadcast to P0 through P3]
Example

MPI_Bcast( …….. )
for (row = myrank; row < Total_Size; row += No_Of_Proc) {
    /* Do Some Operation */
}
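The arguments of the MPI_Bcast call are elided on the slide, so the sketch below fills the pattern in under stated assumptions: rank 0 broadcasts a problem size (here an arbitrary 16) and every process then works on its cyclic share of the rows. Total_Size and No_Of_Proc follow the slide's names; everything else is illustrative.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv) {
    int myrank, No_Of_Proc, Total_Size = 0, row;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &No_Of_Proc);

    if (myrank == 0)
        Total_Size = 16;   /* example problem size, chosen arbitrarily */

    /* Root broadcasts the problem size so every process runs the same loop bounds. */
    MPI_Bcast(&Total_Size, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Cyclic distribution: process r handles rows r, r+p, r+2p, ... */
    for (row = myrank; row < Total_Size; row += No_Of_Proc)
        printf("Process %d works on row %d\n", myrank, row);

    MPI_Finalize();
    return 0;
}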
MPI: Gather

int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

Each process in comm (including root itself) sends its sendbuf to root. The root process receives the messages in recvbuf in rank order.
MPI: Scatter

int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

The inverse of MPI_Gather.
MPI: Allgather

int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

Similar to MPI_Gather, except that all processes receive the result, so recvbuf is NOT ignored on any process. The j-th block of data sent from each process is received by every process and placed in the j-th block of its recvbuf.
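To tie MPI_Scatter, MPI_Gather, and MPI_Allgather together, here is a hedged sketch that scatters one integer to each process, does a trivial local update, and gathers the results back to the root in rank order; buffer sizes assume exactly one element per process.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char **argv) {
    int rank, size, mine, *sendbuf = NULL, *recvbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* root prepares one value per process */
        sendbuf = malloc(size * sizeof(int));
        recvbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) sendbuf[i] = i * 10;
    }

    /* Each process receives one element of sendbuf ... */
    MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    mine += 1;                             /* ... does some local work ...        */
    /* ... and the root collects the results in rank order. */
    MPI_Gather(&mine, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++) printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
        free(sendbuf); free(recvbuf);
    }
    MPI_Finalize();
    return 0;
}

Replacing the final MPI_Gather with MPI_Allgather (and allocating recvbuf on every process) would leave the full result on all processes rather than only on the root.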
Bugs in MPI Programs Don't put a call to a collective communication (e.g., MPI_Reduce or MPI_Bcast) inside conditionally executed code. Two or more processes try to exchange data, but each calls MPI_Recv before calling MPI_Send.
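To make the first pitfall concrete, the sketch below broadcasts a hypothetical parameter n; the wrong pattern (the collective inside rank-conditional code) is shown only as a comment because it would hang.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv) {
    int rank, n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 42;   /* value to distribute; chosen arbitrarily */

    /* WRONG (would hang):  if (rank == 0) MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
       Only the root would enter the collective and all other processes would wait forever. */

    /* RIGHT: every process makes the same call; the root argument names the source. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d sees n = %d\n", rank, n);
    MPI_Finalize();
    return 0;
}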
Bugs in MPI Programs A process tries to receive data from a process that will never send it. A process tries to receive data from itself. Type mismatch between send and receive, e.g., sending with datatype MPI_INT and receiving with MPI_FLOAT.
Practical Debugging Strategies The program should first run correctly on a single processor. Put printf statements on both the sending side and the receiving side to make sure the values match. Put fflush(stdout) after every printf statement.
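A minimal sketch of the suggested tracing pattern, assuming the values being checked are plain ints; the helper name debug_value is made up for this example.

#include <stdio.h>
#include "mpi.h"

/* Print a labelled value tagged with the producing rank and flush immediately,
   so the output is not lost in a buffer if the program later deadlocks or aborts. */
static void debug_value(int rank, const char *label, int value)
{
    printf("[rank %d] %s = %d\n", rank, label, value);
    fflush(stdout);
}

int main(int argc, char **argv)
{
    int myrank, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    x = myrank * 7;   /* stand-in for a value being debugged */
    debug_value(myrank, "x before communication", x);
    /* ... MPI_Send / MPI_Recv calls would go here, with a matching
       debug_value() on the receiving side ... */

    MPI_Finalize();
    return 0;
}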
Conclusion Much can be accomplished with a small subset of MPI’s functionality. But there is also a lot of depth to MPI’s design to allow more complicated functionality.
References
–www.mpi-forum.org
–www.mcs.anl.gov/mpi
–www.google.com
–http://www.abo.fi/~mats/HPC1999/examples/
–William Gropp et al., Using MPI: Portable Parallel Programming with the Message-Passing Interface, MIT Press.
–Peter S. Pacheco, Parallel Programming with MPI, Morgan Kaufmann.
–Ian Foster, Designing and Building Parallel Programs, Addison-Wesley.
–Michael J. Quinn, Parallel Programming in C with MPI and OpenMP.
Any Questions?