1  Message Passing Programming Based on MPI
Collective Communication I
Bora AKAYDIN
14.06.2012
2  Outline
Scope of communications
Collective Communication
Collective Communication Routines:
MPI_Barrier
MPI_Bcast
MPI_Reduce
MPI_Allreduce
MPI_Gather (MPI_Gatherv)
MPI_Allgather
MPI_Scatter (MPI_Scatterv)
3  Scope of communications
Point-to-point: involves two tasks, one acting as the sender/producer of the data and the other as the receiver/consumer.
4  Scope of communications
Collective: involves data sharing among more than two tasks, which are usually specified as members of a common group, or collective.
5  Collective Communication
Collective communication must involve all processes in the scope of a communicator. All processes are, by default, members of the communicator MPI_COMM_WORLD.
Types of collective operations:
Synchronization: processes wait until all members of the group have reached the synchronization point.
Data movement: broadcast, scatter/gather, all-to-all.
6  Collective Communication
Collective computation (reductions): one member of the group collects data from the other members and performs an operation (min, max, add, multiply, etc.) on that data.
Programming considerations and restrictions:
Collective operations are blocking.
Collective communication routines do not take message tag arguments.
7  MPI_Barrier
Creates a barrier synchronization in a group. Each task, on reaching the MPI_Barrier call, blocks until all tasks in the group have reached the same MPI_Barrier call.

int MPI_Barrier(MPI_Comm comm)
MPI_BARRIER(comm, ierr)
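A minimal C sketch of this pattern (illustrative, not the slide's original code): every rank prints a message, then waits at the barrier before continuing.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("Rank %d reached the barrier\n", rank);

    /* No rank proceeds past this point until all ranks have called MPI_Barrier. */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("Rank %d passed the barrier\n", rank);

    MPI_Finalize();
    return 0;
}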
8  Collective Communication Routines
MPI_Bcast: copies data from the memory of the root process to the same memory locations on every other process in the communicator.
9  Collective Communication Routines
MPI_Bcast:

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
MPI_BCAST(buffer, count, datatype, root, comm, ierr)

Input/output parameters:
buffer   - starting address of buffer (choice)
count    - number of entries in buffer (integer)
datatype - data type of buffer (handle)
root     - rank of broadcast root (integer)
comm     - communicator (handle)
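An illustrative C sketch (not the slide's original code): rank 0 fills an integer array and broadcasts it to all ranks in MPI_COMM_WORLD.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, data[4] = {0, 0, 0, 0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                      /* only the root fills the buffer */
        for (int i = 0; i < 4; i++)
            data[i] = i + 1;
    }

    /* After the call, every rank's data[] holds {1, 2, 3, 4}. */
    MPI_Bcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d: data[3] = %d\n", rank, data[3]);

    MPI_Finalize();
    return 0;
}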
10  Collective Communication Routines
MPI_Reduce:
Collects data from each process,
reduces these data to a single value (e.g. sum or max), and
stores the result on the root process.
11  Collective Communication Routines
MPI_Reduce:

int MPI_Reduce(void *send_buffer, void *recv_buffer, int count, MPI_Datatype datatype, MPI_Op operation, int root, MPI_Comm comm)
MPI_REDUCE(send_buffer, recv_buffer, count, datatype, operation, root, comm, ierr)

The send buffer is defined by the arguments send_buffer, count, and datatype.
The receive buffer is defined by the arguments recv_buffer, count, and datatype.
Both buffers have the same number of elements of the same type.
12  Collective Communication Routines
MPI_Reduce:

int MPI_Reduce(void *send_buffer, void *recv_buffer, int count, MPI_Datatype datatype, MPI_Op operation, int root, MPI_Comm comm)

send_buffer  in   address of send buffer
recv_buffer  out  address of receive buffer
count        in   number of elements in send buffer
datatype     in   data type of elements in send buffer
operation    in   reduction operation
root         in   rank of root process
comm         in   MPI communicator
13  MPI Reduction Operation
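For reference, the predefined MPI reduction operations include MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND, MPI_BAND, MPI_LOR, MPI_BOR, MPI_LXOR, MPI_BXOR, MPI_MAXLOC, and MPI_MINLOC; user-defined operations can be added with MPI_Op_create.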
14  MPI_Reduce
Sum on an array using MPI_Reduce
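The code from this slide is not reproduced here; a minimal C sketch of the same idea, where each rank sums part of an index range and MPI_Reduce combines the partial sums on rank 0:

#include <mpi.h>
#include <stdio.h>

#define N 100

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sums its own share of the index range [0, N). */
    long local_sum = 0, global_sum = 0;
    for (int i = rank; i < N; i += size)
        local_sum += i;

    /* Partial sums are combined with MPI_SUM; only rank 0 receives the result. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Total sum = %ld\n", global_sum);

    MPI_Finalize();
    return 0;
}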
15  Collective Communication Routines
MPI_Allreduce:

int MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm, ierr)

After the data are reduced onto the root process, you could then MPI_Bcast the reduced data to all of the other processes. It is more convenient and efficient to reduce and broadcast with a single MPI_Allreduce operation. In MPI_Allreduce there is no root process: every process receives the reduced result.
16  MPI_Allreduce
Sum on an array using MPI_Allreduce
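Again the slide's code is not reproduced; a minimal C sketch of the same sum, now with MPI_Allreduce so that every rank obtains the total:

#include <mpi.h>
#include <stdio.h>

#define N 100

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long local_sum = 0, global_sum = 0;
    for (int i = rank; i < N; i += size)
        local_sum += i;

    /* Unlike MPI_Reduce, every rank receives the combined result (no root argument). */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    printf("Rank %d sees total sum = %ld\n", rank, global_sum);

    MPI_Finalize();
    return 0;
}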
17  Collective Communication Routines
MPI_Gather:
An all-to-one communication routine. The receive arguments are meaningful only to the root process. When MPI_Gather is called, each process (including the root process) sends the contents of its send buffer to the root process. The root process receives the messages and stores them in rank order.

int MPI_Gather(void *send_buffer, int send_count, MPI_Datatype send_type, void *recv_buffer, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm comm)
18  Collective Communication Routines
MPI_Gather:
MPI_GATHER(send_buffer, send_count, send_type, recv_buffer, recv_count, recv_type, root, comm, ierr)
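An illustrative C sketch (not from the original slides): each rank contributes one integer, and rank 0 collects them in rank order.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int my_value = rank * rank;          /* one value per rank */
    int *all_values = NULL;

    if (rank == 0)                       /* receive buffer is needed only at the root */
        all_values = malloc(size * sizeof(int));

    /* Root receives one int from every rank, stored in rank order. */
    MPI_Gather(&my_value, 1, MPI_INT, all_values, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("value from rank %d: %d\n", i, all_values[i]);
        free(all_values);
    }

    MPI_Finalize();
    return 0;
}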
19  Collective Communication Routines
MPI_Gatherv:

int MPI_Gatherv(void *send_buffer, int send_count, MPI_Datatype send_type, void *recv_buffer, int *recv_counts, int *displs, MPI_Datatype recv_type, int root, MPI_Comm comm)
MPI_GATHERV(send_buffer, send_count, send_type, recv_buffer, recv_counts, displs, recv_type, root, comm, ierr)
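A minimal C sketch of MPI_Gatherv (illustrative; counts and values are invented for the example): rank r contributes r+1 elements, so the root must supply per-rank counts and displacements.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send_count = rank + 1;            /* message sizes differ per rank */
    int send_buffer[64];                  /* sketch assumes at most 64 elements per rank */
    for (int i = 0; i < send_count; i++)
        send_buffer[i] = rank;

    int *recv_counts = NULL, *displs = NULL, *recv_buffer = NULL;
    if (rank == 0) {                      /* these arguments matter only at the root */
        recv_counts = malloc(size * sizeof(int));
        displs      = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; i++) {
            recv_counts[i] = i + 1;       /* how many elements rank i sends        */
            displs[i]      = total;       /* where rank i's data starts in recvbuf */
            total         += recv_counts[i];
        }
        recv_buffer = malloc(total * sizeof(int));
    }

    MPI_Gatherv(send_buffer, send_count, MPI_INT,
                recv_buffer, recv_counts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Root gathered variable-length chunks from %d ranks\n", size);
        free(recv_counts); free(displs); free(recv_buffer);
    }

    MPI_Finalize();
    return 0;
}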
20  Collective Communication Routines
MPI_Gatherv:
21  Collective Communication Routines
MPI_Allgather:
After the data are gathered onto the root process, you could then MPI_Bcast the gathered data to all of the other processes. It is more convenient and efficient to gather and broadcast with a single MPI_Allgather operation.

int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
22  Collective Communication Routines
MPI_Allgather:
MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierr)
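An illustrative C sketch (not from the original slides): each rank contributes one value and every rank ends up with the full gathered array.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int my_value = rank + 10;
    int *all_values = malloc(size * sizeof(int));   /* every rank needs the full buffer */

    /* Same outcome as MPI_Gather followed by MPI_Bcast, but in a single call. */
    MPI_Allgather(&my_value, 1, MPI_INT, all_values, 1, MPI_INT, MPI_COMM_WORLD);

    printf("Rank %d: last gathered value = %d\n", rank, all_values[size - 1]);

    free(all_values);
    MPI_Finalize();
    return 0;
}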
23  Collective Communication Routines
MPI_Scatter:
A one-to-all communication. Different data are sent from the root process to each process (in rank order). When MPI_Scatter is called, the root process breaks a set of contiguous memory locations into equal chunks and sends one chunk to each process. The outcome is the same as if the root executed N MPI_Send operations and each process executed an MPI_Recv. On the next slide, send_count is the number of elements sent to each process, not the total number sent.
24  Collective Communication Routines
MPI_Scatter:

int MPI_Scatter(void *send_buffer, int send_count, MPI_Datatype send_type, void *recv_buffer, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm comm)
25  Collective Communication Routines
MPI_Scatter:
MPI_SCATTER(send_buffer, send_count, send_type, recv_buffer, recv_count, recv_type, root, comm, ierr)
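An illustrative C sketch (not from the original slides): the root splits a contiguous array into equal chunks of two elements and sends one chunk to each rank.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *send_buffer = NULL;
    int my_chunk[2];                       /* each rank receives 2 elements */

    if (rank == 0) {                       /* only the root fills the full array */
        send_buffer = malloc(2 * size * sizeof(int));
        for (int i = 0; i < 2 * size; i++)
            send_buffer[i] = i;
    }

    /* send_count (2) is the number of elements sent to EACH rank, not the total. */
    MPI_Scatter(send_buffer, 2, MPI_INT, my_chunk, 2, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d received %d and %d\n", rank, my_chunk[0], my_chunk[1]);

    if (rank == 0) free(send_buffer);
    MPI_Finalize();
    return 0;
}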
26  Collective Communication Routines
MPI_Scatterv:
Distributes individual messages from the root to each process in the communicator. Messages can have different sizes and displacements.
27  Collective Communication Routines
MPI_Scatterv:

int MPI_Scatterv(void *send_buffer, int *send_counts, int *displs, MPI_Datatype send_type, void *recv_buffer, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm comm)
MPI_SCATTERV(send_buffer, send_counts, displs, send_type, recv_buffer, recv_count, recv_type, root, comm, ierr)
28  Collective Communication Routines
MPI_Scatterv:

sendbuf     address of send buffer (choice, significant only at root)
sendcounts  integer array (of length group size) specifying the number of elements to send to each process
displs      integer array (of length group size); entry i specifies the displacement (relative to sendbuf) from which to take the outgoing data to process i
sendtype    data type of send buffer elements (handle)
recvcount   number of elements in receive buffer (integer)
recvtype    data type of receive buffer elements (handle)
root        rank of sending (root) process (integer)
comm        communicator (handle)
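A minimal C sketch of MPI_Scatterv (illustrative; counts and values are invented for the example): rank r receives r+1 elements, with the root supplying per-rank counts and displacements.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int recv_count = rank + 1;              /* rank r receives r+1 elements */
    int recv_buffer[64];                    /* sketch assumes at most 64 elements per rank */

    int *send_buffer = NULL, *send_counts = NULL, *displs = NULL;
    if (rank == 0) {                        /* these arguments matter only at the root */
        send_counts = malloc(size * sizeof(int));
        displs      = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; i++) {
            send_counts[i] = i + 1;         /* different chunk size for each rank        */
            displs[i]      = total;         /* offset of rank i's chunk in send_buffer   */
            total         += send_counts[i];
        }
        send_buffer = malloc(total * sizeof(int));
        for (int i = 0; i < total; i++)
            send_buffer[i] = i;
    }

    MPI_Scatterv(send_buffer, send_counts, displs, MPI_INT,
                 recv_buffer, recv_count, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d received %d element(s), first = %d\n",
           rank, recv_count, recv_buffer[0]);

    if (rank == 0) { free(send_buffer); free(send_counts); free(displs); }
    MPI_Finalize();
    return 0;
}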
29  Collective Communication Routines
MPI_Scatterv:
30  Programming Activities
Writing parallel MPI codes using the following routines:
Bcast
Gather (gather a value / an array, allgather)
Scatter
Reduce (allreduce)