Collective Communication with MPI
Hector Urtubia
Introduction
Collective communication is a communication pattern that involves all processes in a communicator. Because the pattern is known to the library, important optimizations can be applied. The MPI API defines functions for the most common collective operations.
Frequently Used Collective Operations
Broadcast: one process transmits the same data to all others.
Reduce: many processes transmit data that is combined on one process.
Scatter: distributes data from one process among all processes.
Gather: collects data stored on many processes onto one process.
Broadcast
A communication pattern in which a single (root) process transmits the same data to all other processes.

int MPI_Bcast(void* message, int count, MPI_Datatype datatype, int root, MPI_Comm comm);
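A minimal sketch of how MPI_Bcast might be called (the variable names and the value 42 are illustrative, not from the slides):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank;
    int value = 0;   /* every process must provide the buffer */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;  /* only the root has the data before the call */

    /* After MPI_Bcast returns, every process's copy of value is 42. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("process %d: value = %d\n", rank, value);
    MPI_Finalize();
    return 0;
}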
Reduce
A collective communication in which all processes contribute data that is combined using a binary operation; the result is stored on the root process.

int MPI_Reduce(void* operand, void* result, int count, MPI_Datatype datatype, MPI_Op operator, int root, MPI_Comm comm);
Reduce (cont.)
Reduction operations:
MPI_MAX      Maximum
MPI_MIN      Minimum
MPI_SUM      Sum
MPI_PROD     Product
MPI_LAND     Logical and
MPI_BAND     Bitwise and
MPI_LOR      Logical or
MPI_BOR      Bitwise or
MPI_LXOR     Logical exclusive or
MPI_BXOR     Bitwise exclusive or
MPI_MAXLOC   Maximum and location of maximum
MPI_MINLOC   Minimum and location of minimum
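For MPI_MAXLOC and MPI_MINLOC, each process contributes a value/index pair, and MPI predefines pair types such as MPI_DOUBLE_INT for this purpose. A minimal sketch, where the local value is made up for illustration:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank;
    struct { double val; int rank; } in, out;  /* layout matches MPI_DOUBLE_INT */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    in.val  = (double)((rank * 7) % 5);  /* stand-in for each process's local value */
    in.rank = rank;

    /* out.val becomes the global maximum; out.rank identifies the process that held it. */
    MPI_Reduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("max %.1f found on process %d\n", out.val, out.rank);

    MPI_Finalize();
    return 0;
}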
Examples for Reduce and Broadcast
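A minimal sketch combining the two calls: the root chooses a problem size, broadcasts it, every process computes a partial sum, and MPI_Reduce combines the partials on the root (the variable names and the size 100 are assumptions):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, size, n = 0;
    long local_sum = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        n = 100;                   /* problem size chosen by the root */

    /* Broadcast: every process learns n. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each process sums its share of 1..n (cyclic distribution). */
    for (int i = rank + 1; i <= n; i += size)
        local_sum += i;

    /* Reduce: combine the partial sums on the root with MPI_SUM. */
    MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..%d = %ld\n", n, total);

    MPI_Finalize();
    return 0;
}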
Gather
Collects data from each process in a communicator onto the root process.

int MPI_Gather(void* send_data,        /* data to be sent */
               int send_count,
               MPI_Datatype send_type,
               void* recv_data,
               int recv_count,
               MPI_Datatype recv_type,
               int root,               /* root process */
               MPI_Comm comm);         /* communicator */
Scatter
Splits the data on one process and distributes the pieces among all processes in the communicator.

int MPI_Scatter(void* send_data, int send_count, MPI_Datatype send_type, void* recv_data, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm comm);
Example of Gather and Scatter
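A minimal sketch combining the two: the root scatters equal slices of an array, each process transforms its slice locally, and the root gathers the results back in rank order (CHUNK and the doubling step are illustrative assumptions):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 4   /* elements per process */

int main(int argc, char* argv[]) {
    int rank, size;
    int* send_data = NULL;
    int recv_data[CHUNK];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Only the root allocates and fills the full array. */
        send_data = malloc(size * CHUNK * sizeof(int));
        for (int i = 0; i < size * CHUNK; i++)
            send_data[i] = i;
    }

    /* Scatter: each process receives its own CHUNK-element slice. */
    MPI_Scatter(send_data, CHUNK, MPI_INT,
                recv_data, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each process works on its slice locally. */
    for (int i = 0; i < CHUNK; i++)
        recv_data[i] *= 2;

    /* Gather: the root collects the transformed slices back in rank order. */
    MPI_Gather(recv_data, CHUNK, MPI_INT,
               send_data, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size * CHUNK; i++)
            printf("%d ", send_data[i]);
        printf("\n");
        free(send_data);
    }

    MPI_Finalize();
    return 0;
}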