PP Lab MPI programming VI
Program 1
Break a long vector into subvectors of equal length and distribute one subvector to each process. Each process computes the partial sum of its subvector. The root collects the partial sums and adds them to deliver the final sum to the user.
Functions to be used
MPI_Reduce: reduces values on all processes to a single value.

Synopsis:
#include "mpi.h"
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)

Input parameters:
– sendbuf: address of send buffer
– count: number of elements in send buffer
– datatype: data type of elements of send buffer
– op: reduce operation
– root: rank of root process
– comm: communicator

Output parameter:
– recvbuf: address of receive buffer (significant only at root)
Reduction Operations

Operation handle   Operation
MPI_MAX            Maximum
MPI_MIN            Minimum
MPI_PROD           Product
MPI_SUM            Sum
MPI_LAND           Logical AND
MPI_LOR            Logical OR
MPI_LXOR           Logical exclusive OR
MPI_BAND           Bitwise AND
MPI_BOR            Bitwise OR
MPI_BXOR           Bitwise exclusive OR
MPI_MAXLOC         Maximum value and location
MPI_MINLOC         Minimum value and location
Functions to be used
MPI_Scatter: distributes distinct chunks of data from one process (the root) to every process in a group, including the root itself.

Synopsis:
#include "mpi.h"
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

Input parameters:
– sendbuf: address of send buffer
– sendcount: number of elements sent to each process
– sendtype: data type of send buffer elements
(the above three arguments are significant only at root)
– recvcount: number of elements in receive buffer
– recvtype: data type of receive buffer elements
– root: rank of sending process
– comm: communicator

Output parameter:
– recvbuf: address of receive buffer
Code
#include <stdio.h>
#include "mpi.h"

/* Run with 3 processes: each receives a 3-element subvector in b. */
int main(int argc, char *argv[]) {
    int i, rank, b[3], psum, tsum;
    int a[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Root scatters a in chunks of 3 ints, one chunk per process. */
    MPI_Scatter(a, 3, MPI_INT, b, 3, MPI_INT, 0, MPI_COMM_WORLD);

    psum = 0;                       /* partial sum of this process's chunk */
    for (i = 0; i < 3; i++)
        psum += b[i];

    /* Root adds the partial sums into tsum. */
    MPI_Reduce(&psum, &tsum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of the vector is %d\n", tsum);
    MPI_Finalize();
    return 0;
}
Program 2
Execute the prefix sum problem on 8 processes.

s0 = x0
s1 = x0 + x1 = s0 + x1
s2 = x0 + x1 + x2 = s1 + x2
s3 = x0 + x1 + x2 + x3 = s2 + x3
…
s7 = x0 + x1 + x2 + … + x7 = s6 + x7
Function to be used
MPI_Scan: computes the scan (partial reductions) of data on a collection of processes.

Synopsis:
#include "mpi.h"
int MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

Input parameters:
– sendbuf: starting address of send buffer
– count: number of elements in input buffer
– datatype: data type of elements of input buffer
– op: operation
– comm: communicator

Output parameter:
– recvbuf: starting address of receive buffer
[Diagram: MPI_Scan with MPI_SUM on 8 processes. Each process i contributes the value i; the prefix sums delivered to processes 0–7 are 0, 1, 3, 6, 10, 15, 21, 28.]
Code
#include <stdio.h>
#include "mpi.h"

/* Run with 8 processes: each contributes its rank, and MPI_Scan returns
 * the sum of the ranks up to and including this process. */
int main(int argc, char **argv) {
    int id, a, b;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    a = id;   /* x_id = id */
    MPI_Scan(&a, &b, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("process=%d\tprefix sum=%d\n", id, b);
    MPI_Finalize();
    return 0;
}
Assignment
Write out and explain the argument lists of the following functions, and say how they differ from the two collective functions you have seen:
– MPI_Allreduce
– MPI_Reduce_scatter