
1 Derived Datatypes and Related Features Self Test with solution

2 Self Test
1. You are writing a parallel program to be run on 100 processors. Each processor is working with only one section of a skeleton outline of a 3-D model of a house. In the course of constructing the model house, each processor often has to send the three Cartesian coordinates (x,y,z) of nodes that are to be used to make the boundaries between the house sections. Each coordinate will be a real value. Why would it be advantageous for you to define a new data type called Point that contains the three coordinates?

3 Self Test
a) My program will be more readable and self-commenting.
b) Since many x,y,z values will be used in MPI communication routines, they can be sent as a single Point type entity instead of packing and unpacking three reals each time.
c) Since all three values are real, there is no purpose in making a derived data type.
d) It would be impossible to use MPI_Pack and MPI_Unpack to send the three real values.

4 Answer
a) Incorrect. Actually, this is a partially correct answer. Readability *is* an advantage, but perhaps a secondary one compared to the correct answer.
b) Correct! If you find yourself repeatedly transferring a set of data, make it your own derived datatype.
c) Incorrect. Matter of opinion. You could make a point array containing three reals, but that is an awkward way to handle the problem and does not take advantage of MPI capabilities.
d) Incorrect. In fact, MPI_Pack and MPI_Unpack are the most versatile routines for transferring any sort of heterogeneous data (a sketch of that approach follows below).
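For contrast, here is a minimal sketch, not taken from the original slides, of what the pack/unpack approach mentioned in answer (d) looks like in C; the helper names send_point and recv_point are made up for illustration:

#include <mpi.h>

/* Sender side: explicitly pack the three coordinates into a byte buffer. */
void send_point(float x, float y, float z, int dest, MPI_Comm comm)
{
    char buffer[100];
    int position = 0;
    MPI_Pack(&x, 1, MPI_FLOAT, buffer, 100, &position, comm);
    MPI_Pack(&y, 1, MPI_FLOAT, buffer, 100, &position, comm);
    MPI_Pack(&z, 1, MPI_FLOAT, buffer, 100, &position, comm);
    MPI_Send(buffer, position, MPI_PACKED, dest, 0, comm);
}

/* Receiver side: unpack the coordinates in the same order they were packed. */
void recv_point(float *x, float *y, float *z, int source, MPI_Comm comm)
{
    char buffer[100];
    int position = 0;
    MPI_Status status;
    MPI_Recv(buffer, 100, MPI_PACKED, source, 0, comm, &status);
    MPI_Unpack(buffer, 100, &position, x, 1, MPI_FLOAT, comm);
    MPI_Unpack(buffer, 100, &position, y, 1, MPI_FLOAT, comm);
    MPI_Unpack(buffer, 100, &position, z, 1, MPI_FLOAT, comm);
}

This works, but every transfer repeats the pack/unpack bookkeeping, which is exactly what a single derived Point datatype avoids.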

5 Self Test
2. What is the simplest MPI derived datatype creation function to make the Point datatype described in problem 1? (Three of the answers given can actually be used to construct the type, but one is the simplest.)
a) MPI_TYPE_CONTIGUOUS
b) MPI_TYPE_VECTOR
c) MPI_TYPE_STRUCT
d) MPI_TYPE_COMMIT

6 Answer
a) Correct! It was designed just for this situation.
b) Incorrect. Close, but do entire arrays make up your new data type?
c) Incorrect. Close, but are you combining different data types in the new type?
d) Incorrect. No. Commit just allows MPI to recognize the new type; it does not create it.

7 Self Test
3. The C syntax for the MPI_TYPE_CONTIGUOUS subroutine is

MPI_Type_contiguous (count, oldtype, newtype)

The argument names should be fairly self-explanatory, but if you want their exact definitions you can look them up at the MPI home page. For the derived datatype we have been discussing in the previous problems, what would be the values for the count, oldtype, and newtype arguments, respectively?
a) 2, MPI_REAL, Point
b) 3, REAL, Point
c) 3, MPI_INTEGER, Coord
d) 3, MPI_REAL, Point

8 Answer
a) Incorrect. Close, but are we working in 2 or 3 dimensions?
b) Incorrect. Sorry. You must use the MPI datatype MPI_REAL.
c) Incorrect. Two errors. Go back and read Question 1 carefully.
d) Correct! Good job.
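To make answer (d) concrete, here is a minimal sketch, not part of the original slides, of creating and committing the Point type; because the sketch is written in C (like the other code in this self test), the basic type MPI_FLOAT stands in for Fortran's MPI_REAL:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Datatype Point;          /* the new derived type                 */
    float node[3];               /* x, y, z coordinates of a single node */

    MPI_Init(&argc, &argv);

    MPI_Type_contiguous(3, MPI_FLOAT, &Point);   /* count, oldtype, newtype          */
    MPI_Type_commit(&Point);                     /* make it usable in communication  */

    /* A single send of one Point now transfers all three coordinates at once,
       e.g. MPI_Send(node, 1, Point, dest, tag, MPI_COMM_WORLD); */

    MPI_Type_free(&Point);
    MPI_Finalize();
    return 0;
}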

9 Self Test
4. In Section “Using MPI Derived Types for User-Defined Types”, the code for creating the derived data type MPI_SparseElt is shown. Using the MPI_Type_extent function, determine the size (in bytes) of a variable of type MPI_SparseElt. You should probably modify the code found in that section, then compile and run it to get the answer.

10 Answer
#include <stdio.h>
#include <mpi.h>

struct SparseElt {              /* representation of a sparse matrix element       */
    int location[2];            /* where the element belongs in the overall matrix */
    double value;               /* the value of the element                        */
};

struct SparseElt anElement;     /* a representative variable of this type */

int main(int argc, char **argv)
{
    int          lena[2];       /* the three arrays used to describe an MPI derived type     */
    MPI_Aint     loca[2];       /* their size reflects the number of components in SparseElt */
    MPI_Datatype typa[2];
    MPI_Aint     baseaddress, extent;
    MPI_Datatype MPI_SparseElt; /* a variable to hold the MPI type indicator for SparseElt */
    int err;

    err = MPI_Init(&argc, &argv);

11 Answer
    /* set up the MPI description of SparseElt */
    MPI_Address(&anElement, &baseaddress);

    lena[0] = 2;
    MPI_Address(&anElement.location, &loca[0]);
    loca[0] -= baseaddress;              /* find out the relative location */
    typa[0] = MPI_INT;

    lena[1] = 1;
    MPI_Address(&anElement.value, &loca[1]);
    loca[1] -= baseaddress;
    typa[1] = MPI_DOUBLE;

    MPI_Type_struct(2, lena, loca, typa, &MPI_SparseElt);
    MPI_Type_commit(&MPI_SparseElt);

    MPI_Type_extent(MPI_SparseElt, &extent);
    printf("Extent: %ld\n", (long) extent);

    MPI_Type_free(&MPI_SparseElt);
    err = MPI_Finalize();
    return 0;
}

12 Answer The correct value for the extent will depend on the system on which you ran this exercise. On most systems, the extent is 16 bytes (4 for each of the two ints and 8 for the double), but other values are possible. For example, on a T3E, where the ints are 8 bytes each rather than 4, the extent is 24.
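As a quick cross-check, not part of the original answer, the extent MPI reports should normally agree with the C compiler's own size for the struct, so a plain sizeof tells you what to expect on your system:

#include <stdio.h>

struct SparseElt {              /* same layout as in the answer above */
    int location[2];
    double value;
};

int main(void)
{
    /* On most systems this prints 16; on a machine with 8-byte ints it prints 24. */
    printf("sizeof(struct SparseElt) = %ld bytes\n", (long) sizeof(struct SparseElt));
    return 0;
}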

13 Course Problem

14 Description
– The new problem still implements a parallel search of an integer array.
– The program should find all occurrences of a certain integer which will be called the target.
– It should then calculate the average of the target value and its index.
– Both the target location and the average should be written to an output file.
– In addition, the program should read both the target value and all the array elements from an input file.

15 Course Problem
Exercise
– Modify your code from Chapter 4 to create a program that solves the new Course Problem.
– Use the techniques/routines of this chapter to make a new derived type called MPI_Pair that will contain both the target location and the average.
– All of the slave sends and the master receives must use the MPI_Pair type.

16 Solution
#include <stdio.h>
#include <mpi.h>
#define N 300

int main(int argc, char **argv)
{
    int i, target;                  /* local variables                                */
    int b[N], a[N/3];               /* a is the name of the array each slave searches */
    int rank, size, err;
    MPI_Status status;
    int end_cnt;
    int gi;                         /* global index */
    float ave;                      /* average      */
    FILE *sourceFile;
    FILE *destinationFile;
    int blocklengths[2] = {1, 1};                  /* initialize blocklengths array */
    MPI_Datatype types[2] = {MPI_INT, MPI_FLOAT};  /* initialize types array        */
    MPI_Aint displacements[2];
    MPI_Datatype MPI_Pair;

    err = MPI_Init(&argc, &argv);
    err = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    err = MPI_Comm_size(MPI_COMM_WORLD, &size);

17 Solution
    /* Initialize displacements array with memory addresses */
    err = MPI_Address(&gi, &displacements[0]);
    err = MPI_Address(&ave, &displacements[1]);

    /* This routine creates the new data type MPI_Pair */
    err = MPI_Type_struct(2, blocklengths, displacements, types, &MPI_Pair);
    /* This routine allows it to be used in communication */
    err = MPI_Type_commit(&MPI_Pair);

    if (size != 4) {
        printf("Error: You must use 4 processes to run this program.\n");
        return 1;
    }

    if (rank == 0) {
        /* File b.data has the target value on the first line */
        /* The remaining 300 lines of b.data have the values for the b array */
        sourceFile = fopen("b.data", "r");

        /* File found.data will contain the indices of b where the target is */
        destinationFile = fopen("found.data", "w");

        if (sourceFile == NULL) {
            printf("Error: can't access file b.data.\n");
            return 1;
        }
        else if (destinationFile == NULL) {
            printf("Error: can't create file for writing.\n");
            return 1;

18 Solution
        }
        else {
            /* Read in the target */
            fscanf(sourceFile, "%d", &target);

            /* Notice how i is used as the destination process for each send */
            for (i = 1; i <= 3; i++) {
                err = MPI_Send(&target, 1, MPI_INT, i, 9, MPI_COMM_WORLD);
            }

            /* Read in b array */
            for (i = 0; i < N; i++) {
                fscanf(sourceFile, "%d", &b[i]);
            }

            err = MPI_Send(&b[0],   100, MPI_INT, 1, 11, MPI_COMM_WORLD);
            err = MPI_Send(&b[100], 100, MPI_INT, 2, 11, MPI_COMM_WORLD);
            err = MPI_Send(&b[200], 100, MPI_INT, 3, 11, MPI_COMM_WORLD);

            end_cnt = 0;
            while (end_cnt != 3) {
                err = MPI_Recv(MPI_BOTTOM, 1, MPI_Pair, MPI_ANY_SOURCE, MPI_ANY_TAG,
                               MPI_COMM_WORLD, &status);
                if (status.MPI_TAG == 52)
                    end_cnt++;                    /* See Comment */
                else
                    fprintf(destinationFile, "P %d, %d %f\n", status.MPI_SOURCE, gi, ave);
            }

            fclose(sourceFile);
            fclose(destinationFile);
        }
    }   /* end of the master (rank 0) section */

19 Solution
    else {
        err = MPI_Recv(&target, 1, MPI_INT, 0, 9, MPI_COMM_WORLD, &status);
        err = MPI_Recv(a, 100, MPI_INT, 0, 11, MPI_COMM_WORLD, &status);

        /* Search the local array a and send the target locations to the master */
        for (i = 0; i < N/3; i++) {
            if (a[i] == target) {
                gi = (rank-1)*100 + i + 1;   /* equation to convert local index to global index */
                ave = (gi + target) / 2.0;
                err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 19, MPI_COMM_WORLD);
            }
        }

        gi = target;    /* Both are fake values                                    */
        ave = 3.45;     /* The point of this send is the "end" tag (See Chapter 4) */
        err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 52, MPI_COMM_WORLD);   /* See Comment */
    }

    err = MPI_Type_free(&MPI_Pair);
    err = MPI_Finalize();
    return 0;
}

20 Solution
Note: In the original slides, the sections of code shown in red are the new lines added to create the new data type MPI_Pair, and the sections shown in blue are the lines where MPI_Pair is used.
The results obtained from running this code are in the file found.data, which contains the following:
P 1, 62, 36.
P 2, 183, 96.5
P 3, 271, 140.5
P 3, 291, 150.5
P 3, 296, 153.
Notice that in this new parallel version the master outputs three items: the processor rank and the two parts of the MPI_Pair data type.

