Derived Datatypes and Related Features Self Test with solution.

Derived Datatypes and Related Features Self Test with solution

Self Test

1. You are writing a parallel program to be run on 100 processors. Each processor is working with only one section of a skeleton outline of a 3-D model of a house. In the course of constructing the model house, each processor often has to send the three Cartesian coordinates (x, y, z) of nodes that are to be used to make the boundaries between the house sections. Each coordinate will be a real value. Why would it be advantageous for you to define a new datatype called Point that contains the three coordinates?

Self Test

a) My program will be more readable and self-commenting.
b) Since many x, y, z values will be used in MPI communication routines, they can be sent as a single entity of type Point instead of packing and unpacking three reals each time.
c) Since all three values are real, there is no purpose in making a derived datatype.
d) It would be impossible to use MPI_Pack and MPI_Unpack to send the three real values.

Answer

a) Incorrect. Actually, this is a partially correct answer. Readability *is* an advantage, but a secondary one compared to the correct answer.
b) Correct! If you find yourself repeatedly transferring a set of data, make it your own derived datatype.
c) Incorrect. This is a matter of opinion. You could make a Point array containing three reals, but that is an awkward way to handle the problem and does not take advantage of MPI's capabilities.
d) Incorrect. In fact, MPI_Pack and MPI_Unpack are the most versatile routines for transferring any sort of heterogeneous data.
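To make answers (b) and (d) concrete, here is a minimal sketch (not part of the original self test) of the pack/unpack alternative that a derived datatype like Point replaces. It assumes the program is run with at least two processes, and the 64-byte packing buffer is an arbitrary size that comfortably holds three floats.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    float x = 1.0f, y = 2.0f, z = 3.0f;   /* one (x, y, z) coordinate */
    char  buf[64];                        /* packing buffer */
    int   pos, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* intended to be run with at least two processes */
    if (rank == 0) {
        pos = 0;                          /* pack x, y, z one after another */
        MPI_Pack(&x, 1, MPI_FLOAT, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
        MPI_Pack(&y, 1, MPI_FLOAT, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
        MPI_Pack(&z, 1, MPI_FLOAT, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
        MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        pos = 0;                          /* unpack in the same order */
        MPI_Unpack(buf, sizeof(buf), &pos, &x, 1, MPI_FLOAT, MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof(buf), &pos, &y, 1, MPI_FLOAT, MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof(buf), &pos, &z, 1, MPI_FLOAT, MPI_COMM_WORLD);
        printf("unpacked (%.1f, %.1f, %.1f)\n", x, y, z);
    }

    MPI_Finalize();
    return 0;
}

The pack/unpack calls work, but every transfer repeats the same bookkeeping; a committed Point type moves that bookkeeping into a one-time type definition.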

Self Test

2. What is the simplest MPI derived datatype creation function to make the Point datatype described in Problem 1? (Three of the answers given can actually be used to construct the type, but one is the simplest.)
a) MPI_TYPE_CONTIGUOUS
b) MPI_TYPE_VECTOR
c) MPI_TYPE_STRUCT
d) MPI_TYPE_COMMIT

Answer

a) Correct! It was designed just for this situation.
b) Incorrect. Close, but do entire arrays make up your new datatype?
c) Incorrect. Close, but are you combining different datatypes in the new type?
d) Incorrect. No. Commit just allows MPI to recognize the new type; it does not create it.

Self Test

3. The C syntax for the MPI_TYPE_CONTIGUOUS routine is MPI_Type_contiguous(count, oldtype, newtype). The argument names should be fairly self-explanatory, but if you want their exact definitions you can look them up at the MPI home page. For the derived datatype we have been discussing in the previous problems, what would be the values for the count, oldtype, and newtype arguments, respectively?
a) 2, MPI_REAL, Point
b) 3, REAL, Point
c) 3, MPI_INTEGER, Coord
d) 3, MPI_REAL, Point

Answer

a) Incorrect. Close, but are we working in 2 or 3 dimensions?
b) Incorrect. Sorry, you must use the MPI datatype MPI_REAL.
c) Incorrect. Two errors. Go back and read Question 1 carefully.
d) Correct! Good job.
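For illustration only (not part of the original self test), here is a minimal sketch of building and using the Point type with the arguments from answer (d). In C the natural choice for the old type is MPI_FLOAT (MPI_REAL is the Fortran name); the sketch assumes the program is run with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    float point[3] = {1.0f, 2.0f, 3.0f};    /* one (x, y, z) node */
    MPI_Datatype Point;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Question 3's arguments: count = 3, oldtype = a real type, newtype = Point */
    MPI_Type_contiguous(3, MPI_FLOAT, &Point);
    MPI_Type_commit(&Point);                /* commit before using it in communication */

    /* rank 0 sends one Point (not three separate floats); rank 1 receives it */
    if (rank == 0) {
        MPI_Send(point, 1, Point, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(point, 1, Point, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received (%.1f, %.1f, %.1f)\n", point[0], point[1], point[2]);
    }

    MPI_Type_free(&Point);
    MPI_Finalize();
    return 0;
}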

Self Test

4. In the section "Using MPI Derived Types for User-Defined Types", the code for creating the derived datatype MPI_SparseElt is shown. Using the MPI_Type_extent function, determine the size (in bytes) of a variable of type MPI_SparseElt. You should probably modify the code found in that section, then compile and run it to get the answer.

Answer

#include <stdio.h>
#include <mpi.h>

struct SparseElt {               /* representation of a sparse matrix element */
    int    location[2];          /* where the element belongs in the overall matrix */
    double value;                /* the value of the element */
};

struct SparseElt anElement;      /* a representative variable of this type */

int main(int argc, char **argv)
{
    int          lena[2];        /* the three arrays used to describe an MPI derived type; */
    MPI_Aint     loca[2];        /* their size reflects the number of components in SparseElt */
    MPI_Datatype typa[2];
    MPI_Aint     baseaddress, extent;
    MPI_Datatype MPI_SparseElt;  /* a variable to hold the MPI type indicator for SparseElt */
    int err;

    err = MPI_Init(&argc, &argv);

Answer

    /* set up the MPI description of SparseElt */
    MPI_Address(&anElement, &baseaddress);

    lena[0] = 2;
    MPI_Address(&anElement.location, &loca[0]);
    loca[0] -= baseaddress;                   /* find out the relative location */
    typa[0] = MPI_INT;

    lena[1] = 1;
    MPI_Address(&anElement.value, &loca[1]);
    loca[1] -= baseaddress;
    typa[1] = MPI_DOUBLE;

    MPI_Type_struct(2, lena, loca, typa, &MPI_SparseElt);
    MPI_Type_commit(&MPI_SparseElt);

    MPI_Type_extent(MPI_SparseElt, &extent);
    printf("Extent: %ld\n", (long) extent);   /* extent is an MPI_Aint, so cast for printing */

    MPI_Type_free(&MPI_SparseElt);

    err = MPI_Finalize();
    return 0;
}

Answer The correct value for the extent will depend on the system on which you ran this exercise. On most systems, the extent is 16 bytes (4 for each of the two ints and 8 for the double), but other values are possible. For example, on a T3E, where the ints are 8 bytes each rather than 4, the extent is 24.

Course Problem

Description
– The new problem still implements a parallel search of an integer array.
– The program should find all occurrences of a certain integer, which will be called the target.
– It should then calculate the average of the target value and its index.
– Both the target location and the average should be written to an output file.
– In addition, the program should read both the target value and all the array elements from an input file.

Course Problem

Exercise
– Modify your code from Chapter 4 to create a program that solves the new Course Problem.
– Use the techniques/routines of this chapter to make a new derived type called MPI_Pair that will contain both the target location and the average.
– All of the slave sends and the master receives must use the MPI_Pair type.

Solution

#include <stdio.h>
#include <mpi.h>

#define N 300

int main(int argc, char **argv)
{
    int i, target;                    /* local variables */
    int b[N], a[N/3];                 /* a is the name of the array each slave searches */
    int rank, size, err;
    MPI_Status status;
    int end_cnt;
    int gi;                           /* global index */
    float ave;                        /* average */
    FILE *sourceFile;
    FILE *destinationFile;

    int blocklengths[2] = {1, 1};                    /* initialize blocklengths array */
    MPI_Datatype types[2] = {MPI_INT, MPI_FLOAT};    /* initialize types array */
    MPI_Aint displacements[2];
    MPI_Datatype MPI_Pair;

    err = MPI_Init(&argc, &argv);
    err = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    err = MPI_Comm_size(MPI_COMM_WORLD, &size);

Solution

    /* Initialize displacements array with memory addresses */
    err = MPI_Address(&gi, &displacements[0]);
    err = MPI_Address(&ave, &displacements[1]);

    /* This routine creates the new data type MPI_Pair */
    err = MPI_Type_struct(2, blocklengths, displacements, types, &MPI_Pair);
    /* This routine allows it to be used in communication */
    err = MPI_Type_commit(&MPI_Pair);

    if (size != 4) {
        printf("Error: You must use 4 processes to run this program.\n");
        return 1;
    }

    if (rank == 0) {
        /* File b.data has the target value on the first line */
        /* The remaining 300 lines of b.data have the values for the b array */
        sourceFile = fopen("b.data", "r");

        /* File found.data will contain the indices of b where the target is */
        destinationFile = fopen("found.data", "w");

        if (sourceFile == NULL) {
            printf("Error: can't access b.data.\n");
            return 1;
        }
        else if (destinationFile == NULL) {
            printf("Error: can't create file for writing.\n");
            return 1;

Solution } else { /* Read in the target */ fscanf(sourceFile, "%d", &target); for (i=1; i<=3; i++) /*Notice how i is used as the destination process for each send*/ { err = MPI_Send(&target, 1, MPI_INT, i, 9, MPI_COMM_WORLD); } /* Read in b array */ for (i=0; i<N; i++) { fscanf(sourceFile,"%d", &b[i]); } err = MPI_Send(&b[0], 100, MPI_INT, 1, 11, MPI_COMM_WORLD); err = MPI_Send(&b[100], 100, MPI_INT, 2, 11, MPI_COMM_WORLD); err = MPI_Send(&b[200], 100, MPI_INT, 3, 11, MPI_COMM_WORLD); end_cnt = 0; while (end_cnt != 3) { err = MPI_Recv(MPI_BOTTOM, 1, MPI_Pair, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status); if (status.MPI_TAG == 52) end_cnt++;/*See Comment*/ else fprintf(destinationFile,"P %d, %d %f\n", status.MPI_SOURCE, gi, ave); } fclose(sourceFile); fclose(destinationFile); }

Solution

    else {
        err = MPI_Recv(&target, 1, MPI_INT, 0, 9, MPI_COMM_WORLD, &status);
        err = MPI_Recv(a, 100, MPI_INT, 0, 11, MPI_COMM_WORLD, &status);

        /* Search the a array and send the target locations to the master */
        for (i = 0; i < N/3; i++) {
            if (a[i] == target) {
                gi = (rank - 1) * 100 + i + 1;   /* Equation to convert local index to global index */
                ave = (gi + target) / 2.0;
                err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 19, MPI_COMM_WORLD);
            }
        }

        gi = target;     /* Both are fake values */
        ave = 3.45;
        /* The point of this send is the "end" tag (See Chapter 4) */
        err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 52, MPI_COMM_WORLD);   /* See Comment */
    }

    err = MPI_Type_free(&MPI_Pair);
    err = MPI_Finalize();
    return 0;
}

Solution

Note: The lines that declare, build, and commit MPI_Pair (the blocklengths, types, and displacements arrays and the MPI_Address, MPI_Type_struct, and MPI_Type_commit calls) are the new additions to this version of the program; the slave sends and the master receives now use the MPI_Pair type. The results obtained from running this code are in the file found.data, which contains the following:

P 1, 62, 36.0
P 2, 183, 96.5
P 3, 271, 140.5
P 3, 291, 150.5
P 3, 296, 153.0

Notice that in this new parallel version the master outputs three items per line: the sending processor's rank and the two parts of the MPI_Pair data type (the global index and the average).
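If you want to experiment with the MPI_Pair technique outside the full course problem, the following standalone sketch (not part of the original course materials; the ranks, tags, and values are made up) sends one (index, average) pair from rank 1 to rank 0. Because the displacements are the absolute addresses of gi and ave, each process builds its own MPI_Pair and the send/receive buffer is MPI_BOTTOM; the sketch uses the newer names MPI_Get_address and MPI_Type_create_struct. Run it with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int   gi  = 0;       /* global index */
    float ave = 0.0f;    /* average */
    int blocklengths[2] = {1, 1};
    MPI_Datatype types[2] = {MPI_INT, MPI_FLOAT};
    MPI_Aint displacements[2];
    MPI_Datatype MPI_Pair;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Absolute addresses of gi and ave become the displacements, so the
       "buffer" passed to MPI_Send/MPI_Recv is MPI_BOTTOM. */
    MPI_Get_address(&gi,  &displacements[0]);
    MPI_Get_address(&ave, &displacements[1]);
    MPI_Type_create_struct(2, blocklengths, displacements, types, &MPI_Pair);
    MPI_Type_commit(&MPI_Pair);

    if (rank == 1) {
        gi = 62; ave = 36.0f;                  /* made-up pair */
        MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 19, MPI_COMM_WORLD);
    } else if (rank == 0) {
        MPI_Recv(MPI_BOTTOM, 1, MPI_Pair, 1, 19, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received gi = %d, ave = %f\n", gi, ave);
    }

    MPI_Type_free(&MPI_Pair);
    MPI_Finalize();
    return 0;
}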