2.1 Message-Passing Computing ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, 2012. Jan 14, 2013.
2.2 Software Tools for Clusters
Late 1980s: Parallel Virtual Machine (PVM) developed. Became very popular.
Mid 1990s: Message-Passing Interface (MPI) standard defined.
Both are based upon the message-passing parallel programming model, and both provide a set of user-level libraries for message passing, used with sequential programming languages (C, C++, ...).
2.3 MPI (Message Passing Interface)
Message-passing library standard developed by a group of academics and industrial partners to foster more widespread use and portability. Defines the routines, not the implementation. Several free implementations exist.
2.4 Message passing concept using library routines
2.5 Message routing between computers is typically done by daemon processes installed on the computers that form the "virtual machine".
[Figure: application programs (executables) running on workstations, with a daemon process on each workstation sending messages through the network.]
More than one process can run on each computer.
2.6 mpd daemon processes (in the MPICH implementation of MPI used on our cluster)
For MPI programs to operate between servers, MPI "mpd" daemon processes must be running on each server to form a ring. Execute the command:
mpdtrace or mpdtrace -l
which will list those servers where mpd is running.
2.7 Message-Passing Programming Using User-Level Message-Passing Libraries
Two primary mechanisms are needed:
1. A method of creating processes for execution on different computers
2. A method of sending and receiving messages
2.8 Creating processes on different computers
2.9 Multiple Program, Multiple Data (MPMD) model
[Figure: a different source file for each processor, each compiled to suit its processor, producing a separate executable for processor 0 through processor p - 1.]
Different programs are executed by each processor.
2.10 Single Program, Multiple Data (SPMD) model
[Figure: one source file compiled to suit each processor, producing an executable for processor 0 through processor p - 1.]
The basic MPI way: the same program is executed by each processor. Control statements select different parts for each processor to execute.
2.11 Static process creation
All executables are started together, done when one starts the compiled programs. This is the normal MPI way. It is also possible to dynamically start processes from within an executing process (fork) in MPI-2 with MPI_Comm_spawn(), which might find applicability if one does not initially know how many processes are needed.
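A minimal sketch of MPI_Comm_spawn() (not from the original slides; the "worker" executable name and the count of 4 are hypothetical):

MPI_Comm intercomm;
/* Spawn 4 copies of a hypothetical "worker" executable; the parent and
   children can then communicate through the intercommunicator. */
MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
               0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);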
2.12 MPI program structure

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);    /* takes the command line arguments */
    // Code executed by all processes
    MPI_Finalize();
}

See later about executing code.
2.13 How is the number of processes determined?
When you run your MPI program, you can specify how many processes you want on the command line:
mpirun -np 8 <program>
The -np option tells mpirun to run your parallel program using the specified number of processes.
Slide based upon slide from C. Ferner, UNC-W
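Inside the program, the process count chosen with -np can be read back with MPI_Comm_size() (introduced with the later examples); a minimal sketch:

int size;
MPI_Comm_size(MPI_COMM_WORLD, &size);   /* size is 8 for "mpirun -np 8" */
printf("Running with %d processes\n", size);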
2.14 In MPI, processes within a defined "communicating group" are given a number called a rank, starting from zero onwards. The program uses control constructs, typically IF statements, to direct processes to perform specific actions. Example:

if (rank == 0) ... /* do this */;
if (rank == 1) ... /* do this */;
2.15 Master-Slave approach
Usually the computation is constructed as a master-slave model: one process (the master) performs one set of actions, and all the other processes (the slaves) perform identical actions although on different data, i.e.:

if (rank == 0) ... /* master do this */;
else ... /* all slaves do this */;
2.16 Methods of sending and receiving messages
2.17 Basic "point-to-point" Send and Receive Routines
Passing a message between processes using send() and recv() library calls (generic syntax; actual formats later):
[Figure: process 1 executes send(&x, 2); process 2 executes recv(&y, 1); the data moves from x in process 1 to y in process 2.]
2.18 MPI point-to-point message passing using MPI_Send() and MPI_Recv() library calls
2.19 Semantics of MPI_Send() and MPI_Recv()
Both are called blocking, which in MPI means that the routine waits until all its local actions have taken place before returning. After returning, any local variables used can be altered without affecting the message transfer.
MPI_Send() - The message may not have reached its destination, but the process can continue in the knowledge that the message is safely on its way.
MPI_Recv() - Returns when the message has been received and the data collected. Will cause the process to stall until the message is received.
Other versions of MPI_Send() and MPI_Recv() have different semantics.
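To illustrate (a sketch, not from the original slides): once MPI_Send() returns, the send buffer can be reused without corrupting the message:

int x = 42;
MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* rank 0 sends to rank 1 */
x = 99;  /* safe: the blocking send has completed its local actions,
            so altering x cannot affect the message in transit */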
2.20 Message Tag
Used to differentiate between different types of messages being sent. The message tag is carried within the message. If special type matching is not required, a wild card message tag is used. Then recv() will match with any send().
2.21 Message Tag Example
To send a message, x, with message tag 5 from a source process, 1, to a destination process, 2, and assign it to y:
[Figure: process 1 executes send(&x, 2, 5); process 2 executes recv(&y, 1, 5), which waits for a message from process 1 with a tag of 5; the data moves from x to y.]
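In MPI syntax (a sketch; an int payload and the variable names are assumed), the same exchange would be:

/* process 1 */ MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);
/* process 2 */ MPI_Recv(&y, 1, MPI_INT, 1, 5, MPI_COMM_WORLD, &status);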
2.22 Unsafe message passing - Example
[Figure: processes 0 and 1 each call send(…,1,…) and recv(…,0,…), and each also calls a library routine lib() that performs its own sends and receives. (a) Intended behavior: the user's sends match the user's recvs. (b) Possible behavior: because messages are matched on source and destination alone, a message intended for the user's recv can instead be taken by a recv inside lib(), or vice versa.]
2.23 MPI Solution: "Communicators"
A communicator defines a communication domain - a set of processes that are allowed to communicate between themselves. Communication domains of libraries can be separated from that of a user program. Communicators are used in all point-to-point and collective MPI message-passing communications.
Note:
Intracommunicator - for communicating within a single group of processes.
Intercommunicator - for communicating between two or more groups of processes.
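A minimal sketch (not from the original slides) of forming a new communicator with MPI_Comm_split(), one of the MPI routines for creating communication domains; rank is assumed to hold the process rank in MPI_COMM_WORLD:

MPI_Comm subcomm;
int color = rank % 2;   /* even ranks form one group, odd ranks another */
MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);
/* subcomm is now a separate communication domain */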
2.24 Default Communicator MPI_COMM_WORLD Exists as first communicator for all processes existing in the application. A set of MPI routines exists for forming communicators. Processes have a “rank” in a communicator.
2.25 Using SPMD Computational Model

int main(int argc, char *argv[]) {
    int myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
    if (myrank == 0)
        master();
    else
        slave();
    MPI_Finalize();
}

where master() and slave() are to be executed by the master process and the slave processes, respectively.
2.26 Parameters of blocking send

MPI_Send(buf, count, datatype, dest, tag, comm)

buf      - address of send buffer
count    - number of items to send
datatype - datatype of each item
dest     - rank of destination process
tag      - message tag
comm     - communicator
2.27 Parameters of blocking receive

MPI_Recv(buf, count, datatype, src, tag, comm, status)

buf      - address of receive buffer
count    - maximum number of items to receive
datatype - datatype of each item
src      - rank of source process
tag      - message tag
comm     - communicator
status   - status after operation
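The status argument can be examined after the call returns; a sketch (the buffer buf is assumed to be an int array of at least 100 elements) showing how to recover the actual source, tag, and item count after a wildcard receive (see "Any source or tag" below):

int count;
MPI_Status status;
MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_INT, &count);   /* actual number of items received */
printf("source = %d, tag = %d, count = %d\n",
       status.MPI_SOURCE, status.MPI_TAG, count);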
2.28 MPI Datatypes (defined in mpi.h)
MPI_BYTE, MPI_PACKED, MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, MPI_LONG_DOUBLE, MPI_UNSIGNED_CHAR
Slide from C. Ferner, UNC-W
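The MPI datatype must correspond to the C type of the buffer; for example (a sketch):

double values[50];
/* MPI_DOUBLE corresponds to C double, MPI_INT to C int, and so on */
MPI_Send(values, 50, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);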
2.29 Any source or tag
In MPI_Recv(), the source can be MPI_ANY_SOURCE and the tag can be MPI_ANY_TAG. These cause the receive to take any message destined for the current process regardless of source and/or regardless of tag. Example:

MPI_Recv(message, 256, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

Slide based upon slide from C. Ferner, UNC-W
2.30 Program Examples
To send an integer x from process 0 to process 1:

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* find rank */
if (myrank == 0) {
    int x;
    MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
} else if (myrank == 1) {
    int x;
    MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
}
2.31 Sample MPI Hello World program

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char **argv) {
    char message[20];
    int i, rank, size, type = 99;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        strcpy(message, "Hello, world");
        for (i = 1; i < size; i++)
            MPI_Send(message, 13, MPI_CHAR, i, type, MPI_COMM_WORLD);
    } else
        MPI_Recv(message, 20, MPI_CHAR, 0, type, MPI_COMM_WORLD, &status);
    printf("Message from process = %d : %.13s\n", rank, message);
    MPI_Finalize();
    return 0;
}
2.32 The program sends the message "Hello, world" from the master process (rank == 0) to each of the other processes (rank != 0). Then, all processes execute a printf statement. In MPI, standard output is automatically redirected from the remote computers to the user's console, so the final result will be:

Message from process = 1 : Hello, world
Message from process = 0 : Hello, world
Message from process = 2 : Hello, world
Message from process = 3 : Hello, world
...

except that the order of the messages might be different; it is unlikely to be in ascending order of process ID, since it depends upon how the processes are scheduled.
2.33 Another Example (array)

int array[100];
...          // rank 0 fills the array with data
if (rank == 0)
    MPI_Send(array, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);           /* count, dest, tag */
else if (rank == 1)
    MPI_Recv(array, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  /* count, source, tag */

Slide based upon slide from C. Ferner, UNC-W
2.34 Another Example (Ring)
Each process (except the master) receives a token from the process with rank 1 less than its own rank. Then each process increments the token and sends it to the next process (with rank 1 more than its own). The last process sends the token to the master.
[Figure: processes 0 through 7 arranged in a ring.]
Slide based upon slides from C. Ferner, UNC-W
2.35 Another Example (Ring)

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int token, NP, myrank;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &NP);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
2.36 Another Example (Ring)

    if (myrank != 0) {
        // Everyone except master receives from processor 1 less
        // than its own rank.
        MPI_Recv(&token, 1, MPI_INT, myrank - 1, 0, MPI_COMM_WORLD,
                 &status);
        printf("Process %d received token %d from process %d\n",
               myrank, token, myrank - 1);
2.37 Another Example (Ring)

    } else {
        // Master sets initial value before sending.
        token = -1;
    }
    token += 2;
    MPI_Send(&token, 1, MPI_INT, (myrank + 1) % NP, 0, MPI_COMM_WORLD);
2.38 Another Example (Ring)

    // Now process 0 can receive from the last process.
    if (myrank == 0) {
        MPI_Recv(&token, 1, MPI_INT, NP - 1, 0, MPI_COMM_WORLD, &status);
        printf("Process %d received token %d from process %d\n",
               myrank, token, NP - 1);
    }
    MPI_Finalize();
}
2.39 Results (Ring)

Process 1 received token 1 from process 0
Process 2 received token 3 from process 1
Process 3 received token 5 from process 2
Process 4 received token 7 from process 3
Process 5 received token 9 from process 4
Process 6 received token 11 from process 5
Process 7 received token 13 from process 6
Process 0 received token 15 from process 7
2.40 Setting Up the Message Passing Environment
Usually the computers are specified in a file, called a hostfile or machines file. The file contains the names of the computers and possibly the number of processes that should run on each computer. An implementation-specific algorithm selects computers from the list to run the user programs.
2.41 Users may create their own machines file for their program. Example:

coit-grid01.uncc.edu
coit-grid02.uncc.edu
coit-grid03.uncc.edu
coit-grid04.uncc.edu
coit-grid05.uncc.edu

If a machines file is not specified, a default machines file is used, or the program may only run on a single computer. (Note: for our cluster, one uses local names; see the assignment instructions.)
2.42 Compiling/Executing MPI Programs
There are minor differences in the command lines required, depending upon the MPI implementation. For the assignments, we will use MPICH or MPICH-2. Generally, a machines file needs to be present that lists all the computers to be used. MPI then uses those computers listed. Otherwise it will simply run on one computer.
2.43 MPICH Commands
Two basic commands:
mpicc - a script to compile MPI programs
mpiexec - the MPI-2 standard command*
*mpiexec replaces the earlier mpirun command, although mpirun still exists.
2.44 Compiling/executing (SPMD) MPI program
For MPICH, at a command line:
To start MPI: nothing special (make sure the mpd daemons are running).
To compile MPI programs:
for C: mpicc -o prog prog.c
for C++: mpiCC -o prog prog.cpp
To execute an MPI program:
mpiexec -n no_procs prog
where no_procs is a positive integer.
2.45 Executing an MPICH program on multiple computers
Create a file called, say, "machines" containing the list of machines:

coit-grid01.uncc.edu
coit-grid02.uncc.edu
coit-grid03.uncc.edu
coit-grid04.uncc.edu
coit-grid05.uncc.edu

(Note: for our cluster, one uses local names; see the assignment instructions.)
2.46 mpiexec -machinefile machines -n 4 prog
would run prog with four processes. Each process would execute on one of the machines in the list. MPI cycles through the list of machines when giving processes to machines. One can also specify the number of processes on a particular machine by adding that number after the machine name.
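For example (a sketch; the exact separator can vary between MPICH versions, MPICH commonly uses a colon):

coit-grid01.uncc.edu:2
coit-grid02.uncc.edu:4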
2.47 Measuring Execution Time
MPI provides the routine MPI_Wtime() for returning the time (in seconds) from some point in the past. To measure the execution time between point L1 and point L2 in the code, one might have a construction such as:

double start_time, end_time, exe_time;
...
L1: start_time = MPI_Wtime();
...
L2: end_time = MPI_Wtime();
exe_time = end_time - start_time;
2.48 Using C time routines
To measure the execution time between point L1 and point L2 in the code, one might have a construction such as:

time_t t1, t2;
double elapsed_time;
...
L1: time(&t1);   /* start timer */
...
L2: time(&t2);   /* stop timer */
elapsed_time = difftime(t2, t1);   /* time = t2 - t1 */
printf("Elapsed time = %5.2f secs\n", elapsed_time);
2.49 gettimeofday()

#include <sys/time.h>

double elapsed_time;
struct timeval tv1, tv2;
gettimeofday(&tv1, NULL);
...   /* measure the time to execute this section */
gettimeofday(&tv2, NULL);
elapsed_time = (tv2.tv_sec - tv1.tv_sec) +
               ((tv2.tv_usec - tv1.tv_usec) / 1000000.0);

Using the time() or gettimeofday() routines may be useful if you want to compare with a sequential C version of the program with the same libraries.
2.50 Visualization Tools
Programs can be watched as they are executed in a space-time diagram (or process-time diagram):
[Figure: processes 1-3 plotted against time, with periods of computing, waiting, and message-passing system routines, and messages passing between the processes.]
Visualization tools are available for MPI, e.g., Upshot.
2.51 Eclipse IDE PTP Parallel Tools
A recent version of the Eclipse IDE supports the development of parallel programs (MPI, OpenMP). We are still evaluating this, but it looks really good.
2.52 Eclipse - PTP
http://download.eclipse.org/tools/ptp/docs/ptp-sc11-slides-final.pdf
2.53 Next topic
Discussion of the first assignment:
- To write and execute some simple MPI programs.
- Will include timing execution.