12b.1 Introduction to Message-passing with MPI
UNC-Wilmington, C. Ferner, 2008
Nov 4, 2008
12b.2 Basics of MPI
12b.3 MPI Startup and Cleanup
– Execution begins with a single processor
– Multiple threads are spawned on multiple processors by the MPI_Init() function
– At the end of the program, all processors should call the MPI_Finalize() function to kill the threads and clean up
12b.4 MPI Startup and Cleanup (continued)

    #include <mpi.h>

    main (int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        ...
        MPI_Finalize();
    }

[Figure: a single thread t0 on processor P0 forks at MPI_Init into threads t0 on P0, t1 on P1, ..., t(n-1) on P(n-1), which join back into t0 on P0 at MPI_Finalize.]

Instructions between the Init and Finalize are executed by all threads.
12b.5 Compiling and Running an MPI Program

To compile an MPI program:

    mpicc myprogram.c -o myprogram

To run an MPI program:

    mpirun -nolocal -np 4 myprogram [your program arguments]
12b.6 Who am I?
– The threads are given ids from 0 to P-1, where P is the number of processors given on the mpirun command line
– Two useful functions are:
  – MPI_Comm_rank() – get the current thread's id
  – MPI_Comm_size() – get the number of processors
12b.7 Who am I? (continued)

    #include <stdio.h>
    #include <unistd.h>
    #include <mpi.h>

    int main (int argc, char *argv[])
    {
        int mypid, size;
        char name[BUFSIZ];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &mypid);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        gethostname(name, BUFSIZ);
        printf("I am thread %d running on %s\n", mypid, name);
        if (mypid == 0) {
            printf("There are %d threads\n", size);
        }
        MPI_Finalize();
        return 0;
    }
12b.8 One-to-one Communication

Sending a message:
  – MPI_Send(buffer, count, datatype, destination, tag, communicator)

Receiving a message:
  – MPI_Recv(buffer, count, datatype, source, tag, communicator, status)
12b.9 One-to-one Communication (continued)

where:
  – buffer is the address of the data (put an "&" in front of a scalar variable, but not an array variable)
  – count is the number of data items
  – destination and source are the thread ids of the destination and source threads
  – tag is a user-defined message tag (required when multiple messages are sent between the same pair of processors)
12b.10 One-to-one Communication (continued)

where:
  – communicator is a communication domain; we will only use MPI_COMM_WORLD, which includes all processors
  – datatype tells what type of data is in the buffer, e.g. MPI_CHAR, MPI_INT, MPI_FLOAT, MPI_DOUBLE, MPI_PACKED

A short sketch using both calls follows.
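As a minimal sketch of how these parameters fit together (this fragment is an illustration, not from the original slides), thread 0 below sends two integers to thread 1 under distinct tags; it assumes the program is started with at least two processors:

    #include <stdio.h>
    #include <mpi.h>

    int main (int argc, char *argv[])
    {
        int rank;
        int a = 10, b = 20;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            // Two messages to the same destination: the tags (0 and 1)
            // let the receiver tell them apart
            MPI_Send(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Send(&b, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        } else if (rank == 1) {
            // "&" is needed because the buffers are scalar variables;
            // each receive matches the send with the same source and tag
            MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Recv(&b, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
            printf("Thread 1 received %d and %d\n", a, b);
        }

        MPI_Finalize();
        return 0;
    }

Because both messages travel between the same pair of threads, the tags are what let each MPI_Recv match the intended MPI_Send, which is the point made on slide 12b.9.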
12b.11 Hello World with Communication

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <mpi.h>

    int main (int argc, char *argv[])
    {
        int my_rank;
        int p;
        int source;
        char message[BUFSIZ];
        char machinename[BUFSIZ];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &p);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
        ...
12b.12 Hello World with Communication (continued)

        ...
        if (my_rank == 0) {
            // This is done by the master
            gethostname(machinename, BUFSIZ);
            printf("Master thread is ready, running on %s\n", machinename);
            for (source = 1; source < p; source++) {
                MPI_Recv(message, BUFSIZ, MPI_CHAR, source, 0,
                         MPI_COMM_WORLD, &status);
                printf("%s\n", message);
            }
        }
        ...
12b.13 Hello World with Communication (continued)

        ...
        // All threads except the master will send a "hello"
        else {
            gethostname(machinename, BUFSIZ);
            sprintf(message, "Greetings from thread %d, running on %s!",
                    my_rank, machinename);
            MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }
12b.14 Hello World Results

    $ mpirun -nolocal -np 8 hello
    Master thread is ready, running on compute-0-1.local
    Greetings from thread 1, running on compute-0-1.local!
    Greetings from thread 2, running on compute-0-1.local!
    Greetings from thread 3, running on compute-0-1.local!
    Greetings from thread 4, running on compute-0-2.local!
    Greetings from thread 5, running on compute-0-2.local!
    Greetings from thread 6, running on compute-0-2.local!
    Greetings from thread 7, running on compute-0-2.local!