CS 591x Overview of MPI-2
Major Features of MPI-2
Superset of MPI-1
Parallel IO (previously discussed)
Standard process startup
Dynamic process management
Remote memory access
MPI-2
MPI-1 includes no specification for a process launcher; that is left to individual implementations. The launcher is usually "mpirun", but even mpirun can vary across implementations: its options, parameters, and keywords can differ.
MPI-2
MPI-2 includes a recommendation for a standard method of starting MPI processes. The result is mpiexec: its arguments and parameters have standard meanings, and standard means portable.
mpiexec arguments
-n [numprocesses] - the number of processes requested (like -n in mpirun)
  mpiexec -n 12 myprog
-soft [minprocesses] - start the job with as few as minprocesses processes if -n processes are not available
  mpiexec -n 12 -soft 6 myprog
mpiexec arguments
-soft [n:m] - a soft request can also be a range
  mpiexec -n 12 -soft 4:12 myprog
-host [hostname] - requests execution on a specific host
  mpiexec -n 4 -host node4 myprog
-arch [archname] - requests that the job start on a specific architecture
mpiexec arguments
-file [filename] - requests that the job run per the specifications contained in filename; supports the execution of multiple executables
  mpiexec -file specfile
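The contents of the specification file are implementation-dependent; the MPI-2 recommendation does not fix a format. Purely as a hypothetical sketch, such a file might list one argument set per line (the hostnames and program names here are made up for illustration):

```
-n 4 -host node1 ocean -gridfile ocean1.grd
-n 2 -host node2 atmos atmos.grd
```

Consult your implementation's mpiexec documentation for the actual format it accepts.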
Remote Memory Access
Recall that in MPI-1, message passing is essentially a push operation: the sender has to initiate the communication, or actively participate in the communication operation (collective communications). Communication is symmetrical.
Remote Memory Access
How would you handle a situation where process x decides that it needs the value of variable a in process y, and process y does not initiate a communication operation?
Remote Memory Access
MPI-2 has the answer: Remote Memory Access allows a process to initiate and carry out an asymmetrical communication operation, assuming the processes have set up the appropriate objects: windows.
Remote Memory Access
int MPI_Win_create(void *var, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win *win)
Remote Memory Access
var - the variable (buffer) to appear in the window
size - the size of var in bytes
disp_unit - the displacement unit, in bytes
info - key-value pairs expressing "hints" to MPI-2 on how to do the Win_create
comm - the communicator whose processes can share the window
win - the name of the window object
Remote Memory Access
int MPI_Win_fence(int assert, MPI_Win win)
assert - usually 0
win - the name of the window
MPI_Win_fence separates RMA communication into epochs; calls such as MPI_Get are made between pairs of fences.
Remote Memory Access
int MPI_Get(void *var, int count, MPI_Datatype datatype, int target_rank, MPI_Aint displacement, int target_count, MPI_Datatype target_datatype, MPI_Win win)
Remote Memory Access
int MPI_Win_free(MPI_Win *win)
Remote Memory Access
int MPI_Accumulate(void *var, int count, MPI_Datatype datatype, int target_rank, MPI_Aint displacement, int target_count, MPI_Datatype target_datatype, MPI_Op operation, MPI_Win win)
Remote Memory Access
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
    MPI_Win_create(&n, sizeof(int), 1, MPI_INFO_NULL, MPI_COMM_WORLD, &nwin);
} else {
    MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &nwin);
}
...
Remote Memory Access
MPI_Win_fence(0, nwin);
if (myrank != 0)
    MPI_Get(&n, 1, MPI_INT, 0, 0, 1, MPI_INT, nwin);
MPI_Win_fence(0, nwin);
Remote Memory Access
By the way, there is also an MPI_Put, which writes data into another process's window (the mirror image of MPI_Get).
Dynamic Process Management
In MPI-1, recall that process creation is static: all processes in the job are created when the job initializes, and the number of processes in the job never varies as execution progresses.
Dynamic Process Management
MPI-2 allows the creation of new processes from within the application, called spawning. To use it, it helps to understand intercommunicators: communicators that connect two distinct groups of processes.
Dynamic Process Creation
int MPI_Comm_spawn(char *command, char *argv[], int maxprocs, MPI_Info info, int root, MPI_Comm comm, MPI_Comm *intercomm, int errorcodes[])
Dynamic Process Creation
int MPI_Comm_get_parent(MPI_Comm *parent)
retrieves the parent intercommunicator of the calling process (the one created when the process was spawned)
Dynamic Process Creation
int MPI_Intercomm_merge(MPI_Comm intercomm, int high, MPI_Comm *new_intracomm)
merges the two groups of an intercommunicator into a single intracommunicator; high determines the relative ordering of the two groups
Dynamic Process Creation
...
MPI_Init(&argc, &argv);
makehostlist(argv[1], "targets", &num_hosts);
MPI_Info_create(&hostinfo);
MPI_Info_set(hostinfo, "file", "targets");
sprintf(soft_limit, "0:%d", num_hosts);
MPI_Info_set(hostinfo, "soft", soft_limit);
MPI_Comm_spawn("pcp_slave", MPI_ARGV_NULL, num_hosts, hostinfo, 0, MPI_COMM_SELF, &pcpslaves, MPI_ERRCODES_IGNORE);
MPI_Info_free(&hostinfo);
MPI_Intercomm_merge(pcpslaves, 0, &all_procs);
...
Dynamic Process Creation
... /* in the spawned process */
MPI_Init(&argc, &argv);
MPI_Comm_get_parent(&slavecomm);
MPI_Intercomm_merge(slavecomm, 1, &all_procs);
... /* all_procs can now be used like an intracommunicator */
Dynamic Process Creation - Multiple Executables
int MPI_Comm_spawn_multiple(int count, char *commands[], char **cmd_args[], int maxprocs[], MPI_Info info[], int root, MPI_Comm comm, MPI_Comm *intercomm, int errors[])
Dynamic Process Creation - multiple executables - sample
char *array_of_commands[2] = {"ocean", "atmos"};
char **array_of_argv[2];
char *argv0[] = {"-gridfile", "ocean1.grd", (char *)0};
char *argv1[] = {"atmos.grd", (char *)0};
array_of_argv[0] = argv0;
array_of_argv[1] = argv1;
MPI_Comm_spawn_multiple(2, array_of_commands, array_of_argv, ...);
from: http://www.epcc.ed.ac.uk/epcc-tec/document_archive/mpi-20-htm
So, What about MPI-2?
There is a lot of existing MPI-1 code, and MPI-1 meets many scientific and engineering computing needs. MPI-2 implementations are not as widespread as MPI-1's. MPI-2 is, at least in part, an experimental platform for research in parallel computing.
MPI-2... for more information: http://www.mpi-forum.org/docs/