CS 591x Overview of MPI-2
Major Features of MPI-2
- Superset of MPI-1
- Parallel I/O (previously discussed)
- Standard process startup
- Dynamic process management
- Remote memory access
MPI-2
- MPI-1 includes no specification for a process launcher
- Left to individual implementations, usually "mpirun"
- Even mpirun can vary across implementations: options, parameters, and keywords can differ
MPI-2
- MPI-2 includes a recommendation for a standard method to start MPI processes
- The result: mpiexec
- mpiexec arguments and parameters have standard meanings
- standard = portable
mpiexec arguments
- -n [numprocesses]: number of processes requested (like -n in mpirun)
  mpiexec -n 12 myprog
- -soft [minprocesses]: start the job with as few as minprocesses processes if -n processes are not available
  mpiexec -n 12 -soft 6 myprog
mpiexec arguments
- -soft [n:m]: a soft request can be a range
  mpiexec -n 12 -soft 4:12 myprog
- -host [hostname]: requests execution on a specific host
  mpiexec -n 4 -host node4 myprog
- -arch [archname]: start the job on a specific architecture
mpiexec arguments
- -file [filename]: requests that the job run per the specifications contained in filename
  mpiexec -file specfile
- supports the execution of multiple executables
Remote Memory Access
- Recall that in MPI-1, message passing is essentially a push operation
- The sender has to initiate the communication, or actively participate in the communication operation (collective communications)
- Communication is symmetrical
Remote Memory Access
- How would you handle a situation where:
- process x decides that it needs the value in variable a in process y...
- ...and process y does not initiate any communication operation?
Remote Memory Access
- MPI-2 has the answer: Remote Memory Access
- Allows a process to initiate and carry out an asymmetrical communication operation...
- ...assuming the processes have set up the appropriate objects: windows
Remote Memory Access

int MPI_Win_create(
    void        *var,
    MPI_Aint     size,
    int          disp_unit,
    MPI_Info     info,
    MPI_Comm     comm,
    MPI_Win     *win)
Remote Memory Access
- var: the variable (base address) to appear in the window
- size: the size of var, in bytes
- disp_unit: displacement unit, in bytes
- info: key-value pairs to express "hints" to MPI-2 on how to do the Win_create
- comm: the communicator whose processes can share the window
- win: the name of the window object
Remote Memory Access

int MPI_Win_fence(
    int     assert,
    MPI_Win win)

- assert: usually 0
- win: the name of the window
Remote Memory Access

int MPI_Get(
    void        *var,
    int          count,
    MPI_Datatype datatype,
    int          target_rank,
    MPI_Aint     displacement,
    int          target_count,
    MPI_Datatype target_datatype,
    MPI_Win      win)
Remote Memory Access

int MPI_Win_free(
    MPI_Win *win)
Remote Memory Access

int MPI_Accumulate(
    void        *var,
    int          count,
    MPI_Datatype datatype,
    int          target_rank,
    MPI_Aint     displacement,
    int          target_count,
    MPI_Datatype target_datatype,
    MPI_Op       operation,
    MPI_Win      win)
Remote Memory Access

MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
    MPI_Win_create(&n, sizeof(int), 1, MPI_INFO_NULL,
                   MPI_COMM_WORLD, &nwin);
} else {
    MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL,
                   MPI_COMM_WORLD, &nwin);
}
...
Remote Memory Access

MPI_Win_fence(0, nwin);
if (myrank != 0)
    MPI_Get(&n, 1, MPI_INT, 0, 0, 1, MPI_INT, nwin);
MPI_Win_fence(0, nwin);
Remote Memory Access
- BTW: there is an MPI_Put also
Dynamic Process Management
- In MPI-1, recall that:
- process creation is static
- all processes in the job are created when the job initializes
- the number of processes in the job never varies as job execution progresses
Dynamic Process Management
- MPI-2 allows the creation of new processes within the application, called spawning
- It helps to understand intercommunicators
Dynamic Process Creation

int MPI_Comm_spawn(
    char     *command,
    char     *argv[],
    int       maxprocs,
    MPI_Info  info,
    int       root,
    MPI_Comm  comm,
    MPI_Comm *intercomm,
    int       errorcodes[])
Dynamic Process Creation

int MPI_Comm_get_parent(
    MPI_Comm *parent)

- retrieves the calling process's parent communicator
Dynamic Process Creation

int MPI_Intercomm_merge(
    MPI_Comm  intercomm,
    int       high,
    MPI_Comm *new_intracomm)
Dynamic Process Creation

...
MPI_Init(&argc, &argv);
makehostlist(argv[1], "targets", &num_hosts);
MPI_Info_create(&hostinfo);
MPI_Info_set(hostinfo, "file", "targets");
sprintf(soft_limit, "0:%d", num_hosts);
MPI_Info_set(hostinfo, "soft", soft_limit);
MPI_Comm_spawn("pcp_slave", MPI_ARGV_NULL, num_hosts,
               hostinfo, 0, MPI_COMM_SELF, &pcpslaves,
               MPI_ERRORCODES_IGNORE);
MPI_Info_free(&hostinfo);
MPI_Intercomm_merge(pcpslaves, 0, &all_procs);
...
Dynamic Process Creation

...
/* in spawned process */
MPI_Init(&argc, &argv);
MPI_Comm_get_parent(&slavecomm);
MPI_Intercomm_merge(slavecomm, 1, &all_procs);
...
/* now like an intracomm... */
Dynamic Process Creation – Multiple Executables

int MPI_Comm_spawn_multiple(
    int       count,
    char     *commands[],
    char    **cmd_args[],
    int       maxprocs[],
    MPI_Info  info[],
    int       root,
    MPI_Comm  comm,
    MPI_Comm *intercomm,
    int       errors[])
Dynamic Process Creation – Multiple Executables: sample

char *array_of_commands[2] = {"ocean", "atmos"};
char **array_of_argv[2];
char *argv0[] = {"-gridfile", "ocean1.grd", (char *)0};
char *argv1[] = {"atmos.grd", (char *)0};
array_of_argv[0] = argv0;
array_of_argv[1] = argv1;
MPI_Comm_spawn_multiple(2, array_of_commands, array_of_argv, ...);
So, What About MPI-2?
- A lot of existing code is written in MPI-1
- MPI-1 meets a lot of scientific and engineering computing needs
- MPI-2 implementations are not as widespread as MPI-1's
- MPI-2 is, at least in part, an experimental platform for research in parallel computing
MPI-2: for more information