CS 591x Overview of MPI-2

Major Features of MPI-2
- Superset of MPI-1
- Parallel I/O (previously discussed)
- Standard process startup
- Dynamic process management
- Remote memory access

MPI-2
- MPI-1 includes no specification for a process executor; this is left to individual implementations, usually "mpirun"
- Even mpirun can vary across implementations: options, parameters, and keywords can be different

MPI-2
- MPI-2 includes a recommendation for a standard method to start MPI processes: the result is mpiexec
- mpiexec arguments and parameters have standard meanings; standard = portable

mpiexec arguments
-n [numprocesses]: the number of processes requested (like -n in mpirun). Example: mpiexec -n 12 myprog
-soft [minprocesses]: start the job with minprocesses processes if -n processes are not available. Example: mpiexec -n 12 -soft 6 myprog

mpiexec arguments
-soft [n:m]: a soft request can also be a range. Example: mpiexec -n 12 -soft 4:12 myprog
-host [hostname]: requests execution on a specific host. Example: mpiexec -n 4 -host node4 myprog
-arch [archname]: start the job on a specific architecture

mpiexec arguments
-file [filename]: requests that the job run per the specifications contained in filename. Example: mpiexec -file specfile
This supports the execution of multiple executables.
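The contents of such a spec file are implementation dependent, so the following is only a hypothetical sketch (the executable names are illustrative); conceptually, each line carries the mpiexec arguments for one executable:

    -n 4 -host node4 ocean
    -n 8 -soft 2:8 atmos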

Remote Memory Access
Recall that in MPI-1, message passing is essentially a push operation: the sender has to initiate the communication, or actively participate in the communication operation (as in collective communications). Communication is symmetrical.

Remote Memory Access
How would you handle a situation where process x decides that it needs the value of variable a in process y... and process y does not initiate a communication operation?

Remote Memory Access
MPI-2 has the answer: Remote Memory Access allows a process to initiate and carry out an asymmetrical communication operation, assuming the processes have set up the appropriate objects, called windows.

Remote Memory Access
int MPI_Win_create(
    void      *var,
    MPI_Aint   size,
    int        disp_unit,
    MPI_Info   info,
    MPI_Comm   comm,
    MPI_Win   *win)

Remote Memory Access
- var: the variable (memory) to appear in the window
- size: the size of var, in bytes
- disp_unit: the displacement unit, in bytes
- info: key-value pairs expressing "hints" to MPI-2 on how to do the Win_create
- comm: the communicator whose processes can share the window
- win: the name of the resulting window object
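As a minimal sketch (the array buf and the window object win are hypothetical names, not from the slides), a process can expose a local integer array to the other processes in MPI_COMM_WORLD; setting disp_unit to sizeof(int) lets target displacements be counted in elements:

    int buf[100];
    MPI_Win win;
    /* collective call: every process in the communicator exposes its own buf */
    MPI_Win_create(buf,               /* var: base address of the exposed memory */
                   100 * sizeof(int), /* size of the window, in bytes            */
                   sizeof(int),       /* disp_unit: displacements count in ints  */
                   MPI_INFO_NULL,     /* no hints                                */
                   MPI_COMM_WORLD,    /* communicator sharing the window         */
                   &win);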

Remote Memory Access
int MPI_Win_fence(
    int      assert,
    MPI_Win  win)
- assert: usually 0
- win: the name of the window
MPI_Win_fence is a collective, barrier-like synchronization that opens and closes RMA access epochs; calls such as MPI_Get are made between a pair of fences.

Remote Memory Access
int MPI_Get(
    void         *var,            /* local (origin) buffer to receive the data    */
    int           count,
    MPI_Datatype  datatype,
    int           target_rank,    /* rank whose window is read                    */
    MPI_Aint      displacement,   /* offset into the target window, in disp_units */
    int           target_count,
    MPI_Datatype  target_datatype,
    MPI_Win       win)

Remote Memory Access
int MPI_Win_free(
    MPI_Win  *win)
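As a usage sketch (nwin is the window object used in the example that follows), MPI_Win_free is called collectively by every process in the window's communicator once RMA communication is finished:

    /* collective: all processes that created the window release it together */
    MPI_Win_free(&nwin);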

Remote Memory Access
int MPI_Accumulate(
    void         *var,
    int           count,
    MPI_Datatype  datatype,
    int           target_rank,
    MPI_Aint      displace,
    int           target_count,
    MPI_Datatype  target_datatype,
    MPI_Op        operation,
    MPI_Win       win)
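The slides do not show MPI_Accumulate in action; as a minimal sketch (localsum is a hypothetical variable, and nwin is assumed to expose an int on rank 0 as in the example below), each non-root process could add its contribution into rank 0's window:

    MPI_Win_fence(0, nwin);
    if (myrank != 0)
        /* add localsum into the integer at displacement 0 of rank 0's window */
        MPI_Accumulate(&localsum, 1, MPI_INT,
                       0,           /* target rank          */
                       0,           /* displacement         */
                       1, MPI_INT,
                       MPI_SUM,     /* combine by summation */
                       nwin);
    MPI_Win_fence(0, nwin);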

Remote Memory Access
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
    /* rank 0 exposes its integer n in the window */
    MPI_Win_create(&n, sizeof(int), 1, MPI_INFO_NULL,
                   MPI_COMM_WORLD, &nwin);
} else {
    /* the other ranks attach no memory to the window */
    MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL,
                   MPI_COMM_WORLD, &nwin);
}
...

Remote Memory Access
MPI_Win_fence(0, nwin);
if (myrank != 0)
    /* every rank other than 0 reads n out of rank 0's window */
    MPI_Get(&n, 1, MPI_INT, 0, 0, 1, MPI_INT, nwin);
MPI_Win_fence(0, nwin);

Remote Memory Access
By the way, there is also an MPI_Put.
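MPI_Put is the mirror image of MPI_Get: it writes local data into another process's window. As a minimal sketch reusing n and nwin from the example above, rank 1 could push its value of n into rank 0's window:

    MPI_Win_fence(0, nwin);
    if (myrank == 1)
        /* write the local n into displacement 0 of rank 0's window */
        MPI_Put(&n, 1, MPI_INT, 0, 0, 1, MPI_INT, nwin);
    MPI_Win_fence(0, nwin);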

Dynamic Process Management
Recall that in MPI-1 process creation is static: all processes in the job are created when the job initializes, and the number of processes never varies as execution progresses.

Dynamic Process Management
MPI-2 allows the creation of new processes from within the application, called spawning. To follow the interface, it helps to understand intercommunicators (intercomms).

Dynamic Process Creation
int MPI_Comm_spawn(
    char      *command,
    char      *argv[],
    int        maxprocs,
    MPI_Info   info,
    int        root,
    MPI_Comm   comm,          /* intracommunicator of the spawning processes       */
    MPI_Comm  *intercomm,     /* intercommunicator connecting parents and children */
    int        errorcodes[]);

Dynamic Process Creation
int MPI_Comm_get_parent(
    MPI_Comm  *parent);
Called in a spawned process, it retrieves the intercommunicator connecting it to its parent processes.

Dynamic Process Creation
int MPI_Intercomm_merge(
    MPI_Comm   intercomm,
    int        high,            /* ordering hint: the "high" group is ranked last */
    MPI_Comm  *new_intracomm)

Dynamic Process Creation
...
MPI_Init(&argc, &argv);
/* makehostlist is a helper defined elsewhere in the example; it writes the host list to the file "targets" */
makehostlist(argv[1], "targets", &num_hosts);
MPI_Info_create(&hostinfo);
MPI_Info_set(hostinfo, "file", "targets");
sprintf(soft_limit, "0:%d", num_hosts);
MPI_Info_set(hostinfo, "soft", soft_limit);
/* spawn up to num_hosts copies of pcp_slave */
MPI_Comm_spawn("pcp_slave", MPI_ARGV_NULL, num_hosts, hostinfo,
               0, MPI_COMM_SELF, &pcpslaves, MPI_ERRCODES_IGNORE);
MPI_Info_free(&hostinfo);
/* merge parents and children into one intracommunicator */
MPI_Intercomm_merge(pcpslaves, 0, &all_procs);
...

Dynamic Process Creation
... /* in the spawned process */
MPI_Init(&argc, &argv);
MPI_Comm_get_parent(&slavecomm);
/* high = 1 here and 0 in the parent, so the parent is ranked first in the merged communicator */
MPI_Intercomm_merge(slavecomm, 1, &all_procs);
... /* all_procs can now be used like any intracommunicator */

Dynamic Process Creation – Multiple Executables
int MPI_Comm_spawn_multiple(
    int        count,
    char      *commands[],
    char     **cmd_args[],
    int        maxprocs[],
    MPI_Info   info[],
    int        root,
    MPI_Comm   comm,
    MPI_Comm  *intercomm,
    int        errors[])

Dynamic Process Creation – Multiple Executables – sample
char *array_of_commands[2] = {"ocean", "atmos"};
char **array_of_argv[2];
char *argv0[] = {"-gridfile", "ocean1.grd", (char *)0};
char *argv1[] = {"atmos.grd", (char *)0};
array_of_argv[0] = argv0;
array_of_argv[1] = argv1;
MPI_Comm_spawn_multiple(2, array_of_commands, array_of_argv, ...);
from:
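The sample elides the remaining arguments of the call; as a hypothetical sketch (the maxprocs and infos arrays and the process counts are illustrative, not from the original), a complete call could look like:

    int maxprocs[2] = {4, 4};                           /* illustrative process counts    */
    MPI_Info infos[2] = {MPI_INFO_NULL, MPI_INFO_NULL}; /* no hints for either executable */
    MPI_Comm intercomm;
    MPI_Comm_spawn_multiple(2, array_of_commands, array_of_argv,
                            maxprocs, infos, 0, MPI_COMM_WORLD,
                            &intercomm, MPI_ERRCODES_IGNORE);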

So, What about MPI-2?
- There is a lot of existing code written to MPI-1
- MPI-1 meets many scientific and engineering computing needs
- MPI-2 implementations are not as widespread as MPI-1 implementations
- MPI-2 is, at least in part, an experimental platform for research in parallel computing

MPI-2: for more information