Message Passing Libraries


Message Passing Libraries
Abdelghani Bellaachia
Computer Science Department
George Washington University
Washington, DC 20052

Objectives
Large scientific applications routinely scale to 100's of processors, and in rare cases to 1000's of processors:
Climate/ocean modeling
Molecular physics (QCD, dynamics, materials, ...)
Computational fluid dynamics
And many more ...
To create such concurrent and parallel applications, a message-passing library is used to explicitly tell each process what to do and to provide a mechanism for transferring data between processes.

Message Passing Approach
Large parallel programs need well-defined mechanisms to coordinate and exchange information.
Communication is accomplished by message passing.
Message passing allows two processes to:
Exchange information
Synchronize with each other

Hello World – MP Style

Process A:
  Initialize
  Send(B, "Hello World")
  Recv(B, String)
  Print String            -> prints "Hi There"
  Finalize

Process B:
  Initialize
  Recv(A, String)
  Print String            -> prints "Hello World"
  Send(A, "Hi There")
  Finalize

PVM
Runs on a variety of Unix machines, over local and wide-area networks or a combination of both.
The application decides where and when its components are executed.
The application determines its own control and dependency structure.
Applications can be written in C, Fortran, and Java.
Components:
PVM daemon process (pvmd3)
Library interface routines for Fortran and C: libpvm3.a, libfpvm3.a, libgpvm3.a
(In MPI, by contrast, PVM-style explicit packing and unpacking of data is generally avoided by defining an MPI datatype.)

PVM daemon (pvmd3)
A process that oversees the operation of user processes within a PVM application and coordinates inter-machine PVM communications.
One daemon runs on each machine configured into your parallel virtual machine.
"Master (local) - remote" control scheme for daemons.
Each daemon maintains a table of configuration and handles information relative to your parallel virtual machine.
Processes communicate with each other through the daemons:
They talk to their local daemon via the library interface routines.
The local daemon then sends/receives messages to/from remote host daemons.

PVM Libraries
libpvm3.a: library of C-language interface routines
libfpvm3.a: additional library for Fortran codes
libgpvm3.a: required for use with dynamic groups (tasks can join or leave user-defined groups at any time)

Typical Subroutine Calls
Initiate and terminate processes
Pack, send, and receive messages
Synchronize via barriers
Query and dynamically change the configuration of the parallel virtual machine

Application Requirements
Network-connected computers
PVM daemon: built for each architecture and installed on each machine
PVM libraries: built and installed for each architecture
Your application-specific files:
Program components
PVM hostfile (defines which physical machines comprise your parallel virtual machine)
Other libraries required by your program

Process Creation
In C: numt = pvm_spawn (task, argv, flag, where, ntask, tids)
where:
task = character name of the executable file
argv = pointer to arguments for the executable
flag = integer specifying spawning options; flag = PvmTaskDefault lets PVM choose the processors on which to start the processes
where = used to choose processors; not used if flag is set to PvmTaskDefault
ntask = number of copies of the executable to start
tids = array of integer task IDs returned by the routine
numt = number of processes actually started; a value < 0 indicates an error
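To make the call concrete, here is a minimal sketch of a master task spawning workers; the executable name "worker" and the worker count are illustrative assumptions, not part of the slides.

#include <stdio.h>
#include "pvm3.h"

#define NWORKERS 4                    /* assumed number of workers */

int main(void)
{
    int tids[NWORKERS];               /* task IDs filled in by pvm_spawn */
    int numt;

    /* Let PVM choose the hosts; "worker" is an assumed executable name */
    numt = pvm_spawn("worker", (char **)0, PvmTaskDefault, "", NWORKERS, tids);
    if (numt < NWORKERS)
        fprintf(stderr, "only %d of %d workers started\n", numt, NWORKERS);

    pvm_exit();
    return 0;
}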

Process Termination
Tells the pvmd that the process is leaving PVM.
In C: int info = pvm_exit ()
where info is the integer status code returned from the routine (a value < 0 indicates an error).

Sending a Message
Sending a message takes three steps:
Initialize a buffer to use for sending
Pack various types of data into that buffer
Send the buffer contents to a designated location or locations

Initialize a buffer to use for sending: clears the send buffer and prepares it for packing a new message.
In C: int bufid = pvm_initsend (int encoding)
where encoding is the encoding scheme name; PvmDataDefault is used for data conversion between different architectures.

Sending a Message
Pack various types of data into the buffer: pvm_pkdatatype (datapointer, numberofitems, stride)
where:
int info = pvm_pkint    ( int *ip,    int nitem, int stride )
int info = pvm_pkbyte   ( char *xp,   int nitem, int stride )
int info = pvm_pkcplx   ( float *cp,  int nitem, int stride )
int info = pvm_pkdouble ( double *db, int nitem, int stride )
int info = pvm_pkfloat  ( float *fp,  int nitem, int stride )
int info = pvm_pklong   ( long *ip,   int nitem, int stride )
int info = pvm_pkshort  ( short *jp,  int nitem, int stride )
int info = pvm_pkstr    ( char *sp )

Sending a Message
Send the buffer contents to a designated location or locations.
In C: int info = pvm_send (int tid, int msgtag)
where
tid = integer task ID of the receiving process
msgtag = message tag (can use 1 if you do not care about message tags)
info = integer status code returned from the routine (a value < 0 indicates an error)
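Putting the three steps together, a short sketch of sending an integer array to another task; the destination tid, the tag value, and the payload are assumptions made for illustration.

#include "pvm3.h"

/* dest_tid is assumed to come from an earlier pvm_spawn() call */
void send_work(int dest_tid)
{
    int data[4] = {1, 2, 3, 4};       /* illustrative payload */
    int msgtag = 1;                   /* arbitrary message tag */

    pvm_initsend(PvmDataDefault);     /* 1. clear and prepare the send buffer */
    pvm_pkint(data, 4, 1);            /* 2. pack 4 ints with stride 1 */
    pvm_send(dest_tid, msgtag);       /* 3. ship the buffer to dest_tid */
}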

Receiving a Message
When receiving a message there are two options:
Blocking - wait for a message (pvm_recv)
Non-blocking - get a message only if one is pending, otherwise continue on (pvm_nrecv)
Only blocking receives are discussed here.
Receiving takes two steps:
Receive a message into a buffer
Unpack the buffer into variables

Receiving a Message
Receive a message into a buffer.
In C: int bufid = pvm_recv (int tid, int msgtag)
where
tid = integer task ID of the process that sent the message
msgtag = message tag (can use 1 if you do not care about message tags)
bufid = integer identifier of the new active receive buffer; a value < 0 indicates an error

Receiving a Message
Unpack the buffer, in the same order in which the message was packed.
In C: int info = pvm_upkdatatype (datapointer, numberofitems, stride)
(see the pack routines for variable definitions)
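The matching receive side, as a sketch that unpacks in the same order the data was packed; using -1 for tid accepts the message from any sender, and the tag is the same assumed constant as in the send sketch above.

#include "pvm3.h"

void recv_work(void)
{
    int data[4];
    int msgtag = 1;                   /* must match the sender's tag */

    pvm_recv(-1, msgtag);             /* block until a matching message arrives */
    pvm_upkint(data, 4, 1);           /* unpack in the same order it was packed */
}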

PVM Collective Communications
pvm_bcast: asynchronously broadcasts the data in the active send buffer to a group of processes. The broadcast message is not sent back to the sender.
pvm_gather: a specified member receives messages from each member of the group and gathers these messages into a single array. All group members must call pvm_gather().
pvm_scatter: performs a scatter of data from the specified root to each member of the group, including itself. All group members must call pvm_scatter(); each receives a portion of the data array from the root in its local result array.
pvm_reduce: performs a reduce operation over members of the group. All group members call it with their local data, and the result of the reduction appears on the root. Users can define their own reduction functions or use the predefined PVM reductions.
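A hedged sketch of one of these group operations: every task joins a dynamic group, waits at a barrier, and the chosen root broadcasts an integer to the rest. The group name "workers", the group size, and the tag are assumptions; group calls require linking against libgpvm3.a.

#include "pvm3.h"

#define GROUPSIZE 4                        /* assumed number of group members */

void broadcast_value(int root)
{
    int value = 42;                        /* illustrative data */
    int inst = pvm_joingroup("workers");   /* join (or create) the dynamic group */

    pvm_barrier("workers", GROUPSIZE);     /* wait until all members have joined */
    if (inst == root) {
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&value, 1, 1);
        pvm_bcast("workers", 99);          /* broadcast; not delivered back to the sender */
    } else {
        pvm_recv(-1, 99);
        pvm_upkint(&value, 1, 1);
    }
    pvm_lvgroup("workers");                /* leave the group */
}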

Example: Master-Slave Architecture
Writing a simple master-slave code:
The master code sends the message "HELLO WORLD!" to the workers.
The worker code receives the message and prints it.

#include <stdio.h>
#include "pvm3.h"

#define NTASKS 6
#define HELLO_MSGTYPE 1

char helloworld[13] = "HELLO WORLD!";

main()
{
  int mytid, tids[NTASKS], i, msgtype, rc, bufid;

  for (i=0; i<NTASKS; i++)
    tids[i] = 0;

  printf("Enrolling master task in PVM...\n");
  mytid = pvm_mytid();
  bufid = pvm_catchout(stdout);

  printf("Spawning worker tasks ...\n");
  for (i=0; i<NTASKS; i++) {
    rc = pvm_spawn("hello.worker", NULL, PvmTaskDefault, "", 1, &tids[i]);
    printf(" spawned worker task id = %8x\n", tids[i]);
  }

  printf("Sending message to all worker tasks...\n");
  msgtype = HELLO_MSGTYPE;
  rc = pvm_initsend(PvmDataDefault);
  rc = pvm_pkstr(helloworld);
  for (i=0; i<NTASKS; i++)
    rc = pvm_send(tids[i], msgtype);

  printf("All done. Leaving hello.master.\n");
  rc = pvm_exit();
}

MS Architecture: Slave

#include <stdio.h>
#include "pvm3.h"

#define HELLO_MSGTYPE 1

main()
{
  int mytid, msgtype, rc;
  char helloworld[13];

  mytid = pvm_mytid();
  msgtype = HELLO_MSGTYPE;
  rc = pvm_recv(-1, msgtype);
  rc = pvm_upkstr(helloworld);
  printf(" ***Reply from spawned process: %s : \n", helloworld);
  rc = pvm_exit();
}
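As a rough usage note, assuming PVM is installed under $PVM_ROOT, each program would typically be compiled with something along the lines of cc -I$PVM_ROOT/include hello_worker.c -L$PVM_ROOT/lib/$PVM_ARCH -lpvm3 -o hello.worker, and the hello.worker binary placed in $HOME/pvm3/bin/$PVM_ARCH (or another location named in the hostfile) so that pvm_spawn can find it; the exact file names and paths here are illustrative.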

PVM Hostfile
A file containing a list of hostnames that defines your parallel virtual machine.
Hostnames are listed one per line.
Several options are available for customization: userid, password, location of pvmd, paths to executables, etc.
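As a sketch, a hostfile might look like the following; the hostnames and paths are placeholders, and lo= (alternate login), ep= (path to your executables), and dx= (location of the pvmd daemon) are the customization options mentioned above.

# one host per line
node1.example.edu
node2.example.edu   lo=myuser  ep=$HOME/pvm3/bin/$PVM_ARCH
node3.example.edu   dx=/usr/local/pvm3/lib/pvmd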

XPVM: A Graphical Console and Monitor for PVM

MPI
MPI: Message Passing Interface Standard.
MPI is a library specification for message passing, proposed as a standard by a broadly based committee of vendors, implementors, and users.
MPI was designed for high performance on both massively parallel machines and workstation clusters.
As in all message-passing systems, MPI provides a means of synchronizing processes, for example by stopping each one until all have reached a specific "barrier" call.

MPI APIs
It is a standard message-passing API.
Specifies many variants of send/recv:
9 send interface calls, e.g., synchronous send, asynchronous send, ready send, asynchronous ready send
Plus other defined APIs:
Process topologies
Group operations
Derived datatypes
Implemented and optimized by machine vendors.
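A minimal MPI sketch of the earlier "Hello World - MP Style" exchange, assuming two processes in MPI_COMM_WORLD; the tag, buffer size, and strings are arbitrary choices.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    char buf[32];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                               /* plays the role of Process A */
        strcpy(buf, "Hello World");
        MPI_Send(buf, 32, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, 32, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        printf("%s\n", buf);                       /* prints "Hi There" */
    } else if (rank == 1) {                        /* plays the role of Process B */
        MPI_Recv(buf, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        printf("%s\n", buf);                       /* prints "Hello World" */
        strcpy(buf, "Hi There");
        MPI_Send(buf, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}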

Collective Communications
The principal collective operations operating upon data are:
MPI_Bcast() - broadcast from root to all other processes
MPI_Gather() - gather values for a group of processes
MPI_Scatter() - scatters a buffer in parts to a group of processes
MPI_Alltoall() - sends data from all processes to all processes
MPI_Reduce() - combine values on all processes to a single value
MPI_Reduce_scatter() - combine values and scatter the results
MPI_Scan() - compute prefix reductions of data on processes
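As a hedged illustration of two of these routines working together, the root broadcasts a parameter and then collects a sum of per-rank partial results; the parameter value and the local computation are made up for the example.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, n = 0, local, global_sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;                                   /* parameter chosen by the root */

    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* every rank receives n from rank 0 */

    local = rank * n;                              /* illustrative partial result */

    /* combine the partial results onto the root with a sum reduction */
    MPI_Reduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d\n", global_sum);

    MPI_Finalize();
    return 0;
}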

References
PVM:
Web site: www.epm.ornl.gov/pvm/pvm_home.html
A Java implementation (JPVM): http://www.cs.virginia.edu/~ajf2j/jpvm.html
Book: A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam, PVM: Parallel Virtual Machine - A Users' Guide and Tutorial for Networked Parallel Computing (www.netlib.org/pvm3/book/pvm-book.html)
MPI:
Web site: http://www-unix.mcs.anl.gov/mpi
A Java MPI (mpiJava): http://aspen.ucs.indiana.edu/pss/HPJava/mpiJava.html
Book: M. Snir et al., MPI - The Complete Reference, Volume 1: The MPI Core, 2nd edition, MIT Press, 1999.
Two freely available MPI libraries that you can download and install:
LAM - http://www.lam-mpi.org
MPICH - http://www-unix.mcs.anl.gov/mpi/mpich