Message-Passing Computing Chapter 2

Programming a Multicomputer
1. Design a special parallel programming language, e.g. Occam.
2. Extend an existing language to handle message passing, e.g. HPF, Fortran M.
3. Use an existing language with message-passing libraries, e.g. MPI, PVM, P4.

Message-Passing Programming Using User-Level Message-Passing Libraries
Two primary mechanisms are needed:
1. A method of creating separate processes for execution on different computers
2. A method of sending and receiving messages

Multiple Program, Multiple Data (MPMD) Model
[Figure: a separate source file for each processor is compiled to suit that processor, producing a different executable on each of processor 0 through processor p-1.]

Single Program, Multiple Data (SPMD) Model
The basic MPI way. Different processes are merged into one program, and control statements select different parts for each processor to execute. All executables are started together (static process creation).
[Figure: the single source file is compiled to suit each processor, producing the executables run on processor 0 through processor p-1.]
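A minimal sketch of the SPMD style in C with MPI (names are illustrative): every process runs the same program, and the rank returned by MPI_Comm_rank selects the master or worker part.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank;
        MPI_Init(&argc, &argv);                 /* every copy of the program executes this */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
        if (rank == 0) {
            printf("master code executed by process 0\n");
        } else {
            printf("worker code executed by process %d\n", rank);
        }
        MPI_Finalize();
        return 0;
    }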

Multiple Program, Multiple Data (MPMD) Model
Separate programs for each processor. One processor executes the master process; other processes are started from within the master process (dynamic process creation).
[Figure: process 1 calls spawn() at some point in time, starting execution of process 2.]

Basic “point-to-point” Send and Receive Routines
Passing a message between processes using send() and recv() library calls (generic syntax; actual formats are given later):
[Figure: process 1 executes send(&x, 2) and process 2 executes recv(&y, 1); the data in x on process 1 moves into y on process 2.]
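As one concrete illustration (the real MPI forms appear later), the generic send(&x, 2)/recv(&y, 1) pair could be written as follows in C MPI, assuming x and y are single ints and tag 0 is used:

    int x, y;
    /* executed by process 1: send x to destination rank 2, tag 0 */
    MPI_Send(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
    /* executed by process 2: receive into y from source rank 1, tag 0 */
    MPI_Recv(&y, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);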

Synchronous Message Passing
Routines that return only when the message transfer has completed.
Synchronous send routine: waits until the complete message can be accepted by the receiving process before sending the message.
Synchronous receive routine: waits until the message it is expecting arrives.
Synchronous routines intrinsically perform two actions: they transfer data and they synchronize processes.
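In MPI the synchronous send is MPI_Ssend; a brief sketch, assuming an int x being sent to rank 2 with tag 0:

    /* does not complete until the matching receive has been posted */
    MPI_Ssend(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);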

Synchronous send() and recv() Using a Three-Way Protocol
[Figure (a): when send() occurs before recv(), process 1 sends a request-to-send message and suspends; when process 2 reaches its recv(), it returns an acknowledgment, the message is transferred, and both processes continue.]
[Figure (b): when recv() occurs before send(), process 2 suspends; when process 1 reaches its send(), the request-to-send and acknowledgment are exchanged, the message is transferred, and both processes continue.]

Asynchronous Message Passing
Routines that do not wait for actions to complete before returning. They usually require local storage for messages. More than one version exists, depending upon the actual semantics for returning. In general they do not synchronize processes but allow processes to move forward sooner. They must be used with care.

MPI Definitions of Blocking and Non-Blocking
Blocking: routines return after their local actions complete, though the message transfer may not have been completed.
Non-blocking: routines return immediately. It is assumed that the data storage used for the transfer is not modified by subsequent statements before it has been used for the transfer, and it is left to the programmer to ensure this.
These terms may have different interpretations in other systems.
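A sketch of the non-blocking form in MPI, assuming an int x sent to rank 2 with tag 0; MPI_Isend returns immediately and x must not be modified until MPI_Wait reports completion:

    MPI_Request req;
    MPI_Isend(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, &req);  /* returns immediately */
    /* ... other work that does not touch x ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);                      /* x may be reused after this */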

How Message-Passing Routines Can Return Before the Message Transfer Has Completed
A message buffer is needed between the source and destination to hold the message:
[Figure: process 1's send() places the message in a message buffer and the process continues; process 2's recv() later reads the message from the buffer.]

Asynchronous (Blocking) Routines Changing to Synchronous Routines
Once the local actions have completed and the message is safely on its way, the sending process can continue with subsequent work. Buffers are only of finite length, and a point can be reached where a send routine is held up because all available buffer space has been exhausted. Then the send routine waits until storage becomes available again, i.e., the routine then behaves as a synchronous routine.
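MPI also exposes buffering explicitly through its buffered send; a sketch illustrating the reliance on buffer space (this is the explicitly buffered mode, not the standard-mode send described above), assuming an int x sent to rank 2 with tag 0:

    char buf[1024 + MPI_BSEND_OVERHEAD];
    MPI_Buffer_attach(buf, sizeof(buf));              /* supply user buffer space */
    MPI_Bsend(&x, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);  /* copies x into the buffer and returns */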

Message Tag
Used to differentiate between different types of messages being sent. The message tag is carried within the message. If special type matching is not required, a wild-card message tag is used, so that the recv() will match with any send().

Message Tag Example
To send a message, x, with message tag 5 from source process 1 to destination process 2 and assign it to y:
[Figure: process 1 executes send(&x, 2, 5) and process 2 executes recv(&y, 1, 5); process 2 waits for a message from process 1 with a tag of 5, and the data in x moves into y.]
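In MPI the tag is an explicit argument; a sketch of the same exchange, assuming int data (MPI_ANY_TAG plays the role of the wild-card tag on the receive side):

    /* process 1: send x to rank 2 with tag 5 */
    MPI_Send(&x, 1, MPI_INT, 2, 5, MPI_COMM_WORLD);
    /* process 2: receive into y from rank 1, matching only tag 5 */
    MPI_Recv(&y, 1, MPI_INT, 1, 5, MPI_COMM_WORLD, MPI_STATUS_IGNORE);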

“Group” Message-Passing Routines
Routines that send message(s) to a group of processes or receive message(s) from a group of processes. They give higher efficiency than separate point-to-point routines, although they are not absolutely necessary.

Broadcast
Sending the same message to all processes concerned with the problem. Multicast: sending the same message to a defined group of processes.
[Figure: process 0 holds the data in buf; after each of processes 0 through p-1 calls bcast(), every process holds the data. Action and code (MPI form) are shown.]
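A sketch of the MPI form, assuming an int array data of 10 elements broadcast from root process 0:

    int data[10];
    /* every process calls MPI_Bcast; afterwards all ranks hold the root's data */
    MPI_Bcast(data, 10, MPI_INT, 0, MPI_COMM_WORLD);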

Scatter
Sending each element of an array in the root process to a separate process. The contents of the ith location of the array are sent to the ith process.
[Figure: process 0 holds the array in buf; after each of processes 0 through p-1 calls scatter(), each process holds its own portion of the data. Action and code (MPI form) are shown.]
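A sketch of the MPI form, assuming 4 processes and one int per process (counts are illustrative):

    int sendbuf[4];   /* significant only at the root */
    int recvval;
    /* process i receives element i of the root's array */
    MPI_Scatter(sendbuf, 1, MPI_INT, &recvval, 1, MPI_INT, 0, MPI_COMM_WORLD);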

Gather
Having one process collect individual values from a set of processes.
[Figure: each of processes 0 through p-1 contributes its data; after each process calls gather(), the values are collected into buf on process 0. Action and code (MPI form) are shown.]
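A sketch of the MPI form, assuming 4 processes each contributing one int to root process 0 (counts are illustrative):

    int myval;
    int recvbuf[4];   /* significant only at the root */
    MPI_Gather(&myval, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);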

Reduce
A gather operation combined with a specified arithmetic/logical operation. Example: values could be gathered and then added together by the root:
[Figure: each of processes 0 through p-1 contributes its data; after each process calls reduce(), the combined result (here a sum, indicated by +) appears in buf on process 0. Action and code (MPI form) are shown.]
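A sketch of the MPI form for the summation example, assuming one int per process and root process 0:

    int myval, sum;
    /* rank 0 receives the sum of myval over all processes */
    MPI_Reduce(&myval, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);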

PVM (Parallel Virtual Machine)
Perhaps the first widely adopted attempt at using a workstation cluster as a multicomputer platform, developed by Oak Ridge National Laboratory. Available at no charge. The programmer decomposes the problem into separate programs (usually a master and a group of identical slave programs). Programs are compiled to execute on specific types of computers. The set of computers used on a problem must be defined prior to executing the programs (in a hostfile).

Message routing between computers is done by PVM daemon processes installed by PVM on the computers that form the virtual machine. More than one application process (executable) can be running on each computer.
[Figure: each workstation runs a PVM daemon and one or more application programs (executables); messages between application programs on different workstations are sent through the network via the daemons.]
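A minimal sketch of the PVM 3 master side, assuming a slave executable named "slave" is available on the virtual machine (the name, task count, and tag are illustrative only):

    #include "pvm3.h"

    int main(void) {
        int tids[4], data = 99;
        pvm_spawn("slave", NULL, PvmTaskDefault, "", 4, tids);  /* dynamic process creation */
        pvm_initsend(PvmDataDefault);   /* initialize a send buffer */
        pvm_pkint(&data, 1, 1);         /* pack one int into the buffer */
        pvm_send(tids[0], 1);           /* send to the first slave with message tag 1 */
        pvm_exit();
        return 0;
    }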

MPI (Message Passing Interface)
A message-passing library standard developed by a group of academics and industrial partners to foster more widespread use and portability. It defines the routines, not the implementation. Several free implementations exist.

MPI Process Creation and Execution
Purposely not defined; it depends upon the implementation. Only static process creation is supported in MPI version 1: all processes must be defined prior to execution and started together. Originally an SPMD model of computation. MPMD is also possible with static creation; each program to be started together is specified.
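A sketch of the static, all-started-together model, assuming a typical launch such as mpirun -np 4 ./prog (the exact launcher and compile command are implementation-dependent):

    #include <mpi.h>
    #include <stdio.h>

    /* all copies of this single executable are started together,
       e.g. with something like: mpirun -np 4 ./prog */
    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes, fixed at launch */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }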