MPI Point to Point Communication CDP

Message Passing Definitions

Application buffer
- Holds the data for the send or receive
- Handled by the user

System buffer
- Space for storing messages, depending upon the type of send/receive operation
- Allows communication to be asynchronous
- Handled by the system
- For MPI_Send, the MPI standard allows the implementation to use a system buffer but does not require it

[Figure: data path between the application buffer, the MPI library (mpi.h), and the system buffer]

Communication modes

Standard send
- The basic send operation, used to transmit data from one process to another

Synchronous send
- Blocks until the corresponding receive has actually occurred on the destination process

Ready send
- May be used only when a matching receive has already been posted; otherwise, an error occurs

Buffered send
- The programmer allocates a buffer in which the data are placed until they can be sent

Standard receive
- The basic receive operation, used to accept data sent from another process
- May be used to receive data from any of the possible send operations
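A minimal sketch of how the four blocking send modes are called: the argument lists are identical, only the completion semantics differ. The destination rank, tags, and data are placeholders; each call still needs a matching receive on the other side, the ready send is only correct if that receive is already posted, and the buffered send additionally requires an attached buffer.

#include <mpi.h>

void send_modes_sketch(int dest, MPI_Comm comm)
{
    double data[100] = {0};

    MPI_Send (data, 100, MPI_DOUBLE, dest, 0, comm);  /* standard    */
    MPI_Ssend(data, 100, MPI_DOUBLE, dest, 1, comm);  /* synchronous */
    MPI_Rsend(data, 100, MPI_DOUBLE, dest, 2, comm);  /* ready: matching receive must already be posted */
    MPI_Bsend(data, 100, MPI_DOUBLE, dest, 3, comm);  /* buffered: requires a buffer attached with MPI_Buffer_attach */
}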

Blocking communication

A communication routine is blocking if the completion of the call depends on certain "events".
- Sends: the data must be successfully sent or safely copied to system buffer space, so that the application buffer that contained the data is available for reuse
- Receives: the data must be safely stored in the receive buffer, so that it is ready for use

This is MPI's way to avoid a data race between the application and the MPI library.
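A minimal sketch of a blocking exchange, assuming the program runs with at least two processes: when MPI_Send returns, the send buffer may be reused; when MPI_Recv returns, the received data are ready for use.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* value may be reused after return */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);                /* data is ready for use */
    }

    MPI_Finalize();
    return 0;
}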

Blocking Synchronous Send

Blocks until the corresponding receive has actually occurred on the destination process.
Most of the waiting on the sending end can be eliminated if MPI_Recv is posted before MPI_Ssend is called.

Blocking Ready Send

May be used only when a matching receive has already been posted; otherwise, an error occurs.
By default, the program aborts if MPI_Rsend is called before notification of the posted receive arrives (no error code is returned).

Blocking Buffered Send

The programmer allocates a buffer in which the data are placed until they can be sent.
The timing of MPI_Recv is irrelevant: MPI_Bsend returns as soon as the data have been copied from the application buffer into the attached buffer.
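A sketch of the buffered-send sequence, sizing the attached buffer with MPI_Pack_size plus MPI_BSEND_OVERHEAD; the helper function itself is illustrative.

#include <mpi.h>
#include <stdlib.h>

void buffered_send(double *data, int count, int dest, MPI_Comm comm)
{
    int size;
    void *buf;

    /* Room for one message of 'count' doubles plus MPI's per-message overhead */
    MPI_Pack_size(count, MPI_DOUBLE, comm, &size);
    size += MPI_BSEND_OVERHEAD;
    buf = malloc(size);

    MPI_Buffer_attach(buf, size);
    MPI_Bsend(data, count, MPI_DOUBLE, dest, 0, comm);  /* returns after the local copy */
    MPI_Buffer_detach(&buf, &size);                     /* blocks until the message has left the buffer */
    free(buf);
}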

Blocking Standard Send

The basic send operation, used to transmit data from one process to another.
Its behavior depends on the message size relative to an implementation-defined threshold: small messages are typically copied into a system buffer and the call returns quickly, while a message whose size exceeds the threshold does not complete until the receiver is ready to take the data.

Deadlock

Both processes send a message to each other; neither will ever reach its corresponding MPI_Recv.
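A sketch of the classic symmetric deadlock, made explicit here with synchronous sends (with standard sends the outcome would depend on message size and buffering):

#include <mpi.h>

void deadlock(int rank, MPI_Comm comm)
{
    int other = (rank == 0) ? 1 : 0;
    int out = rank, in = -1;
    MPI_Status status;

    MPI_Ssend(&out, 1, MPI_INT, other, 0, comm);           /* both ranks block here forever */
    MPI_Recv (&in,  1, MPI_INT, other, 0, comm, &status);  /* never reached */
}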

Avoiding deadlock
- Different ordering of calls between tasks (see the sketch below)
- Buffered mode
- Non-blocking calls (later on…)
- Other communication functions, for example MPI_Sendrecv (not in this course)
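A sketch of the first option: break the symmetry by having even and odd ranks order their calls differently. The buffers and rank arguments are placeholders.

#include <mpi.h>

void exchange(int rank, int other, double *out, double *in, int n, MPI_Comm comm)
{
    MPI_Status status;

    if (rank % 2 == 0) {        /* even ranks: send first, then receive */
        MPI_Send(out, n, MPI_DOUBLE, other, 0, comm);
        MPI_Recv(in,  n, MPI_DOUBLE, other, 0, comm, &status);
    } else {                    /* odd ranks: receive first, then send */
        MPI_Recv(in,  n, MPI_DOUBLE, other, 0, comm, &status);
        MPI_Send(out, n, MPI_DOUBLE, other, 0, comm);
    }
}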

Conclusions: Modes

Synchronous mode
- "Safest"
- Most portable

Ready mode
- Lowest total overhead

Buffered mode
- Decouples sender from receiver
- Allows user control

Standard mode
- Implementation-specific compromise

Non-blocking communication
- The call returns immediately
- It is unsafe to use the buffers before actual completion
- The status of the call is checked via dedicated functions (later on…)
- Allows overlapping of computation and communication

Non-blocking standard send / Non-blocking receive

int MPI_Isend(…, MPI_Request *request)
int MPI_Irecv(…, MPI_Request *request)
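For reference, the full prototypes that the slide abbreviates; the leading arguments are the same as for MPI_Send/MPI_Recv (the const on the send buffer was added in MPI-3):

#include <mpi.h>

int MPI_Isend(const void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm, MPI_Request *request);

int MPI_Irecv(void *buf, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm, MPI_Request *request);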

Conclusions: Non-blocking calls

Gains
- Avoid deadlock
- Decrease synchronization overhead

- Post non-blocking sends and receives as early as possible, and call wait as late as possible (see the sketch below)
- Must avoid writing to the send buffer between MPI_Isend and MPI_Wait
- Must avoid reading and writing the receive buffer between MPI_Irecv and MPI_Wait
- Be careful with local variables (on the stack) that might go out of scope before they have been written to or read from!
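A sketch of that pattern, assuming a hypothetical halo exchange between 'left' and 'right' neighbor ranks: the non-blocking calls are posted early, unrelated computation overlaps the transfer, and the waits come last.

#include <mpi.h>

void halo_exchange(double *send_buf, double *recv_buf, int n,
                   int left, int right, MPI_Comm comm)
{
    MPI_Request send_req, recv_req;

    MPI_Irecv(recv_buf, n, MPI_DOUBLE, left,  0, comm, &recv_req);  /* post as early as possible */
    MPI_Isend(send_buf, n, MPI_DOUBLE, right, 0, comm, &send_req);

    /* ... computation that touches neither send_buf nor recv_buf ... */

    MPI_Wait(&send_req, MPI_STATUS_IGNORE);  /* only now may send_buf be reused */
    MPI_Wait(&recv_req, MPI_STATUS_IGNORE);  /* only now may recv_buf be read   */
}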

Information About a Non-Blocking MPI Call

MPI_Wait
- int MPI_Wait(MPI_Request *request, MPI_Status *status)
- Blocks until the communication is completed
- Useful for both sender and receiver of non-blocking communications

MPI_Test
- int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
- Non-blocking check of the status of the communication
- Useful for both sender and receiver of non-blocking communications
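A sketch of polling with MPI_Test: keep doing useful work until the pending request completes (the work itself is left as a placeholder).

#include <mpi.h>

void poll_until_done(MPI_Request *request)
{
    int done = 0;
    MPI_Status status;

    while (!done) {
        MPI_Test(request, &done, &status);  /* returns immediately; done != 0 once complete */
        if (!done) {
            /* ... do some computation while the message is still in flight ... */
        }
    }
}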

Probing for Messages

MPI_Probe
- int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
- The receiver is notified when messages arrive and are ready to be processed
- From potentially any sender (MPI_ANY_SOURCE), with potentially any tag (MPI_ANY_TAG)
- The actual size of the message can be read from the status struct, and a buffer allocated accordingly
- This is a blocking call; alternatively, we can use MPI_Iprobe
- Lets us wait for any message and then receive it into a dedicated buffer, as opposed to receiving the messages in a pre-defined order
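A sketch of that pattern, assuming integer payloads: probe for a message of unknown size from any source, query its length, allocate a buffer, then receive it.

#include <mpi.h>
#include <stdlib.h>

void receive_unknown_size(MPI_Comm comm)
{
    MPI_Status status;
    int count;
    int *buf;

    MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &status);  /* blocks until some message arrives   */
    MPI_Get_count(&status, MPI_INT, &count);                /* actual number of ints in the message */
    buf = malloc(count * sizeof(int));

    MPI_Recv(buf, count, MPI_INT, status.MPI_SOURCE, status.MPI_TAG, comm, &status);
    /* ... process buf ... */
    free(buf);
}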

Information About Messages

MPI_Status contains information about the received message
- int MPI_Recv/Wait/Test/Probe/Iprobe(…, MPI_Status *status)
- status.MPI_SOURCE is the actual source of the message
- status.MPI_TAG is the actual tag of the message
- status.MPI_ERROR is the error code associated with the message

int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
- Returns the number of received elements in a message

Circumstances in which it makes sense to check the status
- A blocking receive or wait, when MPI_ANY_TAG or MPI_ANY_SOURCE has been used
- MPI_Probe or MPI_Iprobe, to obtain information about incoming messages
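A sketch of the first circumstance: a wildcard blocking receive whose status is then consulted for the actual source, tag, and element count (the count passed to MPI_Recv is only an upper bound).

#include <mpi.h>
#include <stdio.h>

void receive_any(double *buf, int max_count, MPI_Comm comm)
{
    MPI_Status status;
    int received;

    MPI_Recv(buf, max_count, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &status);
    MPI_Get_count(&status, MPI_DOUBLE, &received);

    printf("got %d doubles from rank %d with tag %d\n",
           received, status.MPI_SOURCE, status.MPI_TAG);
}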

Null Process

Each process sends a message to its neighbor, so it must check whether it is at the edge of the domain:

  for (every partition)
      if (right-neighbor is not the edge) send to right-neighbor;   // or recv/test/wait
      if (left-neighbor is not the edge)  send to left-neighbor;

Instead, we can define the missing neighbor to be MPI_PROC_NULL.
An MPI call with MPI_PROC_NULL as destination/source returns immediately without any side effect.
The code is simplified, and the check is moved inside the MPI call (see the sketch below):

  for (every partition)
      send to right-neighbor;
      send to left-neighbor;
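A sketch in C of a one-dimensional decomposition that uses MPI_PROC_NULL as the missing neighbor of the boundary ranks; the buffer names and the right-shift pattern are illustrative.

#include <mpi.h>

void shift_right(double *send_buf, double *recv_buf, int n, MPI_Comm comm)
{
    int rank, size, left, right;
    MPI_Status status;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* Calls with MPI_PROC_NULL return immediately and have no effect,
       so no explicit edge checks are needed in the user code */
    MPI_Send(send_buf, n, MPI_DOUBLE, right, 0, comm);
    MPI_Recv(recv_buf, n, MPI_DOUBLE, left,  0, comm, &status);
}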

Programming recommendations

Start with non-blocking calls
- Don't start with blocking calls when you know you will switch later; the transition might not be trivial

Blocking calls
- Use them if you want tasks to synchronize
- Use them if a wait would immediately follow the communication call

Start with synchronous mode, then switch to standard mode
- Evaluate performance and analyze the code
- If non-blocking receives are posted early, consider ready mode
- If there is too much synchronization overhead on sends, switch to buffered mode

Avoid deadlock by intelligent arrangement of sends/receives, or by posting non-blocking receives early.
Intelligent use of wildcards can greatly simplify logic and code.
Null processes move tests out of user code.