Chapter 5

Nonblocking Communication
MPI_Send and MPI_Recv are blocking operations.
They will not return until the arguments to the functions can be safely modified by subsequent statements in the program.
MPI_Send: the message envelope has been created and the message has been sent, or the contents of the message have been copied into a system buffer.
MPI_Recv: the message has been received into the buffer specified by the buffer argument.
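To make these semantics concrete, here is a minimal blocking sketch (not from the slides; the ranks, tag, and payload value are arbitrary, and it assumes at least two processes). Once MPI_Send or MPI_Recv returns, buf is safe to reuse or read.

#include <stdio.h>
#include <mpi.h>

/* Minimal blocking exchange: after MPI_Send/MPI_Recv return, buf may be reused. */
int main(int argc, char **argv)
{
    int rank, buf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        buf = 42;
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        buf = 0;                                   /* safe: MPI_Send has already returned */
    } else if (rank == 1) {
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", buf);       /* buf now holds the message */
    }

    MPI_Finalize();
    return 0;
}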

Nonblocking Communication
With blocking calls, the resources available to the sending or receiving process are not being fully utilized.
The send operation should be able to proceed concurrently with some computation, as long as the computation doesn't modify any of the arguments to the send operation.
For the receiving process, if the data to be received is not yet available, the process should be able to continue with useful computation, as long as it doesn't interfere with the arguments to the receive.
Nonblocking communication is explicitly designed to meet these needs.

Nonblocking Communication
A call to a nonblocking send or receive simply starts, or posts, the communication operation.
It is then up to the user program to explicitly complete the communication at some later point in the program.
Any nonblocking operation therefore requires a minimum of two function calls: a call to start the operation and a call to complete the operation.
The basic functions in MPI for starting nonblocking communication are MPI_Isend and MPI_Irecv; the "I" stands for "immediate", since these calls return immediately. A skeleton of the post/complete pattern is sketched below.
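The fragment below sketches that two-call pattern, using MPI_Wait (covered later in this chapter) as the completion call. It is only an illustration: it assumes MPI_Init has already been called, that rank holds the result of MPI_Comm_rank, and that the tag and payload are arbitrary.

    /* Call 1: post (start) the operation. */
    MPI_Request request;
    int value = 0;

    if (rank == 0) {
        value = 99;
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);   /* start the send */
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);   /* start the receive */
    }

    /* ... other work that does not touch value ... */

    /* Call 2: explicitly complete the operation. */
    MPI_Wait(&request, MPI_STATUS_IGNORE);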

MPI_Isend and MPI_Irecv
Prototypes for the MPI_Isend and MPI_Irecv functions:

MPI_Isend(void* buffer, int count, MPI_Datatype datatype, int destination,
          int tag, MPI_Comm communicator, MPI_Request* request)

MPI_Irecv(void* buffer, int count, MPI_Datatype datatype, int source,
          int tag, MPI_Comm communicator, MPI_Request* request)

MPI_Isend and MPI_Irecv
The parameters they share with MPI_Send and MPI_Recv have the same meaning.
The semantics, however, are different: both calls only start the operation.
MPI_Isend: the system has been informed that it can start copying data out of the send buffer (either to a system buffer or to the destination).
MPI_Irecv: the system has been informed that it can start copying data into the receive buffer.
Neither the send nor the receive buffer should be modified until the operation is explicitly completed or canceled.
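The following is a minimal sketch of overlapping communication with computation under these rules (not from the slides; the placeholder loop, tag, and values are arbitrary, and it assumes at least two processes). The buffer msg is not touched between the post and the completion call.

#include <stdio.h>
#include <mpi.h>

/* Sketch: overlap communication with computation that never touches the message buffer. */
int main(int argc, char **argv)
{
    int rank, msg = 0;
    double local = 0.0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        msg = 7;
        MPI_Isend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);   /* post the send */
    } else if (rank == 1) {
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);   /* post the receive */
    } else {
        req = MPI_REQUEST_NULL;                 /* other ranks have nothing pending */
    }

    /* Useful work that neither reads nor writes msg, so it may proceed concurrently. */
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (i + 1);

    MPI_Wait(&req, MPI_STATUS_IGNORE);          /* only now is msg safe to touch again */
    if (rank == 1)
        printf("rank 1: received %d (local work = %f)\n", msg, local);

    MPI_Finalize();
    return 0;
}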

MPI_Isend and MPI_Irecv
The request parameter is a handle associated with an opaque object.
The object referenced by request is system defined and cannot be directly accessed by the user.
Its purpose is to identify the operation started by the nonblocking call.
It will contain information such as the source or destination, the tag, the communicator, and the buffer.
When the nonblocking operation is completed, the request initialized by the call to MPI_Isend or MPI_Irecv is used to identify the operation to be completed.
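Because the handle is all MPI needs to identify an operation, a program can keep several operations outstanding at once by keeping one request per operation. The fragment below is a sketch of this (it assumes MPI_Init has been called and that rank, left, and right have been set up as in Example 2 later in this chapter; the tag is arbitrary).

    MPI_Request send_req, recv_req;          /* one handle per pending operation */
    int outgoing = rank, incoming = -1;

    MPI_Isend(&outgoing, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &send_req);
    MPI_Irecv(&incoming, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &recv_req);

    MPI_Wait(&send_req, MPI_STATUS_IGNORE);  /* completes the send identified by send_req */
    MPI_Wait(&recv_req, MPI_STATUS_IGNORE);  /* completes the receive identified by recv_req */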

MPI_Wait
MPI provides a variety of functions for completing nonblocking operations. The simplest one is MPI_Wait, which can be used to complete any nonblocking operation.
Its prototype is:

MPI_Wait(MPI_Request* request, MPI_Status* status)

The request corresponds to the one returned by MPI_Isend or MPI_Irecv.
MPI_Wait blocks until the operation identified by request completes:
If it was a send, either the message has been sent or it has been buffered by the system.
If it was a receive, the message has been copied into the receive buffer.
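Below is a minimal sketch of completing both kinds of operation with MPI_Wait (not from the slides; it assumes at least two processes, and the tag and payload are arbitrary). The status fields MPI_SOURCE and MPI_TAG and the helper MPI_Get_count are standard MPI, though they are not discussed above.

#include <stdio.h>
#include <mpi.h>

/* Sketch: completing MPI_Isend/MPI_Irecv with MPI_Wait and inspecting the status. */
int main(int argc, char **argv)
{
    int rank;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload[4] = {1, 2, 3, 4};
        MPI_Isend(payload, 4, MPI_INT, 1, 5, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);             /* send completed: sent or buffered */
    } else if (rank == 1) {
        int data[4], count;
        MPI_Irecv(data, 4, MPI_INT, 0, 5, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);             /* blocks until the message is in data */

        MPI_Get_count(&status, MPI_INT, &count);
        printf("received %d ints from rank %d with tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}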

MPI_Wait
When MPI_Wait returns, request is set to MPI_REQUEST_NULL, meaning that there is no longer a pending operation associated with request.
If the call to MPI_Wait is used to complete an operation started by MPI_Irecv, the information returned in the status parameter is the same as the information returned in status by a call to MPI_Recv.
It is perfectly legal to match blocking operations with nonblocking operations: a message sent with MPI_Isend can be received by a call to MPI_Recv, as sketched below.
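A minimal sketch of that mixed pairing follows (not from the slides; the tag and value are arbitrary, and it assumes at least two processes). Note that the sender still has to complete its own request with MPI_Wait.

#include <stdio.h>
#include <mpi.h>

/* Sketch: a nonblocking send matched by a blocking receive. */
int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Request req;
        value = 2024;                                    /* arbitrary payload */
        MPI_Isend(&value, 1, MPI_INT, 1, 3, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);               /* sender completes its request */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 3, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d via blocking MPI_Recv\n", value);
    }

    MPI_Finalize();
    return 0;
}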

Example 1
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myid, nprocs;
    int buffer;
    MPI_Status status;
    MPI_Request request;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    request = MPI_REQUEST_NULL;

    if (myid == 0) {
        buffer = 1234;
        MPI_Isend(&buffer, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &request);
    }
    if (myid == 1) {
        MPI_Irecv(&buffer, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &request);
    }

    MPI_Wait(&request, &status);   /* waiting on MPI_REQUEST_NULL returns immediately */

    if (myid == 0) {
        printf("Processor %d sent %d\n", myid, buffer);
    }
    if (myid == 1) {
        printf("Processor %d got %d\n", myid, buffer);
    }

    MPI_Finalize();
    return 0;
}

Example 1 sample output:
Processor 0 sent 1234
Processor 1 got 1234

Example 2
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int my_rank, nprocs;
    int left, right;
    int received = -1;
    int tag = 1;
    MPI_Status statSend, statRecv;
    MPI_Request reqSend, reqRecv;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Ring topology: each process sends its rank to its left neighbor
       and receives from its right neighbor. */
    left  = (my_rank - 1 + nprocs) % nprocs;
    right = (my_rank + 1) % nprocs;

    MPI_Isend(&my_rank, 1, MPI_INT, left, tag, MPI_COMM_WORLD, &reqSend);
    MPI_Irecv(&received, 1, MPI_INT, right, tag, MPI_COMM_WORLD, &reqRecv);

    MPI_Wait(&reqSend, &statSend);
    MPI_Wait(&reqRecv, &statRecv);

    printf("Totally %d processors, processor %d received from right neighbor processor: %d\n",
           nprocs, my_rank, received);

    MPI_Finalize();
    return 0;
}

Example 2 sample output with 8 processes (the order of the lines varies between runs):
Totally 8 processors, processor 7 received from right neighbor processor: 0
Totally 8 processors, processor 6 received from right neighbor processor: 7
Totally 8 processors, processor 3 received from right neighbor processor: 4
Totally 8 processors, processor 5 received from right neighbor processor: 6
Totally 8 processors, processor 4 received from right neighbor processor: 5
Totally 8 processors, processor 0 received from right neighbor processor: 1
Totally 8 processors, processor 1 received from right neighbor processor: 2
Totally 8 processors, processor 2 received from right neighbor processor: 3