Point-to-Point Communication

Kinds of Communication: Blocking and Nonblocking Communications

Kinds of Communication
- Blocking
  The sender does not return from the MPI call until the message buffer (the user's container for the message) can be reused, i.e., the message has been sent.
  The receiver does not return until the receive buffer contains the complete message.

Kinds of Communication
- Non-blocking
  A nonblocking send start call initiates the send operation and returns at once. A separate send complete call is needed to complete the communication, i.e., to verify that the data has been copied out of the send buffer.
  A nonblocking receive start call initiates the receive operation and returns at once. A separate receive complete call is needed to complete the receive operation and verify that the data has been received into the receive buffer.
  Other MPI procedures test or wait for the completion of sends and receives.
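To make the distinction concrete, here is a minimal sketch (not from the original slides; dest and tag are assumed to hold a valid rank and tag, and MPI is assumed to be initialized) contrasting the two styles:

    int buf[4] = {0, 1, 2, 3};
    MPI_Request req;
    MPI_Status status;

    /* Blocking: when MPI_Send returns, buf may be reused immediately */
    MPI_Send(buf, 4, MPI_INT, dest, tag, MPI_COMM_WORLD);

    /* Nonblocking: MPI_Isend returns at once; buf must not be touched
       until MPI_Wait (or a successful MPI_Test) completes the request */
    MPI_Isend(buf, 4, MPI_INT, dest, tag, MPI_COMM_WORLD, &req);
    /* ... useful computation can overlap the transfer here ... */
    MPI_Wait(&req, &status);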

Communication Modes
- Standard
- Buffered
- Synchronous
- Ready

Communication Modes
- Modes are determined by the name of the MPI send procedure used, e.g. MPI_BSEND specifies a buffered send.
- Standard (no letter)
  It is up to MPI to decide whether outgoing messages will be buffered.
  MPI may buffer outgoing messages, in which case the send call may complete before a matching receive is invoked.
  MPI may also choose not to buffer outgoing messages, for performance reasons, in which case the send call will not complete until a matching receive has been posted.
  Non-local operation: another process may have to do something before this operation completes, e.g. successful completion of the send operation may depend on the occurrence of a matching receive.

Communication Modes
- Buffered (letter B)
  A buffered send can be started whether or not a matching receive has been posted, and it may complete before a matching receive is posted.
  MPI must therefore buffer the outgoing message, so as to allow the send call to complete.
  Local operation: another process does not have to do anything before this operation completes, e.g. a buffered mode send may complete before a matching receive is posted.

Communication Modes
- Synchronous (letter S)
  A synchronous send can be started whether or not a matching receive was posted.
  It completes successfully only if a matching receive is posted and the receive operation has started to receive the message.
  Thus, the completion of a synchronous send not only indicates that the send buffer can be reused, but also indicates that the receiver has reached a certain point in its execution.
  Non-local operation.

Communication Modes
- Ready (letter R)
  A ready send may be started only if the matching receive has already been posted; otherwise, the operation is erroneous and its outcome is undefined.
  The completion of the send operation does not depend on the status of a matching receive, and merely indicates that the send buffer can be reused.
  Non-local operation.

Communication Modes
- A possible communication protocol for the various communication modes is outlined below.
  Ready send: the message is sent as soon as possible.
  Synchronous send: the sender sends a request-to-send message; the receiver stores this request. When a matching receive is posted, the receiver sends back a permission-to-send message, and the sender then sends the message.
  Standard send: the first protocol may be used for short messages, and the second protocol for long messages.
  Buffered send: the sender copies the message into a buffer and then sends it with a nonblocking send (using the same protocol as for standard send).

Graphical Representation of the Implementation Models
[Figure: sender and receiver user data; send buffer used, no receive buffer used]

Graphical Representation of the Implementation Models
[Figure: sender and receiver user data; send buffer used, receive buffer used]

Graphical Representation of the Implementation Models
[Figure: sender and receiver user data; no send buffer used, no receive buffer used]

Graphical Representation of the Implementation Models
[Figure: sender and receiver user data; no send buffer used, receive buffer used]

Point-to-Point Communication: Blocking Functions

Blocking Functions – MPI_SEND
- Standard send
- This routine may block until the message is received
- C
  int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
  Input Parameters
  - buf: initial address of send buffer (choice)
  - count: number of elements in send buffer (nonnegative integer)
  - datatype: datatype of each send buffer element (handle)
  - dest: rank of destination (integer)
  - tag: message tag (integer)
  - comm: communicator (handle)
- Fortran
  MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

Blocking Functions – MPI_RECV
- Basic receive
- MPI_ANY_SOURCE receives from any source in the communicator
- MPI_ANY_TAG accepts any incoming message tag
- C
  int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
  Input Parameters
  - count: maximum number of elements in receive buffer (integer)
  - datatype: datatype of each receive buffer element (handle)
  - source: rank of source (integer)
  - tag: message tag (integer)
  - comm: communicator (handle)
  Output Parameters
  - buf: initial address of receive buffer (choice)
  - status: status object (Status); contains information on the data that was actually received, e.g. MPI_SOURCE, MPI_TAG, MPI_ERROR, and other information not directly accessible to the programmer

Blocking Functions – MPI_RECV
- Fortran
  MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR

Blocking Functions – MPI_BSEND
- Send in buffered mode
- All parameters are the same as MPI_SEND
- C
  int MPI_Bsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
- Fortran
  MPI_BSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

Blocking Functions – MPI_SSEND
- Send in synchronous mode
- All parameters are the same as MPI_SEND
- C
  int MPI_Ssend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
- Fortran
  MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

Blocking Functions – MPI_RSEND
- Send in ready mode
- All parameters are the same as MPI_SEND
- C
  int MPI_Rsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
- Fortran
  MPI_RSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

Blocking Functions – MPI_SENDRECV
- The blocking send-receive operation combines in one call the sending of a message to one destination and the receiving of another message, from another process
- Very useful for executing a shift operation across a chain of processes
- C
  int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void *recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status *status)

Blocking Functions – MPI_SENDRECV
Input Parameters
- sendbuf: initial address of send buffer (choice)
- sendcount: number of elements in send buffer (integer)
- sendtype: type of elements in send buffer (handle)
- dest: rank of destination (integer)
- sendtag: send tag (integer)
- recvcount: number of elements in receive buffer (integer)
- recvtype: type of elements in receive buffer (handle)
- source: rank of source (integer)
- recvtag: receive tag (integer)
- comm: communicator (handle)

Blocking Functions – MPI_SENDRECV
Output Parameters
- recvbuf: initial address of receive buffer (choice)
- status: status object (Status)
- Fortran
  MPI_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR)
  <type> SENDBUF(*), RECVBUF(*)
  INTEGER SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR

Blocking Functions – MPI_PROBE
- Blocking test for a message. Does not "receive" the message; a subsequent MPI_Recv() will receive it.
- MPI_PROBE behaves like MPI_IPROBE except that it is a blocking call that returns only after a matching message has been found.
- C
  int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
  Input Parameters
  - source: source rank, or MPI_ANY_SOURCE (integer)
  - tag: tag value, or MPI_ANY_TAG (integer)
  - comm: communicator (handle)
  Output Parameter
  - status: status object (Status)
- Fortran
  MPI_PROBE(SOURCE, TAG, COMM, STATUS, IERROR)
  INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR

Blocking Functions – MPI_BUFFER_ATTACH
- Used with MPI_BSEND
- Provides MPI with a buffer in the user's memory to be used for buffering outgoing messages
- Only one buffer can be attached to a process at a time
- C
  int MPI_Buffer_attach(void* buffer, int size)
  Input Parameters
  - buffer: initial buffer address (choice)
  - size: buffer size, in bytes (integer)
- Fortran
  MPI_BUFFER_ATTACH(BUFFER, SIZE, IERROR)
  <type> BUFFER(*)
  INTEGER SIZE, IERROR

Blocking Functions – MPI_BUFFER_DETACH
- Used with MPI_BSEND
- Detaches the buffer currently associated with MPI
- C
  int MPI_Buffer_detach(void* buffer_addr, int* size)
  Output Parameters
  - buffer_addr: initial buffer address (choice)
  - size: buffer size, in bytes (integer)
- Fortran
  MPI_BUFFER_DETACH(BUFFER_ADDR, SIZE, IERROR)
  <type> BUFFER_ADDR(*)
  INTEGER SIZE, IERROR

MPI_GET_COUNT
- Returns the number of entries received. (We count entries, each of type datatype, not bytes.)
- C
  int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
  Input Parameters
  - status: return status of receive operation (Status)
  - datatype: datatype of each receive buffer entry (handle)
  Output Parameter
  - count: number of received entries (integer)
- Fortran
  MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
  INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR

MPI_SEND 1
- The root node sends a message to process 1; process 1 sends it back to process 0.
- MPI Functions Used
  MPI_Init
  MPI_Comm_rank
  MPI_Comm_size
  MPI_Finalize
  MPI_Send
  MPI_Recv
  MPI_Get_count

MPI_SEND 1 (C)

/*
 * The root node sends a message to process 1; process 1 sends it back to process 0.
 */
#include <stdio.h>          /* for input/output */
#include <mpi.h>            /* for MPI routines */
#define BUFSIZE 64          /* the size of the message being passed */

int main(int argc, char** argv)
{
    int my_rank;            /* the rank of this process */
    int n_processes;        /* the total number of processes */
    char buf[BUFSIZE];      /* a buffer for the message */
    int tag = 0;            /* not important here */
    int count;
    MPI_Status status;

    MPI_Init(&argc, &argv);                         /* initialize MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &n_processes);    /* get # of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);        /* get my rank */

    if (my_rank == 0)
    {
        /* send to the next node */
        printf("Hello world! I am %d of %d, sending to Proc %d\n", my_rank, n_processes, my_rank+1);
        MPI_Send(buf, BUFSIZE, MPI_CHAR, my_rank+1, tag, MPI_COMM_WORLD);

MPI_SEND 1 (C)

        /* receive from the last node */
        MPI_Recv(buf, BUFSIZE, MPI_CHAR, n_processes-1, tag, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_CHAR, &count);
        printf("Hello world! I am %d of %d, recv %d entries from Proc 1\n", my_rank, n_processes, count);
    }
    else
    {
        /* receive from Proc 0 */
        MPI_Recv(buf, BUFSIZE, MPI_CHAR, 0, tag, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_CHAR, &count);
        printf("Hello world! I am %d of %d, recv %d entries from Proc 0\n", my_rank, n_processes, count);
        /* send back to Proc 0 */
        printf("Hello world! I am %d of %d, sending to Proc 0\n", my_rank, n_processes);
        MPI_Send(buf, BUFSIZE, MPI_CHAR, 0, tag, MPI_COMM_WORLD);
    }

    MPI_Finalize();         /* finalize MPI */
    return 0;
}

MPI_SEND 1 (Fortran)

C
C     The root node sends a message to process 1; process 1 sends it back to process 0.
C
      program main
      include 'mpif.h'
      integer BUFSIZE
      parameter (BUFSIZE = 64)
      integer my_rank, n_processes, tag
      integer ierr
      integer count, size
      integer status(MPI_STATUS_SIZE)
      double precision buf(BUFSIZE)

      size = BUFSIZE
      tag = 2001
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, n_processes, ierr)

      if (my_rank .eq. 0) then
C        send to the next node
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ', sending to Proc ', my_rank+1

MPI_SEND 1 (Fortran)

         call MPI_SEND(buf, size, MPI_DOUBLE_PRECISION, my_rank+1,
     1   tag, MPI_COMM_WORLD, ierr)
C        receive from the last node
         call MPI_RECV(buf, size, MPI_DOUBLE_PRECISION, n_processes-1,
     1   tag, MPI_COMM_WORLD, status, ierr)
         call MPI_GET_COUNT(status, MPI_DOUBLE_PRECISION, count, ierr)
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ', recv ', count, ' entries from Proc 1'
      else
C        receive from the previous node
         call MPI_RECV(buf, size, MPI_DOUBLE_PRECISION, my_rank-1, tag,
     1   MPI_COMM_WORLD, status, ierr)
         call MPI_GET_COUNT(status, MPI_DOUBLE_PRECISION, count, ierr)
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ', recv ', count, ' entries from Proc 0'
C        send back to Proc 0
         print *, 'Hello world! I am ', my_rank, ' of ', n_processes,
     1   ', sending to Proc 0'
         call MPI_SEND(buf, size, MPI_DOUBLE_PRECISION, 0, tag,
     1   MPI_COMM_WORLD, ierr)
      endif

      call MPI_FINALIZE(ierr)
      end


MPI_BSEND 1
- Using MPI_Bsend to send a single message in buffered mode
- A buffered mode send operation can be started whether or not a matching receive has been posted
- In this program, the MPI_Bsend completes before a matching receive is posted
- MPI Functions Used
  MPI_Init
  MPI_Comm_rank
  MPI_Finalize
  MPI_Bsend
  MPI_Recv
  MPI_Buffer_attach
  MPI_Buffer_detach

MPI_BSEND 1 (C)

/*
 * Use MPI_Bsend to send a single message in buffered mode.
 * Note that a buffered mode send operation can be started
 * whether or not a matching receive has been posted;
 * in this case, the MPI_Bsend completes before a matching receive is posted.
 */
#include <stdio.h>      /* for input/output */
#include <stdlib.h>     /* for malloc/free */
#include <unistd.h>     /* for sleep */
#include <mpi.h>        /* for MPI routines */
#define BUFSIZE 2048    /* the size of the message being passed */

int main(int argc, char** argv)
{
    int rank;               /* the rank of this process */
    int tag = 0;
    int bufsize, abufsize;
    float *buf, *abuf, *message;
    MPI_Status status;

    bufsize = BUFSIZE * sizeof(float) + MPI_BSEND_OVERHEAD;
    buf = (float *)malloc(bufsize);
    message = (float *)malloc(sizeof(float) * BUFSIZE);

MPI_BSEND 1 (C)

    MPI_Init(&argc, &argv);                     /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* get my rank */

    if (rank == 0)
    {
        printf("Hello world! I am proc 0, sending to proc 1..\n");
        MPI_Buffer_attach(buf, bufsize);
        /* a buffer can now be used by MPI_Bsend */
        /* send to proc 1 */
        MPI_Bsend(message, BUFSIZE, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
        MPI_Buffer_detach(&abuf, &abufsize);
        /* buffer size reduced to zero */
        free(abuf);
    }
    else if (rank == 1)
    {
        /* sleep for 3 sec */
        sleep(3);
        printf("Hello world! I am proc 1, just woke up!!\n");
        /* receive from proc 0 */
        MPI_Recv(message, BUFSIZE, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &status);
        printf("Hello world! Received 1 message from proc 0!!\n");
    }
    printf("Proc %d finished!!\n", rank);
    MPI_Finalize();
    return 0;
}

MPI_BSEND 1 (Fortran)

      program main
      include 'mpif.h'
      integer BUFSIZE
      parameter (BUFSIZE = 2048)
      integer ierr, rank
      integer tag, status(MPI_STATUS_SIZE)
      integer size, messagesize
      real message(BUFSIZE)
C     the attach buffer must be large enough for the message plus overhead
      real BUFFER(BUFSIZE + MPI_BSEND_OVERHEAD)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      tag = 0
      messagesize = BUFSIZE

      if (rank == 0) then
         print *, 'Hello world! I am proc ', rank,
     1   ', sending to proc 1..'
         size = BUFSIZE * 4 + MPI_BSEND_OVERHEAD
         call MPI_BUFFER_ATTACH(BUFFER, size, ierr)

MPI_BSEND 1 (Fortran)

C
C        The count argument in MPI_BSEND must be an integer variable;
C        passing a parameter constant, e.g. BUFSIZE, can cause an error
C
         call MPI_BSEND(message, messagesize, MPI_REAL,
     1   1, tag, MPI_COMM_WORLD, ierr)
         call MPI_BUFFER_DETACH(BUFFER, size, ierr)
      else if (rank == 1) then
         call SLEEP(3)
         print *, 'Hello world! I am proc ', rank,
     1   ', just wake up!!'
         call MPI_RECV(message, BUFSIZE, MPI_REAL,
     1   0, tag, MPI_COMM_WORLD, status, ierr)
         print *, 'Hello world! Received 1 message from ',
     1   'proc 0!!'
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

MPI_BSEND 2
- Using MPI_Bsend to send several messages in buffered mode
- The total sum of memory (buffer) must be allocated first, otherwise it will produce an error
- MPI Functions Used
  MPI_Init
  MPI_Comm_rank
  MPI_Finalize
  MPI_Bsend
  MPI_Recv
  MPI_Buffer_attach
  MPI_Buffer_detach

MPI_BSEND 2 (C)

/*
 * Use MPI_Bsend to send several messages in buffered mode.
 * Note that the total sum of memory (buffer) must be allocated first,
 * otherwise it will produce an error.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mpi.h>
#define M 3     /* the number of messages to send */

int main(int argc, char** argv)
{
    int n, i;
    int rank;
    int size;
    int *buf;
    int *abuf;
    int blen;
    int ablen;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (size != 2) {
        if (rank == 0) {
            printf("Error: 2 processes required\n");
            fflush(stdout);
        }
        MPI_Abort(MPI_COMM_WORLD, MPI_ERR_OTHER);
    }

MPI_BSEND 2 (C)

    if (rank == 0) {
        blen = M * (sizeof(int) + MPI_BSEND_OVERHEAD);
        buf = (int*) malloc(blen);
        MPI_Buffer_attach(buf, blen);
        printf("attached %d bytes\n", blen);
        fflush(stdout);
        for (i = 0; i < M; i++) {
            printf("starting send %d...\n", i);
            fflush(stdout);
            n = i;
            MPI_Bsend(&n, 1, MPI_INT, 1, i, MPI_COMM_WORLD);
            printf("complete send %d\n", i);
            fflush(stdout);
            sleep(1);
        }
        MPI_Buffer_detach(&abuf, &ablen);
        printf("detached %d bytes\n", ablen);
        free(abuf);
    } else {
        for (i = M - 1; i >= 0; i--) {
            printf("starting recv %d...\n", i);
            fflush(stdout);
            MPI_Recv(&n, M, MPI_INT, 0, i, MPI_COMM_WORLD, &status);
            printf("complete recv: %d. received %d\n", i, n);
            fflush(stdout);
        }
    }

    MPI_Finalize();
    return 0;
}

MPI_BSEND 2 (Fortran)

      program main
      include 'mpif.h'
      integer M
      parameter (M = 3)
      integer ierr, rank
      integer status(MPI_STATUS_SIZE)
      integer size
C     attach buffer, generously sized for M integers plus overhead
      integer buf(M * (1 + MPI_BSEND_OVERHEAD))
      integer n, i
      integer blen

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      if (size .ne. 2) then
         if (rank .eq. 0) then
            print *, 'Error: 2 processes required'
         endif
         call MPI_ABORT(MPI_COMM_WORLD, MPI_ERR_OTHER, ierr)
      endif

MPI_BSEND 2 (Fortran)

      if (rank == 0) then
         blen = M * (4 + MPI_BSEND_OVERHEAD)
         call MPI_BUFFER_ATTACH(buf, blen, ierr)
         print *, 'attached ', blen, ' bytes'
         do i = 0, M - 1
            print *, 'starting send ', i
            n = i
            call MPI_BSEND(n, 1, MPI_INTEGER,
     1      1, i, MPI_COMM_WORLD, ierr)
            print *, 'complete send ', i
            call SLEEP(1)
         enddo
         call MPI_BUFFER_DETACH(buf, blen, ierr)
         print *, 'detached ', blen, ' bytes'
      else if (rank == 1) then
         do i = M - 1, 0, -1
            print *, 'starting recv...', i
            call MPI_RECV(n, M, MPI_INTEGER,
     1      0, i, MPI_COMM_WORLD, status, ierr)
            print *, 'complete recv: ', i, ' received ', n
         enddo
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

MPI_SENDRECV
- All processes send a message to the next process simultaneously, and then receive a message from the previous process
- MPI Functions Used
  MPI_Init
  MPI_Comm_rank
  MPI_Comm_size
  MPI_Sendrecv
  MPI_Finalize

MPI_SENDRECV (C)

/*
 * Illustrate the usage of MPI_Sendrecv.
 * All processes send a message to the next process simultaneously,
 * and then receive a message from the previous process.
 */
#include <stdio.h>      /* for input/output */
#include <mpi.h>        /* for MPI routines */

int main(int argc, char** argv)
{
    int rank;           /* the rank of this process */
    int tag = 0;        /* not important here */
    int nproc;
    int sendbuf[1], recvbuf[1];
    int dest;           /* the destination rank */
    MPI_Status status;

MPI_SENDRECV (C)

    MPI_Init(&argc, &argv);                     /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* get my rank */
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);      /* get total no. of processes */

    dest = (rank + 1) % nproc;
    printf("Proc %d: Sending message to %d...\n", rank, dest);
    if (rank == 0)
    {
        MPI_Sendrecv(sendbuf, 1, MPI_INT, dest, tag, recvbuf, 1, MPI_INT, nproc - 1, tag, MPI_COMM_WORLD, &status);
    }
    else
    {
        MPI_Sendrecv(sendbuf, 1, MPI_INT, dest, tag, recvbuf, 1, MPI_INT, rank - 1, tag, MPI_COMM_WORLD, &status);
    }
    printf("Proc %d: Receive message from %d...\n", rank, status.MPI_SOURCE);

    MPI_Finalize();     /* finalize MPI */
    return 0;
}

MPI_SENDRECV (Fortran)

C
C     Illustrate the usage of MPI_SENDRECV.
C     All processes send a message to the next process simultaneously,
C     and then receive a message from the previous process.
C
      program main
      include 'mpif.h'
      integer rank, tag
      integer ierr, nproc, dest
      integer sendbuf(1), recvbuf(1)
      integer status(MPI_STATUS_SIZE)

      tag = 0
      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)

MPI_SENDRECV (Fortran)

      dest = MOD((rank + 1), nproc)
      print *, 'Proc ', rank, ': Sending message to ', dest, '...'
      IF (rank .EQ. 0) THEN
         CALL MPI_SENDRECV(sendbuf, 1, MPI_INTEGER, dest, tag,
     1   recvbuf, 1, MPI_INTEGER, nproc - 1, tag,
     1   MPI_COMM_WORLD, status, ierr)
      ELSE
         CALL MPI_SENDRECV(sendbuf, 1, MPI_INTEGER, dest, tag,
     1   recvbuf, 1, MPI_INTEGER, rank - 1, tag,
     1   MPI_COMM_WORLD, status, ierr)
      END IF
      print *, 'Proc ', rank, ': Receive message from ',
     1   status(MPI_SOURCE), '...'

      call MPI_FINALIZE(ierr)
      end

MPI_PROBE
- Processes 0 and 1 send messages to process 2 separately; process 2 calls MPI_PROBE to learn whether a message has arrived and, if so, calls MPI_RECV to receive it
- MPI Functions Used
  MPI_Init
  MPI_Comm_rank
  MPI_Finalize
  MPI_Send
  MPI_Recv
  MPI_Probe

MPI_PROBE (C)

/*
 * Proc 0 and proc 1 send to proc 2 separately;
 * proc 2 calls MPI_Probe to learn whether a message has arrived,
 * and if so calls MPI_Recv to receive it.
 * Compile: mpicc mpi_probe01.c -o mpi_probe01
 * Run: mpirun -np 3 mpi_probe01
 */
#include <stdio.h>      /* for input/output */
#include <unistd.h>     /* for sleep */
#include <mpi.h>        /* for MPI routines */

int main(int argc, char** argv)
{
    int rank;           /* the rank of this process */
    int tag = 0;        /* not important here */
    int n, i[1];
    float x[1];
    MPI_Status status;

    MPI_Init(&argc, &argv);                 /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* get my rank */

    if (rank == 0)
    {
        printf("Proc 0: Sleep for 5 sec before send...\n");
        sleep(5);
        printf("Proc 0: Sending message...\n");
        MPI_Send(i, 1, MPI_INT, 2, tag, MPI_COMM_WORLD);
    }

MPI_PROBE (C)

    else if (rank == 1)
    {
        printf("Proc 1: Sleep for 3 sec before send...\n");
        sleep(3);
        printf("Proc 1: Sending message...\n");
        MPI_Send(x, 1, MPI_FLOAT, 2, tag, MPI_COMM_WORLD);
    }
    else    /* rank == 2 */
    {
        for (n = 1; n <= 2; n++)
        {
            printf("Proc 2: Wait for message to come...\n");
            MPI_Probe(MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &status);
            printf("Proc 2: New message...\n");
            if (status.MPI_SOURCE == 0)
            {
                MPI_Recv(i, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
            }
            else
            {
                MPI_Recv(x, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &status);
            }
            printf("Proc 2: Received message from Proc %d successfully!!\n", status.MPI_SOURCE);
        }
    }

    MPI_Finalize();     /* finalize MPI */
    return 0;
}

MPI_PROBE (Fortran)

C
C     Proc 0 and proc 1 send to proc 2 separately;
C     proc 2 calls MPI_PROBE to learn whether a message has arrived,
C     and if so calls MPI_RECV to receive it.
C     Compile: mpif77 mpi_probe01.f -o mpi_probe01
C     Run: mpirun -np 3 mpi_probe01
C
      program main
      include 'mpif.h'
      integer rank, tag
      integer ierr, n, i
      real x
      integer status(MPI_STATUS_SIZE)

      tag = 0
      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      IF (rank .EQ. 0) THEN
         print *, 'Proc 0: Sleep for 5 sec before send...'
         CALL SLEEP(5)
         print *, 'Proc 0: Sending message...'
         CALL MPI_SEND(i, 1, MPI_INTEGER, 2, tag, MPI_COMM_WORLD, ierr)

MPI_PROBE (Fortran)

      ELSE IF (rank .EQ. 1) THEN
         print *, 'Proc 1: Sleep for 3 sec before send...'
         CALL SLEEP(3)
         print *, 'Proc 1: Sending message...'
         CALL MPI_SEND(x, 1, MPI_REAL, 2, tag, MPI_COMM_WORLD, ierr)
      ELSE
C        rank .EQ. 2
         DO n = 1, 2
            print *, 'Proc 2: Wait for message to come...'
            CALL MPI_PROBE(MPI_ANY_SOURCE, tag, MPI_COMM_WORLD,
     1      status, ierr)
            print *, 'Proc 2: New message...'
            IF (status(MPI_SOURCE) .EQ. 0) THEN
               CALL MPI_RECV(i, 1, MPI_INTEGER, 0, tag,
     1         MPI_COMM_WORLD, status, ierr)
            ELSE
               CALL MPI_RECV(x, 1, MPI_REAL, 1, tag,
     1         MPI_COMM_WORLD, status, ierr)
            END IF
            print *, 'Proc 2: Received message from Proc ',
     1      status(MPI_SOURCE), ' successfully!!'
         END DO
      END IF

      call MPI_FINALIZE(ierr)
      end

Point-to-Point Communication: Nonblocking Functions

Nonblocking Send and Receive
- A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not access any part of the send buffer after a nonblocking send operation is called, until the send completes.
- A nonblocking receive call indicates that the system may start writing data into the receive buffer. The receiver should not access any part of the receive buffer after a nonblocking receive operation is called, until the receive completes.

Nonblocking Functions – MPI_ISEND
- Start a standard mode, nonblocking send (immediate send)
- C
  int MPI_Isend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
  Input Parameters
  - buf: initial address of send buffer (choice)
  - count: number of elements in send buffer (nonnegative integer)
  - datatype: datatype of each send buffer element (handle)
  - dest: rank of destination (integer)
  - tag: message tag (integer)
  - comm: communicator (handle)

Nonblocking Functions – MPI_ISEND
Output Parameters
- request: communication request (handle)
- The request can be used later to query the status of the communication or wait for its completion
- Fortran
  MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

Nonblocking Functions – MPI_IBSEND
- Start a buffered mode, nonblocking send
- Parameters are the same as MPI_ISEND
- C
  int MPI_Ibsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
- Fortran
  MPI_IBSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

Nonblocking Functions – MPI_ISSEND
- Start a synchronous mode, nonblocking send
- Parameters are the same as MPI_ISEND
- C
  int MPI_Issend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
- Fortran
  MPI_ISSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

Nonblocking Functions – MPI_IRSEND
- Start a ready mode, nonblocking send
- Parameters are the same as MPI_ISEND
- C
  int MPI_Irsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
- Fortran
  MPI_IRSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR

Nonblocking Functions – MPI_IRECV
- Start a nonblocking receive
- C
  int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
  Input Parameters
  - count: number of elements in receive buffer (integer)
  - datatype: datatype of each receive buffer element (handle)
  - source: rank of source (integer); MPI_ANY_SOURCE receives from any source in the communicator
  - tag: message tag (integer); MPI_ANY_TAG accepts any incoming message tag
  - comm: communicator (handle)

Nonblocking Functions – MPI_IRECV
Output Parameters
- buf: initial address of receive buffer (choice)
- request: communication request (handle)
- Fortran
  MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR

Nonblocking Functions – MPI_WAIT
- Blocking call that completes the MPI_Isend() or MPI_Irecv() call
- This call will make your process hang until the operation identified by the request is complete
- Following MPI_Isend immediately with MPI_Wait is equivalent to calling MPI_Send, but splitting the call into the two parts lets you do a number of other things between MPI_Isend and MPI_Wait

Nonblocking Functions – MPI_WAIT
- C
  int MPI_Wait(MPI_Request *request, MPI_Status *status)
  Input/Output Parameter
  - request: request (handle)
  Output Parameter
  - status: status object (Status)
- Fortran
  MPI_WAIT(REQUEST, STATUS, IERROR)
  INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

Nonblocking Functions – MPI_TEST
- Nonblocking call that tests for completion of an MPI_Isend() or MPI_Irecv() call
- Unlike MPI_Wait, this call does not hang waiting for the communication request to complete
- It returns right away with flag = true if the operation is complete, and the value of request is set to MPI_REQUEST_NULL
- Otherwise flag = false and the value of request remains unchanged
- Most commonly you are likely to use MPI_Test in a loop: checking whether the communication has completed, doing something else, then checking again, and so on

Nonblocking Functions – MPI_TEST
- C
  int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
  Input/Output Parameter
  - request: communication request (handle)
  Output Parameters
  - flag: true if operation completed (logical)
  - status: status object (Status)
- Fortran
  MPI_TEST(REQUEST, FLAG, STATUS, IERROR)
  LOGICAL FLAG
  INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

Nonblocking Functions – MPI_IPROBE
- MPI_IPROBE(source, tag, comm, flag, status) returns flag = true if there is a message that can be received and that matches the pattern specified by the arguments source, tag, and comm
- MPI_IPROBE behaves like MPI_PROBE except that it is a nonblocking call
- C
  int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
  Input Parameters
  - source: source rank, or MPI_ANY_SOURCE (integer)
  - tag: tag value, or MPI_ANY_TAG (integer)
  - comm: communicator (handle)
  Output Parameters
  - flag: true if a matching message is available (logical)
  - status: status object (Status)
- Fortran
  MPI_IPROBE(SOURCE, TAG, COMM, FLAG, STATUS, IERROR)
  LOGICAL FLAG
  INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
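The transcript gives no MPI_IPROBE example, so here is a minimal receiver-side sketch in the style of the MPI_PROBE program above (illustrative only; tag is assumed to be set up as in that program):

    int flag = 0;
    int n;
    MPI_Status status;

    /* poll for a matching message without blocking */
    while (!flag) {
        MPI_Iprobe(MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &flag, &status);
        if (!flag) {
            /* no message yet: do some useful work, then poll again */
        }
    }
    /* a matching message is now available, so this receive will not block */
    MPI_Recv(&n, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG, MPI_COMM_WORLD, &status);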

Notes on Nonblocking Communications
- Typically used in situations where a lot of computation could be performed while a process is waiting for a send/receive to complete
- Must ensure the arguments to a send/receive are unmodified until completion
- NOT a fast alternative to traditional send/receive
- They won't hang a program
- They can be interleaved with useful work (a sketch of this overlap follows)
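As a sketch of the overlap idea (illustrative only: compute_step() stands for application work that does not touch recvbuf, and recvbuf, COUNT, src, and tag are hypothetical names, not taken from the slides):

    MPI_Request req;
    MPI_Status status;
    int flag = 0;

    MPI_Irecv(recvbuf, COUNT, MPI_FLOAT, src, tag, MPI_COMM_WORLD, &req);
    do {
        compute_step();                  /* useful work while the message is in flight */
        MPI_Test(&req, &flag, &status);  /* check for completion without blocking */
    } while (!flag);
    /* recvbuf is now safe to read */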

Nonblocking 1
- Demonstrates nonblocking communication using a nonblocking send and receive, with MPI_Wait used to ensure the communication is complete
- MPI Functions Used
  MPI_Init
  MPI_Comm_rank
  MPI_Isend
  MPI_Irecv
  MPI_Wait
  MPI_Finalize

Nonblocking 1 (C)

/*
 * Proc 0 uses MPI_Isend to send a message,
 * proc 1 uses MPI_Irecv to receive it;
 * MPI_Wait makes sure the messages are sent or received.
 */
#include <stdio.h>      /* for input/output */
#include <stdlib.h>     /* for malloc/free */
#include <mpi.h>        /* for MPI routines */
#define BUFSIZE 2048    /* the size of the message being passed */

int main(int argc, char** argv)
{
    int rank;               /* the rank of this process */
    int tag = 0;
    float *sendbuf, *recvbuf;
    MPI_Status status;
    MPI_Request request;

    sendbuf = (float *)malloc(sizeof(float) * BUFSIZE);
    recvbuf = (float *)malloc(sizeof(float) * BUFSIZE);

Nonblocking 1 (C)

    MPI_Init(&argc, &argv);                 /* initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* get my rank */

    if (rank == 0)
    {
        printf("Hello world! I am proc 0, sending to proc 1\n");
        /* send to proc 1 */
        MPI_Isend(sendbuf, BUFSIZE, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    else if (rank == 1)
    {
        /* receive from proc 0 */
        MPI_Irecv(recvbuf, BUFSIZE, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    printf("Proc %d finished!!\n", rank);
    MPI_Finalize();
    free(sendbuf);
    free(recvbuf);
    return 0;
}

Nonblocking 1 (Fortran)

      program main
      include 'mpif.h'
      integer ierr, rank
      real sendbuf(2048)
      real recvbuf(2048)
      integer count, tag, request
      integer status(MPI_STATUS_SIZE)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      count = 2048
      tag = 0
      if (rank == 0) then
         print *, 'Hello world! I am proc ', rank,
     1   ', sending to proc 1'
         call MPI_ISEND(sendbuf, count, MPI_REAL,
     1   1, tag, MPI_COMM_WORLD, request, ierr)
         call MPI_WAIT(request, status, ierr)
      else if (rank == 1) then
         call MPI_IRECV(recvbuf, count, MPI_REAL,
     1   0, tag, MPI_COMM_WORLD, request, ierr)
         call MPI_WAIT(request, status, ierr)
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

Nonblocking 2
- Demonstrates nonblocking communication using a nonblocking send and receive, with MPI_Test used to test whether the communication is complete
- MPI Functions Used
  MPI_Init
  MPI_Comm_rank
  MPI_Isend
  MPI_Irecv
  MPI_Wait
  MPI_Test
  MPI_Finalize

Nonblocking 2 (C)

/*
 * Proc 0 uses MPI_Isend to send a message,
 * proc 1 uses MPI_Irecv to receive it.
 */
#include <stdio.h>      /* for input/output */
#include <stdlib.h>     /* for malloc/free */
#include <unistd.h>     /* for sleep */
#include <mpi.h>        /* for MPI routines */
#define BUFSIZE 2048    /* the size of the message being passed */

int main(int argc, char** argv)
{
    int rank;               /* the rank of this process */
    int i = 0;
    int tag = 0;
    int flag = 0;
    float *sendbuf, *recvbuf;
    MPI_Status status;
    MPI_Request request;

    sendbuf = (float *)malloc(sizeof(float) * BUFSIZE);
    recvbuf = (float *)malloc(sizeof(float) * BUFSIZE);

    MPI_Init(&argc, &argv);     /* initialize MPI */

Nonblocking 2 (C)

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* get my rank */
    if (rank == 0)
    {
        /* sleep for 3 sec */
        sleep(3);
        printf("Hello world! I am proc 0, sending to proc 1\n");
        /* send to proc 1 */
        MPI_Isend(sendbuf, BUFSIZE, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    else if (rank == 1)
    {
        /* receive from proc 0 */
        MPI_Irecv(recvbuf, BUFSIZE, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &request);
        do
        {
            printf("Wait %d\n", i++);
            MPI_Test(&request, &flag, &status);
        } while (flag == 0);
    }
    printf("Proc %d finished!!\n", rank);
    MPI_Finalize();
    free(sendbuf);
    free(recvbuf);
    return 0;
}

Nonblocking 2 (Fortran)

      program main
      include 'mpif.h'
      integer ierr, rank
      real sendbuf(2048)
      real recvbuf(2048)
      integer count, tag, request, i
      integer status(MPI_STATUS_SIZE)
      logical flag

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      count = 2048
      tag = 0
      i = 0
      flag = .FALSE.
      if (rank == 0) then
         call SLEEP(3)
         print *, 'Hello world! I am proc ', rank,
     1   ', sending to proc 1'

Nonblocking 2 (Fortran)

         call MPI_ISEND(sendbuf, count, MPI_REAL,
     1   1, tag, MPI_COMM_WORLD, request, ierr)
         call MPI_WAIT(request, status, ierr)
      else if (rank == 1) then
         call MPI_IRECV(recvbuf, count, MPI_REAL,
     1   0, tag, MPI_COMM_WORLD, request, ierr)
         DO WHILE (.NOT. flag)
            print *, 'Wait ', i
            call MPI_TEST(request, flag, status, ierr)
            i = i + 1
         END DO
      endif
      print *, 'Proc ', rank, ' finished!!'
      call MPI_FINALIZE(ierr)
      end

Case Study
- MPI_SEND and MPI_RECV: let's look at 3 reasonable ways to perform communication between 2 processes that exchange messages
- One always works
- One always deadlocks; that is, both processes hang waiting for the other to communicate
- One may or may not work, depending on the actual protocols used by the MPI implementation

Case Study – One Always Works
- Algorithm:
  Determine what rank the process is
  If rank == 0
  - Send a message from send_buffer to the process with rank 1
  - Receive a message into recv_buffer from the process with rank 1
  Else if rank == 1
  - Receive a message into recv_buffer from the process with rank 0
  - Send a message from send_buffer to the process with rank 0
[Timeline: Processor 0 sends first, then receives; Processor 1 receives first, then sends]

Pseudo code – One Always Works
- Determine the rank of the process
- If rank == 0 then
  Send message to rank 1
  Receive message from rank 1
- Else if rank == 1 then
  Receive message from rank 0
  Send message to rank 0
- End
- C: casestudy01.c
  Compilation: mpicc casestudy01.c -o casestudy01
  Run: mpirun -np 2 casestudy01
- Fortran: casestudy01.f
  Compilation: mpif77 casestudy01.f -o casestudy01
  Run: mpirun -np 2 casestudy01
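The file casestudy01.c itself is not reproduced in the transcript; the following is a hedged reconstruction from the pseudo code above (the buffer size and tag are illustrative, not taken from the original source):

    #include <stdio.h>
    #include <mpi.h>

    #define N 16    /* illustrative message length */

    int main(int argc, char** argv)
    {
        int rank, tag = 0;
        int send_buffer[N], recv_buffer[N];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* rank 0 sends first, then receives */
            MPI_Send(send_buffer, N, MPI_INT, 1, tag, MPI_COMM_WORLD);
            MPI_Recv(recv_buffer, N, MPI_INT, 1, tag, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            /* rank 1 receives first, then sends: the calls always pair up */
            MPI_Recv(recv_buffer, N, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
            MPI_Send(send_buffer, N, MPI_INT, 0, tag, MPI_COMM_WORLD);
        }

        printf("Proc %d done\n", rank);
        MPI_Finalize();
        return 0;
    }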

Case Study – One Always Deadlocks
- Algorithm:
  Determine what rank the process is
  If rank == 0
  - Receive a message into recv_buffer from the process with rank 1
  - Send a message from send_buffer to the process with rank 1
  Else if rank == 1
  - Receive a message into recv_buffer from the process with rank 0
  - Send a message from send_buffer to the process with rank 0
[Timeline: Processor 0 receives first, then sends; Processor 1 receives first, then sends]

Pseudo code – One Always Deadlocks
- Determine the rank of the process
- If rank == 0 then
  Receive message from rank 1
  Send message to rank 1
- Else if rank == 1 then
  Receive message from rank 0
  Send message to rank 0
- End
- C: casestudy02.c
  Compilation: mpicc casestudy02.c -o casestudy02
  Run: mpirun -np 2 casestudy02
- Fortran: casestudy02.f
  Compilation: mpif77 casestudy02.f -o casestudy02
  Run: mpirun -np 2 casestudy02
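For contrast, a sketch of the branch that makes casestudy02 deadlock (again a reconstruction using the same illustrative N, tag, and buffers as above, not the original file): both processes block in MPI_Recv, so neither ever reaches its MPI_Send.

    if (rank == 0) {
        /* blocks forever: rank 1 is also waiting in MPI_Recv */
        MPI_Recv(recv_buffer, N, MPI_INT, 1, tag, MPI_COMM_WORLD, &status);
        MPI_Send(send_buffer, N, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocks forever: rank 0 is also waiting in MPI_Recv */
        MPI_Recv(recv_buffer, N, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        MPI_Send(send_buffer, N, MPI_INT, 0, tag, MPI_COMM_WORLD);
    }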

Case Study – the Worst Case, One May or May Not Work
- Algorithm:
  Determine what rank the process is
  If rank == 0
  - Send a message from send_buffer to the process with rank 1
  - Receive a message into recv_buffer from the process with rank 1
  Else if rank == 1
  - Send a message from send_buffer to the process with rank 0
  - Receive a message into recv_buffer from the process with rank 0
[Timeline: Processor 0 sends first, then receives; Processor 1 sends first, then receives]

Pseudo code – the Worst Case, One May or May Not Work
- Determine the rank of the process
- If rank == 0 then
  Send message to rank 1
  Receive message from rank 1
- Else if rank == 1 then
  Send message to rank 0
  Receive message from rank 0
- End
- C: casestudy03.c
  Compilation: mpicc casestudy03.c -o casestudy03
  Run: mpirun -np 2 casestudy03
- Fortran: casestudy03.f
  Compilation: mpif77 casestudy03.f -o casestudy03
  Run: mpirun -np 2 casestudy03

Reasons for Work and Deadlock
- The programs above were tested under LAM MPI
- In standard mode, it is up to MPI to decide whether outgoing messages will be buffered
- MPI may buffer outgoing messages (here, those smaller than 2048 bytes); in such a case, the send call may complete before a matching receive is invoked
- On the other hand (messages of 2048 bytes or more), buffer space may be unavailable, or MPI may choose not to buffer outgoing messages for performance reasons; in this case, the send call will not complete until a matching receive has been posted and the data has been moved to the receiver
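One portable way to make the "may or may not work" exchange safe, regardless of the implementation's buffering threshold, is to let MPI pair the two operations with MPI_Sendrecv, as introduced earlier (a sketch using the same illustrative N, tag, and buffers as in the case study reconstructions):

    int other = 1 - rank;   /* the peer in a 2-process exchange */
    MPI_Sendrecv(send_buffer, N, MPI_INT, other, tag,
                 recv_buffer, N, MPI_INT, other, tag,
                 MPI_COMM_WORLD, &status);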

The End