Non-Blocking I/O
CS550 Operating Systems

Outline
- Continued discussion of semaphores from the previous lecture notes, as necessary
- MPI Types
- What is non-blocking I/O?
- Review of MPI_Irecv and MPI_Isend
- Example Code

MPI Types
- MPI_CHAR
- MPI_SHORT
- MPI_INT
- MPI_LONG
- MPI_FLOAT
- MPI_DOUBLE
Many other types exist. These types are analogous to C primitive types. See the MPI reference manual for more examples.
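
As a quick illustration of the correspondence with C primitive types, the sketch below asks MPI for the size it associates with each of the listed type constants. MPI_Type_size is a standard MPI routine, but it is not covered in this lecture; the program layout is only an assumption for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int size;
        /* Each MPI type constant corresponds to a C primitive type. */
        MPI_Type_size(MPI_CHAR,   &size); printf("MPI_CHAR   (C char):   %d bytes\n", size);
        MPI_Type_size(MPI_SHORT,  &size); printf("MPI_SHORT  (C short):  %d bytes\n", size);
        MPI_Type_size(MPI_INT,    &size); printf("MPI_INT    (C int):    %d bytes\n", size);
        MPI_Type_size(MPI_LONG,   &size); printf("MPI_LONG   (C long):   %d bytes\n", size);
        MPI_Type_size(MPI_FLOAT,  &size); printf("MPI_FLOAT  (C float):  %d bytes\n", size);
        MPI_Type_size(MPI_DOUBLE, &size); printf("MPI_DOUBLE (C double): %d bytes\n", size);
    }

    MPI_Finalize();
    return 0;
}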

Blocking I/O
In blocking I/O, when a message is sent, the sending process waits until it knows the message has been received before it continues processing. Similarly, when a message is requested (i.e., a receive function is called), the program waits until the message has arrived before it continues processing. (Strictly speaking, a blocking MPI_Send is only required to return once the send buffer can safely be reused, which, depending on buffering, may happen before the receiver has the message.)

Blocking I/O Example

Process 1                            Process 2
|MPI_Send     |  --1. send msg-->  |MPI_Recv     |
|wait for ack |                    |wait for msg |
|ack received |  <--2. send ack--  |ack receipt  |
|3b. continue |                    |3a. continue |
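
A minimal code sketch of the blocking exchange above; the ranks, tag, and message contents are assumptions for illustration. Rank 0 blocks in MPI_Send until its buffer is safe to reuse, and rank 1 blocks in MPI_Recv until the message has actually arrived.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, i;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double data[100];

    if (rank == 0) {
        for (i = 0; i < 100; i++) data[i] = (double)i;
        /* 1. send msg: MPI_Send does not return until the data buffer
         *    is safe to modify again (the process waits here). */
        MPI_Send(data, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        /* 3b. continue */
    } else if (rank == 1) {
        /* MPI_Recv does not return until the message has arrived. */
        MPI_Recv(data, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %.1f ... %.1f\n", data[0], data[99]);
        /* 3a. continue */
    }

    MPI_Finalize();
    return 0;
}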

Non-Blocking I/O
Non-blocking I/O allows a message to be sent, or a receive to be posted, without waiting for confirmation that the message has been delivered. This means that a program may continue processing immediately after initiating a send or after requesting that a message be received.

Non-Blocking I/O Example

Process 1                            Process 2
|MPI_Isend    |  --1. send msg-->  |MPI_Irecv      |
|2a. continue |                    |2b. continue   |
|             |                    |3. Do WORK     |
|             |                    |MPI_Wait       |
|             |                    |4. wait on msg |
|             |                    |5. work on msg |
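
A sketch of this example in code; the ranks, tag, and the do_work placeholder are assumptions. The receiver posts MPI_Irecv, keeps computing, and only blocks in MPI_Wait when it actually needs the message. The sketch also completes the sender's request with MPI_Wait, which the diagram omits: MPI requires every non-blocking request to be completed before its buffer is reused or the program finalizes.

#include <mpi.h>
#include <stdio.h>

/* Placeholder for computation to overlap with communication. */
static void do_work(void) { }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = 0;
    MPI_Request req;

    if (rank == 0) {
        msg = 99;
        MPI_Isend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);  /* 1. send msg */
        do_work();                           /* 2a. continue while the send is in flight */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* msg buffer is safe to reuse after this */
    } else if (rank == 1) {
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);  /* post the receive */
        do_work();                           /* 2b-3. continue and do WORK */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* 4. wait until the msg has arrived */
        printf("Rank 1 received %d\n", msg); /* 5. work on msg */
    }

    MPI_Finalize();
    return 0;
}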

Non-Blocking I/O Example 2

Process 1                            Process 2
|             |                    |MPI_Irecv      |
|1a. Do WORK  |                    |1b. continue   |
|             |                    |2. Do WORK     |
|             |                    |MPI_Wait       |
|MPI_Isend    |  --5. send msg-->  |4. wait on msg |
|             |                    |6. work on msg |
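
Example 2 in code form, again with illustrative ranks, tag, and a hypothetical do_work placeholder: the receiver posts MPI_Irecv before the sender has even produced the data, overlaps communication with computation, and blocks only when the message is actually needed.

#include <mpi.h>
#include <stdio.h>

/* Placeholder for computation to overlap with communication. */
static void do_work(void) { }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = 0;
    MPI_Request req;

    if (rank == 1) {
        /* Post the receive before the sender has produced the data. */
        MPI_Irecv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        do_work();                           /* 1b-2. keep computing */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* 4. block only when the msg is needed */
        printf("Rank 1 received %d\n", msg); /* 6. work on msg */
    } else if (rank == 0) {
        do_work();                           /* 1a. produce the data */
        msg = 99;
        MPI_Isend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);  /* 5. send msg */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete the send request */
    }

    MPI_Finalize();
    return 0;
}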

MPI Non-Blocking I/O Functions
- MPI_Get_count – determines the length (element count) of a received message from its status
- MPI_Irecv – non-blocking receive
- MPI_Isend – non-blocking send
- MPI_Wait – waits for a single non-blocking operation (send or receive) to complete
- MPI_Waitall – waits for all of the specified requests (e.g., a list of messages) to complete
- MPI_Waitsome – waits until at least one of the specified requests has completed, then continues processing
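
A sketch combining several of these calls; the tags, element counts, and buffer sizes are made up for illustration. Rank 0 posts two non-blocking sends and completes both with MPI_Waitall; rank 1 completes a non-blocking receive and then uses MPI_Get_count on the resulting status to find out how many elements actually arrived.

#include <mpi.h>
#include <stdio.h>

#define MAX_LEN 64   /* assumed upper bound on the incoming message length */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int data[2][8] = {{0}};
        MPI_Request reqs[2];
        /* Post two non-blocking sends, then wait for both to complete. */
        MPI_Isend(data[0], 8, MPI_INT, 1, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(data[1], 8, MPI_INT, 1, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {
        int buf[MAX_LEN];
        int count;
        MPI_Request req;
        MPI_Status  status;

        MPI_Irecv(buf, MAX_LEN, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);                   /* complete the first receive */
        MPI_Get_count(&status, MPI_INT, &count);   /* actual length of the message */
        printf("Rank 1 received %d ints\n", count);

        /* Second message (tag 1) received with a blocking call for brevity. */
        MPI_Recv(buf, MAX_LEN, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}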

Example Code
See the non-blocking I/O examples from the course webpage.