Generalized Requests

The current definition Generalized requests are defined in MPI-2, Chapter 8 (External Interfaces), page 166, line 16. The objective of this MPI-2 addition is to allow users of MPI to create new nonblocking operations with an interface similar to what is already present in MPI. This can be used to layer new functionality on top of MPI.

More info Such an outstanding nonblocking operation is represented by a (generalized) request. A fundamental property of nonblocking operations is that progress toward completion occurs asynchronously, i.e., concurrently with normal program execution. Typically, this requires code to execute concurrently with the user code, e.g., in a separate thread or in a signal handler. Operating systems provide a variety of mechanisms in support of concurrent execution. MPI does not attempt to standardize or replace these mechanisms: it is assumed that programmers who wish to define new asynchronous operations will use the mechanisms provided by the underlying operating system.

MPI Role Generalized requests are managed entirely by the application/library except for one piece of state: completion, which is tracked by MPI. The application must call MPI_GREQUEST_COMPLETE to notify MPI that the generalized request has completed.

Generalized requests MPI_GREQUEST_START(query_fn, free_fn, cancel_fn, extra_state, request)
–IN query_fn: callback function invoked when the request status is queried (callback function)
–IN free_fn: callback function invoked when the request is freed (callback function)
–IN cancel_fn: callback function invoked when the request is cancelled (callback function)
–IN extra_state: extra state
–OUT request: generalized request (handle)

The query function typedef int MPI_Grequest_query_function(void *extra_state, MPI_Status *status); The query_fn function computes the status that should be returned for the generalized request. The status also includes information about successful/unsuccessful cancellation of the request. The query_fn callback is invoked by the MPI_{WAIT|TEST}_{ANY|SOME|ALL} call that completed the generalized request associated with it. Note that query_fn is invoked only after MPI_GREQUEST_COMPLETE has been called on the request; it may be invoked several times for the same generalized request, e.g., if the user calls MPI_REQUEST_GET_STATUS several times for that request.

The free function typedef int MPI_Grequest_free_function(void *extra_state); The free_fn function is invoked to clean up user-allocated resources when the generalized request is freed. The free_fn callback is invoked by the MPI_{WAIT|TEST}_{ANY|SOME|ALL} call that completed the generalized request associated with it. free_fn is invoked after the call to query_fn for the same request.

The cancel function typedef int MPI_Grequest_cancel_function(void *extra_state, int complete); The cancel_fn function is invoked to start the cancellation of a generalized request. It is called by MPI_CANCEL(request). MPI passes complete=true to the callback if MPI_GREQUEST_COMPLETE has already been called on the request, and complete=false otherwise.

Possible improvement(s) … Presented 15 January, :30 PM - 3:30 PM, in parallel with the non-blocking collectives session. Euro PVM/MPI 2007: "Extending the MPI-2 Generalized Request Interface", Robert Latham, William Gropp, Robert Ross, and Rajeev Thakur.

Possible improvements - I They propose an MPIX_GREQUEST_START function similar to MPI_GREQUEST_START, but taking an additional function pointer (poll_fn) that allows the MPI implementation to make progress on pending generalized requests. When the MPI implementation tests or waits for completion of a generalized request, the poll function provides a hook for the appropriate AIO completion function. All test and wait routines (MPI_{TEST|WAIT}_{ALL|ANY|SOME}) begin by calling the request's user-provided poll_fn. For the wait routines, the program continues to call poll_fn until either at least one request completes (wait, waitany, waitsome) or all requests complete (wait, waitall).

Possible improvements - II We can give implementations more room for optimization if we introduce the concept of a generalized request class. MPIX_GREQUEST_CLASS_CREATE would take the generalized request query, free, and cancel function pointers already defined in MPI-2, along with the proposed poll function pointer. The routine would also take a new wait function pointer. Initiating a generalized request then reduces to instantiating a request of the specified class via a call to MPIX_GREQUEST_CLASS_ALLOCATE.

The short-term plan Continue with Proposal II: add a class, find a better name, add a poll and a wait function, ensure their arguments are consistent, and send a proposal to the mailing list (in about a week). Open question: how is progress made now, and when?

Progress modes Three acceptable progress modes:
–Signal-like approach: the library uses MPI_GREQUEST_COMPLETE to mark the generalized request as completed
–Non-blocking approach: poll can be used to check for progress (based on an array of generalized requests)
–Blocking approach: the library is allowed to block until one (or some) of the generalized requests passed in the array argument has completed