Chapter 7: The Application Layer - Message Passing


Message passing
Message passing is a form of interaction between two or more processes that provides both communication and synchronization: a message can only be received after it has been sent. Message passing is usually provided through two primitives: send(destination, message) and receive(source, message). After a send, the process may either continue immediately (non-blocking send) or wait until the message has been received (blocking send). A receive can likewise be blocking or non-blocking. Usually the send is non-blocking, allowing the sender to dispatch one or more messages to various destinations as quickly as possible, while the receive is usually blocking, because the receiving process needs input data before it can do useful work.
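To make these semantics concrete, here is a minimal sketch (not part of PVM) using POSIX message queues on a Linux-like system (link with -lrt); the queue name "/demo" is arbitrary and error handling is omitted. The send returns as soon as the message is queued, while the receive blocks until a message is available.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo", O_CREAT | O_RDWR, 0600, &attr);

    /* send(destination, message): returns once the message is queued;
       the sender does not wait for a matching receive (non-blocking in
       the sense used above, as long as the queue is not full) */
    mq_send(q, "hello", strlen("hello") + 1, 0);

    /* receive(source, message): blocks until a message is available,
       which is exactly the synchronization point described above */
    char buf[64];
    mq_receive(q, buf, sizeof(buf), NULL);
    printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo");
    return 0;
}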

Parallel Virtual Machine
PVM is a de facto standard system for message passing; another is MPI (Message Passing Interface).

PVM principles
- User-configured host pool: the hosts are selected by the user for a given run of the PVM program. The host pool may be altered by adding and deleting machines during operation, an important feature for fault tolerance.
- Translucent access to hardware: application programs may either view the hardware environment as an attributeless collection of virtual processing elements or choose to exploit the capabilities of specific machines in the host pool.
- Process-based computation: the unit of parallelism in PVM is a task (often but not always a Unix process). No process-to-processor mapping is implied or enforced by PVM; in particular, multiple tasks may execute on a single processor.
- Explicit message-passing model: tasks cooperate by explicitly sending messages to and receiving messages from one another.
- Heterogeneity support: in terms of machines, networks, and applications. PVM permits messages containing more than one data type to be exchanged between machines with different data representations.
- Multiprocessor support: PVM uses the native message-passing facilities on multiprocessors to take advantage of the underlying hardware. Vendors often supply their own optimized PVM for their systems, which can still communicate with the public PVM version.

PVM provides support for:
- Resource management: add/delete hosts from a virtual machine
- Process control: spawn/kill tasks dynamically
- Message passing: blocking send, blocking and non-blocking receive, multicast messages
- Dynamic task groups: a task can join or leave a group at any time
- Fault tolerance: the virtual machine automatically detects faults and adjusts
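As a sketch of how these facilities look in code, the fragment below adds a host, spawns workers, and later kills them; the executable name "worker" and the host name "node2" are hypothetical, and error handling is omitted.

#include "pvm3.h"

int main(void) {
    int tids[4], infos[1];
    char *hosts[] = { "node2" };

    pvm_mytid();                        /* enroll this process as a PVM task */
    pvm_addhosts(hosts, 1, infos);      /* grow the virtual machine */

    /* spawn 4 copies of "worker"; PVM chooses the hosts (PvmTaskDefault) */
    int n = pvm_spawn("worker", NULL, PvmTaskDefault, "", 4, tids);

    /* ... computation and message passing happen here ... */

    for (int i = 0; i < n; i++)
        pvm_kill(tids[i]);              /* kill tasks dynamically */
    pvm_delhosts(hosts, 1, infos);      /* shrink the host pool again */
    pvm_exit();
    return 0;
}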

Popular PVM uses
- Poor man's supercomputer: Beowulf (PC) clusters running Linux, Solaris, or NT; cobble together whatever resources you can get.
- Metacomputer linking multiple supercomputers for ultimate performance: runs have combined nearly 3000 processors and up to 53 supercomputers.
- Education tool: teaching parallel programming; academic and thesis research.

Message buffers
int bufid = pvm_initsend( int encoding )
If the user is using only a single send buffer (the typical case), this is the only required buffer routine. The new buffer identifier is returned in bufid. The encoding options are as follows:
- PvmDataDefault: XDR encoding is used by default. It encodes integers, floats, etc. in a machine-independent format, so the message can be read by any machine in a heterogeneous environment.
- PvmDataRaw: no encoding is done. Messages are sent in their original format; only the same type of machine can read them.
- PvmDataInPlace: data are left in place to save on packing costs. The buffer contains only sizes and pointers to the items to be sent; when pvm_send() is called, the items are copied directly out of the user's memory.

Packing of data
Each of the following C routines packs an array of the given data type into the active send buffer. They can be called multiple times to pack data into a single message, so a message can contain several arrays, each with a different data type. The arguments for each routine are a pointer to the first item to be packed, nitem, the total number of items to pack from the array, and stride, the stride to use when packing. A stride of 1 means a contiguous vector is packed, a stride of 2 means every other item is packed, and so on.
int info = pvm_pkbyte( char *cp, int nitem, int stride )
int info = pvm_pkcplx( float *xp, int nitem, int stride )
int info = pvm_pkdcplx( double *zp, int nitem, int stride )
int info = pvm_pkdouble( double *dp, int nitem, int stride )
int info = pvm_pkfloat( float *fp, int nitem, int stride )
etc.
There are also tools to ease the packing of structures.
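For example, the following sketch builds a single message containing an int array and a double array; the stride of 2 packs every other element of samples. (A sketch only; the message still has to be sent with pvm_send(), covered on the next slide.)

#include "pvm3.h"

void pack_example(void) {
    int    counts[4]  = { 1, 2, 3, 4 };
    double samples[8] = { 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 };

    pvm_initsend(PvmDataDefault);   /* fresh send buffer, portable XDR encoding */
    pvm_pkint(counts, 4, 1);        /* stride 1: all four ints, contiguous */
    pvm_pkdouble(samples, 4, 2);    /* stride 2: elements 0.1, 0.3, 0.5, 0.7 */
}

The receiver must unpack the items in the same order and with the same types as they were packed.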

Sending and receiving messages
int info = pvm_send( int tid, int msgtag )
int info = pvm_mcast( int *tids, int ntask, int msgtag )
The routine pvm_send() labels the message with a positive integer identifier msgtag and sends it immediately to the process tid. The routine pvm_mcast() broadcasts the message to all tasks specified in the integer array tids (except itself).
int bufid = pvm_recv( int tid, int msgtag )
This blocking receive routine waits until a message with label msgtag has arrived from tid. A value of -1 for msgtag or tid matches anything (wildcard). The message is then placed in a newly created active receive buffer. Non-blocking and time-out versions of receive are also provided, and with pvm_probe() the receive queue can be checked for messages of a certain type or sender. Functions that combine packing/sending and receiving/unpacking are provided as well; on multiprocessor systems these can often make better use of the native message-passing facilities and therefore run faster.
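A matching receiver for the message packed on the previous slide might look like the following sketch; it blocks with wildcards for both sender and tag, then uses pvm_bufinfo() to discover who sent the message.

#include "pvm3.h"

void recv_example(void) {
    int    counts[4];
    double samples[4];
    int    nbytes, msgtag, tid;

    int bufid = pvm_recv(-1, -1);   /* block: any sender, any msgtag */
    pvm_bufinfo(bufid, &nbytes, &msgtag, &tid);   /* who sent it, which tag */

    pvm_upkint(counts, 4, 1);       /* unpack in the order it was packed */
    pvm_upkdouble(samples, 4, 1);   /* the four doubles the sender selected */
}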

XPVM Graphical Console and Monitor
XPVM provides a graphical interface to the PVM console commands and information, along with several animated views for monitoring the execution of PVM programs; it is used to assist in debugging and performance tuning. XPVM provides point-and-click access to the PVM console commands. A pull-down menu allows users to add or delete hosts to configure the virtual machine. Tasks can be spawned using a dialog box that prompts for all spawn options, including the trace mask that determines which PVM routines XPVM traces. In the host view, the Active color implies that at least one task on that host is busy executing useful work; the System color means that at least one task is busy executing PVM system routines.

Space-time view
The Space-Time View shows the status of individual tasks as they execute across all hosts. The Computing color shows the times when a task is busy executing useful user computations. The Overhead color marks the places where a task executes PVM system routines for communication, task control, etc. The Waiting color indicates time periods spent waiting for messages from other tasks.

Remote Procedure Calls
To the calling program on the client, this looks like a normal function call, e.g. int func(parameter list). But instead of a local function, a stub function is called, which passes the information to a similar stub function on the server machine. There the function func(parameter list) is actually executed, and its return value is transported back to the calling program on the client. Similar approaches are now also available for object-oriented languages and systems.
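The stub idea can be illustrated with a hand-written sketch (no RPC framework involved): func() presents the ordinary local signature, but internally marshals its argument, ships it to the server, and unmarshals the reply. The server address 127.0.0.1 and port 5000 are placeholders.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_to_server(void) {    /* placeholder transport setup */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(5000) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    int s = socket(AF_INET, SOCK_STREAM, 0);
    connect(s, (struct sockaddr *)&addr, sizeof(addr));
    return s;
}

int func(int x) {                       /* same signature as the local version */
    int sock = connect_to_server();
    uint32_t arg = htonl((uint32_t)x);  /* marshal in network byte order */
    write(sock, &arg, sizeof(arg));     /* ship the request to the server */

    uint32_t reply;
    read(sock, &reply, sizeof(reply));  /* wait for the result */
    close(sock);
    return (int)ntohl(reply);           /* unmarshal and return as usual */
}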

SUN RPC
One of the first commercially available examples; it is used in the Network File System (NFS), allowing workstations to easily use file systems on other workstations or servers. It consists of the following parts:
1. RPCGEN: a compiler that takes the definition of a remote procedure interface, written in a C-like language, and generates the client stubs and the server stubs.
2. XDR (eXternal Data Representation): a standard way of encoding data portably between different systems. It imposes big-endian byte ordering, and the minimum size of any field is 32 bits; this means that both the client and the server have to perform some translation.
3. A run-time library.
The RPC protocol can be implemented on any transport protocol. In the case of TCP/IP, it can use either TCP or UDP as the transport vehicle. If UDP is used, remember that it does not provide reliability, so the caller program itself must ensure it (using timeouts and retransmissions, usually implemented in RPC library routines). Note that even with TCP, the caller program still needs a timeout routine to deal with exceptional situations such as a server crash.
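As an illustration, a minimal hypothetical rpcgen input file might look as follows; running rpcgen on it generates the client stub, the server stub, and the XDR conversion routines.

/* add.x -- hypothetical interface definition for rpcgen */
struct intpair {
    int a;
    int b;
};

program ADD_PROG {
    version ADD_VERS {
        int ADD(intpair) = 1;    /* procedure number 1 */
    } = 1;                       /* version number */
} = 0x20000001;                  /* program number (user-defined range) */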

RPC message
The RPC call message consists of several fields:
- Identification: the remote program number identifies a functional group of procedures, for instance a file system, which would include individual procedures like read and write. The individual procedures are identified by a unique procedure number, and as the remote program evolves, a version number is assigned to the different releases.
- Authentication fields: two fields, credentials and verifier, are provided for authenticating the caller to the service. It is up to the server to use this information for user authentication. Some authentication protocols are: null authentication, UNIX authentication, and DES authentication.
- Procedure parameters: the data (parameters) in XDR format passed to the remote procedure.
Portmap is a server application (on port 111) that maps a program number and its version number to the TCP or UDP port number used by the program.

Client and server code
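The code from the original slide is not preserved in this transcript; the following is a minimal sketch of what the client and server for the hypothetical add.x interface above would look like with rpcgen-generated stubs (error handling mostly omitted).

/* client.c */
#include <stdio.h>
#include "add.h"                       /* header generated by rpcgen */

int main(int argc, char **argv) {
    CLIENT *clnt = clnt_create(argv[1], ADD_PROG, ADD_VERS, "udp");
    if (clnt == NULL) { clnt_pcreateerror(argv[1]); return 1; }

    intpair args = { 3, 4 };
    int *result = add_1(&args, clnt);  /* the generated stub: looks local */
    if (result != NULL)
        printf("3 + 4 = %d\n", *result);

    clnt_destroy(clnt);
    return 0;
}

/* server.c -- the service procedure; rpcgen's generated main()
   registers with portmap and dispatches incoming calls to it */
int *add_1_svc(intpair *args, struct svc_req *req) {
    static int result;                 /* static so it survives the return */
    result = args->a + args->b;
    return &result;
}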

RMI (Remote Method Invocation)
Objects are remote if they reside in a different JVM (Java Virtual Machine). If the marshalled parameters are local objects, they are passed by copy using object serialization; these objects must implement the java.io.Serializable interface. Remote objects are passed by reference. The stub and skeleton class files are generated with the rmic compiler. Remote objects are defined by first declaring an interface that specifies the methods that may be invoked remotely; this interface must extend java.rmi.Remote, and each method must throw java.rmi.RemoteException. The implementation class must extend java.rmi.server.UnicastRemoteObject, allowing the creation of a single remote object that listens for network requests using RMI's default scheme of sockets for network communication. A client can get a reference to a remote object using the Naming.lookup(name) method, where name is a URL-like string: rmi://host/objectName.

CORBA
CORBA allows heterogeneous client and server applications to communicate, e.g. a C++ program accessing a database written in COBOL. The stubs and skeletons are generated by an IDL compiler. Microsoft has its own standard, the Component Object Model (COM), which is the basis for Object Linking and Embedding (OLE).

(A)synchronous RPC
With synchronous RPC the client blocks until the reply arrives, preserving the semantics of a local call; with asynchronous RPC the client continues immediately after issuing the request and collects the result later, analogous to the non-blocking send described at the beginning of this chapter.