Copyright © 1995-2003 Clifford Neuman - UNIVERSITY OF SOUTHERN CALIFORNIA - INFORMATION SCIENCES INSTITUTE


Administration
Class web-page: suraj.lums.edu.pk/~cs582a03
TAs: Zia ur Rahman

Administration
Class forum:
–Announcements
–Questions
–Answers
–Participation

How to survive?
Read the survival guide.
How to read papers:
–Read the papers in advance.
–At least skim through them.
Build your own notes.
Form a study group.

End-to-End Argument
QUESTION: Where should distributed-system functions be placed?
Layered system design:
–Different levels of abstraction for simplicity.
–Each lower layer provides service to the layer above it.
–Very well-defined interfaces.

E2E Argument (continued)
The E2E paper argues that functions should be moved closer to the application that uses them.

E2E Argument (continued)
Rationale:
–Some functions can only be implemented completely and correctly with the application's knowledge.
 ▪Examples: reliable message delivery, security (encryption); compare streaming media with banking.
–Applications that do not need certain functions should not have to pay for them.

E2E Counter-Argument
Performance:
–Example: file transfer.
 ▪Reliability checks at lower layers detect problems earlier.
 ▪The transfer can be aborted and retried without waiting for the whole file to be transmitted.
Abstraction:
–Less repetition across applications.
Bottom line: "spread" functionality across layers.

Outline: Communication Models
–General concepts.
–Message passing.
–Distributed shared memory (DSM).
–Remote procedure call (RPC) [Birrell et al.]
 ▪Lightweight RPC [Bershad et al.]
–DSM case studies:
 ▪IVY [Li et al.]
 ▪Linda [Carriero et al.]

Communication Models
Support for processes to communicate among themselves.
–Traditional (centralized) OSs provide local (single-machine) communication support.
–Distributed OSs must also support communication across machine boundaries, over a LAN or WAN.

Communication Paradigms
Two paradigms:
–Message Passing (MP): processes communicate by sending messages.
–Distributed Shared Memory (DSM): processes communicate through a "virtual shared memory".

Message Passing
Basic communication primitives:
–Send message.
–Receive message.
Modes of communication:
–Synchronous versus asynchronous.
Semantics:
–Reliable versus unreliable.
(Diagram: Send places the message on a sending queue; it is delivered to a receiving queue, from which Receive picks it up.)
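
The two primitives map directly onto socket calls. Below is a minimal sketch, assuming a local TCP connection between two threads standing in for two processes; the host, port, and message contents are illustrative choices, not part of the lecture material.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007          # assumed local endpoint for the demo
    ready = threading.Event()

    def receiver():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            ready.set()                       # receiver is now waiting
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(1024)        # "receive message": blocks until data arrives
                print("received:", data.decode())

    def sender():
        ready.wait()
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            s.sendall(b"hello")               # "send message"

    t = threading.Thread(target=receiver)
    t.start()
    sender()
    t.join()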

Synchronous Communication
Blocking send:
–Blocks until the message is transmitted, or
–Blocks until the message is acknowledged.
Blocking receive:
–Waits until a message arrives.
The blocking primitives synchronize the communicating processes.

Asynchronous Communication
Non-blocking send: the sending process continues as soon as the message is queued.
Receive may be blocking or non-blocking:
–Blocking: typically combined with timeouts or with multiple threads.
–Non-blocking: the receiver proceeds while waiting for the message.
 ▪The message is queued upon arrival.
 ▪The process must poll or be interrupted.
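
A small sketch of these options within one process, using a queue as the "receiving queue" from the earlier slide. The queue stands in for the network; this only illustrates blocking versus non-blocking receive, it is not a distributed implementation.

    import queue
    import threading
    import time

    inbox = queue.Queue()

    def sender():
        time.sleep(0.5)
        inbox.put("report ready")      # non-blocking send: enqueue and continue

    threading.Thread(target=sender).start()

    # Non-blocking receive: poll, and keep doing other work if nothing has arrived.
    while True:
        try:
            msg = inbox.get_nowait()
            print("got:", msg)
            break
        except queue.Empty:
            time.sleep(0.1)            # the process keeps running and polls again

    # Blocking receive with a timeout (the other option mentioned above):
    # msg = inbox.get(timeout=2.0)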

Reliability of Communication
Unreliable communication:
–"Best effort": send and hope for the best.
–No ACKs or retransmissions.
–The application must provide its own reliability.
–Example: User Datagram Protocol (UDP).
 ▪Applications using UDP either do not need reliability or build their own, e.g., NFS and DNS (which run over both UDP and TCP), and some audio or video applications.

Reliability of Communication
Reliable communication:
–Comes in different degrees of reliability.
–Processes have some guarantee that messages will be delivered.
–Example: Transmission Control Protocol (TCP).
–Reliability mechanisms:
 ▪Positive acknowledgments (ACKs).
 ▪Negative acknowledgments (NACKs).
–It is possible to build reliability atop an unreliable service (the E2E argument).
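
As a sketch of that last point, here is a stop-and-wait sender built on UDP that retransmits until it receives a positive ACK. The receiver address and the ACK format are assumptions for the example (a matching receiver would simply echo b"ACK"); a real protocol also needs sequence numbers and duplicate filtering.

    import socket

    SERVER = ("127.0.0.1", 50008)     # assumed receiver address

    def reliable_send(payload: bytes, retries: int = 5, timeout: float = 1.0) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            for _ in range(retries):
                s.sendto(payload, SERVER)          # best-effort datagram
                try:
                    ack, _ = s.recvfrom(64)        # wait for a positive ACK
                    if ack == b"ACK":
                        return True                # delivered, as far as we can tell
                except socket.timeout:
                    continue                       # lost? retransmit
        return False                               # give up after N attempts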

Distributed Shared Memory
Motivated by the development of shared-memory multiprocessors, which do share physical memory.
An abstraction for sharing data among processes running on machines that do not share memory.
Processes think they read from and write to a "virtual shared memory".

DSM
Two primitives: read and write.
The OS ensures that all processes see all updates.
–This happens transparently to the processes.

DSM and MP
DSM is an abstraction!
–It gives programmers the flavor of a centralized memory system, a well-known programming environment.
–No need to worry about communication and synchronization.
But it is implemented atop MP:
–There is no physically shared memory.
–The OS takes care of the required communication.
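
A toy illustration of that layering: each "node" keeps a local replica, reads are local, and every write becomes an update message sent to the other replicas. This is a write-broadcast sketch with no ordering or fault handling, invented for the example; it only shows that the shared-memory view sits on top of message passing.

    import queue

    class DsmNode:
        def __init__(self):
            self.store = {}          # local replica of the "shared" memory
            self.inbox = queue.Queue()
            self.peers = []          # message-passing channels to other nodes

        def read(self, addr):
            self.apply_updates()     # drain pending update messages first
            return self.store.get(addr)

        def write(self, addr, value):
            self.store[addr] = value
            for peer in self.peers:                 # the DSM layer, not the
                peer.inbox.put((addr, value))       # application, communicates

        def apply_updates(self):
            while not self.inbox.empty():
                addr, value = self.inbox.get()
                self.store[addr] = value

    a, b = DsmNode(), DsmNode()
    a.peers, b.peers = [b], [a]
    a.write("x", 42)
    print(b.read("x"))   # 42: b sees a's update without any shared physical memory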

Caching in DSM
For performance, DSM caches data locally.
–More efficient access (locality).
–But the caches must be kept consistent.
–In page-based DSM, caching is done at page granularity.
Issues:
–Page size.
–Consistency mechanism.

Approaches to DSM
Hardware-based:
–Multiprocessor architectures with processor-memory modules connected by a high-speed interconnect (e.g., Stanford's DASH and FLASH).
–Specialized hardware handles reads and writes and performs the required consistency mechanisms.

Approaches to DSM
Page-based:
–Example: IVY.
–DSM is implemented as a region of the processor's virtual memory; it occupies the same address range in every participating process.
–The OS keeps DSM data consistent as part of page-fault handling.

Approaches to DSM
Library-based (or language-based):
–Example: Linda.
–A language or language extensions.
–The compiler inserts appropriate library calls wherever processes access DSM items.
–The library calls access local data and communicate when necessary.

DSM Case Studies: IVY
Environment: a "loosely coupled" multiprocessor.
–Memory is physically distributed.
–Memory-mapping managers (in the OS kernel):
 ▪Map local memories into the shared virtual space.
 ▪Use local memory as a cache of the shared virtual space.
 ▪A memory reference may cause a page fault; the page is retrieved and consistency is handled.
(Diagram: processors P1..Pn with local memories M1..Mn, each mapped into the shared DSM space.)

IVY
Issues:
–Read-only versus writable data.
–Locality of reference.
–Granularity (1 KB page size).
 ▪Bigger pages versus smaller pages.

IVY
Memory coherence strategies:
–Page synchronization:
 ▪Invalidation.
 ▪Write broadcast.
–Page ownership:
 ▪Fixed: a page is always owned by the same processor.
 ▪Dynamic.

IVY Page Synchronization
Invalidation:
–On a write fault, invalidate all copies, give the faulting process write access, and fetch a copy of the page if it is not already present.
–Problem: other processors must re-fetch the page when they next read it.
Write broadcast:
–On a write fault, the fault handler writes to all copies.
–Expensive!
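
A minimal sketch of the invalidation strategy, assuming a single-process toy model with no faults, messages, or ownership protocol beyond a simple owner table; it only shows that a write removes every other copy, so later readers re-fetch.

    class Node:
        def __init__(self, name):
            self.name = name
            self.pages = {}                       # page number -> locally cached data

    NODES = []
    OWNER = {}                                    # page number -> node with write access

    def write(node, page, data):
        for other in NODES:                       # invalidate all other copies
            if other is not node:
                other.pages.pop(page, None)
        node.pages[page] = data                   # faulting node gets write access
        OWNER[page] = node

    def read(node, page):
        if page not in node.pages:                # read fault: fetch from the owner
            node.pages[page] = OWNER[page].pages[page]
        return node.pages[page]

    n1, n2 = Node("n1"), Node("n2")
    NODES.extend([n1, n2])
    write(n1, 0, "v1")
    print(read(n2, 0))      # n2 faults and fetches "v1"
    write(n2, 0, "v2")      # n1's copy is invalidated
    print(read(n1, 0))      # n1 must re-fetch: "v2"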

IVY Memory Coherence
The paper discusses approaches to memory coherence in page-based DSM:
–Centralized: a single manager, residing on a single processor, manages all pages.
–Distributed: multiple managers, on multiple processors, each manage a subset of the pages.

DSM Case Studies: Linda
A language-based approach to DSM.
Environment:
–Similar to IVY, i.e., loosely coupled machines connected by a fast broadcast bus.
–Instead of a shared address space, processes make library calls, inserted by the compiler, when accessing DSM.
–The libraries access local data and communicate to maintain consistency.

Linda
The DSM is a tuple space (TS).
Basic operations:
–out(data): adds data to the tuple space.
–in(data): removes matching data from the TS; destructive.
–read(data): same as "in", but the tuple remains in the TS (non-destructive).

Linda Primitives: Examples
out("P", 5, false): the tuple ("P", 5, false) is added to the TS.
–"P" is the name; the other components are data values.
–In the implementation reported in the paper, every node stores a complete copy of the TS.
–out(data) causes the data to be broadcast to every node.

Linda Primitives: Examples
in("P", int i, bool b): a matching tuple, e.g., ("P", 5, false), is removed from the TS, and its values are bound to i and b.
–If a matching tuple is found locally, the local kernel tries to delete the tuple on all nodes.
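
The three primitives can be captured in a short, single-machine sketch. None in a template plays the role of a formal parameter; blocking semantics and the replicated, broadcast-based implementation described above are omitted, and the names (TupleSpace, in_ because "in" is a Python keyword) are choices made for this illustration only.

    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, *tup):                 # out("P", 5, False)
            self.tuples.append(tup)

        def _match(self, template):
            for tup in self.tuples:
                if len(tup) == len(template) and all(
                    t is None or t == v for t, v in zip(template, tup)
                ):
                    return tup
            return None

        def in_(self, *template):            # destructive: removes the tuple
            tup = self._match(template)
            if tup is not None:
                self.tuples.remove(tup)
            return tup

        def read(self, *template):           # non-destructive
            return self._match(template)

    ts = TupleSpace()
    ts.out("P", 5, False)
    print(ts.read("P", None, None))   # ("P", 5, False); the tuple stays in the TS
    print(ts.in_("P", None, None))    # ("P", 5, False); the tuple is removed
    print(ts.read("P", None, None))   # None: nothing matches any more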

Remote Procedure Call
Builds on MP.
Main idea: extend the traditional (local) procedure call to transfer control and data across the network.
Easy to use: analogous to local calls.
But the procedure is executed by a different process, probably on a different machine.
Fits very well with the client-server model.

RPC Mechanism
1. Invoke the RPC.
2. The calling process suspends.
3. Parameters are passed across the network to the target machine.
4. The procedure is executed remotely.
5. When done, the results are passed back to the caller.
6. The caller resumes execution.
Is this synchronous or asynchronous?
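
The six steps can be seen end to end with Python's standard xmlrpc machinery standing in for the stubs and runtime; this is not Birrell and Nelson's system, and the port and procedure name are made up for the demo. Note that the client call blocks until the result arrives, which answers the question above: the basic mechanism is synchronous.

    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    def lookup(name):                      # the remote procedure
        return {"alice": 42}.get(name, -1)

    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(lookup, "lookup")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
    print(proxy.lookup("alice"))   # caller suspends, procedure runs remotely,
                                   # result comes back, caller resumes: 42
    server.shutdown()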

RPC Advantages
Easy to use: a well-known mechanism.
Abstract data type:
–Client-server model.
–The server is a collection of exported procedures operating on some shared resource.
–Example: a file server.
Reliable.

RPC Semantics 1
Delivery guarantees.
"Maybe" call:
–The client cannot tell for sure whether the remote procedure was executed or not, due to message loss, server crash, etc.
–Usually not acceptable.

RPC Semantics 2
"At-least-once" call:
–The remote procedure is executed at least once, but maybe more than once.
–Retransmissions, but no duplicate filtering.
–OK for idempotent operations, e.g., reading data that is read-only.

RPC Semantics 3
"At-most-once" call:
–Most appropriate for non-idempotent operations.
–The remote procedure is executed 0 or 1 times, i.e., exactly once or not at all.
–Uses retransmissions and duplicate filtering.
–Example: the Birrell et al. implementation.
 ▪Uses probes to check whether the server has crashed.
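
A sketch of the server side of duplicate filtering: every call carries a unique call ID, and a retransmission of an already-executed call returns the cached result instead of running the procedure again. The call-ID scheme and table are assumptions for illustration; a real implementation also garbage-collects old entries and detects server reboots (e.g., with the probes mentioned above).

    import uuid

    class AtMostOnceServer:
        def __init__(self):
            self.completed = {}                   # call_id -> cached result

        def handle(self, call_id, proc, *args):
            if call_id in self.completed:         # duplicate (retransmission)
                return self.completed[call_id]
            result = proc(*args)                  # execute at most once
            self.completed[call_id] = result
            return result

    balances = {"acct": 100}

    def debit(account, amount):                   # non-idempotent operation
        balances[account] -= amount
        return balances[account]

    server = AtMostOnceServer()
    call_id = str(uuid.uuid4())                   # client picks a fresh ID per call
    print(server.handle(call_id, debit, "acct", 30))   # 70
    print(server.handle(call_id, debit, "acct", 30))   # still 70: not re-executed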

RPC Implementation 1
(Diagram: the caller side consists of the user, the user stub, and the RPC runtime; the callee side consists of the RPC runtime, the server stub, and the server. On a call, the user stub packs the arguments and the runtime transmits the call packet; the callee runtime receives it, the server stub unpacks the arguments and calls the server, which does the work. On return, the server stub packs the result, the runtimes transmit and receive the result packet, and the user stub unpacks the result and returns to the caller.)

RPC Implementation 2
The RPC runtime mechanism is responsible for retransmissions and acknowledgments.
The stubs are responsible for data packaging and unpackaging:
–AKA marshalling and unmarshalling: putting data into a form suitable for transmission.
–Example: Sun's XDR.
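
A sketch of what a stub does, using Python's struct module: the arguments are packed into a fixed, network-byte-order layout that the other side unpacks. The wire format here (a length-prefixed string, an int, a bool) is invented for the example and is not XDR, but it shows the idea of marshalling.

    import struct

    def marshal(name: str, count: int, flag: bool) -> bytes:
        data = name.encode("utf-8")
        # big-endian: 4-byte string length, the string bytes, a 4-byte int, a bool
        return struct.pack(f">I{len(data)}si?", len(data), data, count, flag)

    def unmarshal(buf: bytes):
        (length,) = struct.unpack_from(">I", buf, 0)
        name, count, flag = struct.unpack_from(f">{length}si?", buf, 4)
        return name.decode("utf-8"), count, flag

    packet = marshal("P", 5, False)
    print(unmarshal(packet))    # ('P', 5, False)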

Binding
How does the client determine where the server is and which procedure to call?
–This is a "resource discovery" problem.
 ▪A name service advertises servers and services.
 ▪Example: Birrell et al. use Grapevine.
Early versus late binding:
–Early: the server address and procedure name are hard-coded in the client.
–Late: the client goes to the name service.
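
A toy illustration of late binding: instead of hard-coding the server address, the client asks a name service at call time. The registry here is just an in-process dictionary standing in for something like Grapevine; the service name and address are made up for the example.

    REGISTRY = {}                                   # service name -> (host, port)

    def advertise(service: str, host: str, port: int):
        REGISTRY[service] = (host, port)            # the server exports its interface

    def bind(service: str):
        try:
            return REGISTRY[service]                # late binding: resolved at call time
        except KeyError:
            raise LookupError(f"no server advertises {service!r}")

    # Server side, at start-up:
    advertise("FileService", "10.0.0.7", 9000)

    # Client side, when it first makes a call:
    host, port = bind("FileService")
    print(f"calling FileService at {host}:{port}")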

Synchronous & Asynchronous RPC
(Diagram: client and server timelines. With synchronous RPC the client blocks until the server's reply arrives; with asynchronous RPC the client continues executing after issuing the request.)

RPC Performance
Sources of overhead:
–Data copying.
–Scheduling and context switches.
Lightweight RPC:
–The paper shows that most invocations take place on a single machine.
–LW-RPC improves RPC performance for the local case.
–It optimizes data copying and thread scheduling for the local case.

LW-RPC 1
Argument copying:
–Conventional RPC: 4 copies (2 on call and 2 on return), copying between kernel and user space.
–LW-RPC: a common data area (the A-stack) is shared by the client and server and used to pass parameters and results; it is accessed by the client or the server, one at a time.

LW-RPC 2
The A-stack avoids copying between kernel and user space.
The client and server share the same thread: fewer context switches (like regular calls).
(Diagram: the client copies arguments onto the A-stack and traps into the kernel; the kernel upcalls into the server domain; the server executes and returns.)
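
As a loose analogy to the A-stack (not the kernel-managed mechanism itself), the sketch below passes arguments and results through one shared buffer instead of copying them through separate message buffers; multiprocessing.shared_memory stands in for the pairwise-mapped argument area, and there is no trap or thread hand-off here.

    import struct
    from multiprocessing import shared_memory

    shm = shared_memory.SharedMemory(create=True, size=8)   # the shared "A-stack"

    # Client: place the two integer arguments directly in the shared area.
    struct.pack_into(">ii", shm.buf, 0, 20, 22)

    # Server: read the arguments in place, write the result back in place.
    a, b = struct.unpack_from(">ii", shm.buf, 0)
    struct.pack_into(">i", shm.buf, 0, a + b)

    # Client: pick up the result from the same area; nothing crossed a copy step twice.
    (result,) = struct.unpack_from(">i", shm.buf, 0)
    print(result)    # 42

    shm.close()
    shm.unlink()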