Copyright © 1995-2006 Clifford Neuman and Dongho Kim - UNIVERSITY OF SOUTHERN CALIFORNIA - INFORMATION SCIENCES INSTITUTE Advanced Operating Systems Lecture.


Advanced Operating Systems Lecture notes
Dr. Clifford Neuman
University of Southern California
Information Sciences Institute

Administration
–Office hours: no office hours today; Dr. Neuman is out of town.
–Reading report #1 will be posted by Tuesday.
–Class web page.

CSci555: Advanced Operating Systems
Lecture 2 – September 1, 2006
Dr. Tatyana Ryutov
University of Southern California
Information Sciences Institute

Outline: Communication Models
–General concepts.
–Message passing.
–Distributed shared memory (DSM).
–Remote procedure call (RPC) [Birrell et al.]
▪Lightweight RPC [Bershad et al.]
–DSM case studies:
▪IVY [Li et al.]
▪Linda [Carriero et al.]

Communication Models
Support for processes to communicate among themselves.
–Traditional (centralized) OSs provide local (within a single machine) communication support.
–Distributed OSs must provide support for communication across machine boundaries,
▪over a LAN or WAN.

Communication Paradigms
Two paradigms:
–Message passing (MP): processes communicate by sending messages.
–Distributed shared memory (DSM): communication through a “virtual shared memory”.

Message Passing
Basic communication primitives:
–Send message.
–Receive message.
Modes of communication:
–Synchronous versus asynchronous.
Semantics:
–Reliable versus unreliable.
(Figure: send enqueues onto the sending queue; receive dequeues from the receiving queue.)
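The send/receive primitives above can be sketched with Python's standard `queue` module standing in for the sending and receiving queues. The `Endpoint` class and its names are illustrative, not from any of the papers:

```python
import queue

# Each process endpoint owns a receive queue; send() deposits a message
# into the peer's queue, receive() dequeues from one's own queue.
class Endpoint:
    def __init__(self):
        self.inbox = queue.Queue()

    def send(self, peer, message):
        peer.inbox.put(message)          # returns once the message is queued

    def receive(self, block=True, timeout=None):
        return self.inbox.get(block=block, timeout=timeout)

a, b = Endpoint(), Endpoint()
a.send(b, "hello")
assert b.receive() == "hello"
```

With `block=True` this `receive` is the blocking (synchronous) variant; the asynchronous modes below change only how these two calls behave.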

Synchronous Communication
Blocking send:
–Blocks until the message is transmitted, or
–Blocks until the message is acknowledged.
Blocking receive:
–Waits for a message to be received.
Provides process synchronization.

Asynchronous Communication
Non-blocking send: the sending process continues as soon as the message is queued.
Blocking or non-blocking receive:
–Blocking receive, typically with:
▪Timeouts.
▪Threads.
–Non-blocking receive: the process proceeds while waiting for the message.
▪Message is queued upon arrival.
▪Process needs to poll or be interrupted.
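A non-blocking receive can be sketched as a poll that returns immediately when nothing has arrived yet (function name is illustrative):

```python
import queue

inbox = queue.Queue()

# Non-blocking receive: return the message if one is queued, otherwise
# return None so the process can keep working and poll again later.
def try_receive(q):
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

assert try_receive(inbox) is None    # nothing has arrived yet; keep working
inbox.put("data")                    # message queued upon arrival
assert try_receive(inbox) == "data"
```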

Reliability of Communication
Unreliable communication:
–“Best effort”: send and hope for the best.
–No ACKs or retransmissions.
–Application must provide its own reliability.
–Example: User Datagram Protocol (UDP).
▪Applications using UDP either don’t need reliability or build their own (e.g., UNIX NFS, DNS (both UDP and TCP), some audio or video applications).

Reliability of Communication
Reliable communication:
–Different degrees of reliability.
–Processes have some guarantee that messages will be delivered.
–Example: Transmission Control Protocol (TCP).
–Reliability mechanisms:
▪Positive acknowledgments (ACKs).
▪Negative acknowledgments (NACKs).
–Possible to build reliability atop an unreliable service (end-to-end argument).
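Building reliability atop an unreliable service can be sketched as stop-and-wait retransmission: keep resending until an ACK arrives. The channel, loss rate, and function names are all illustrative:

```python
import random
random.seed(1)  # fixed seed so the simulated losses are deterministic

# Simulated unreliable channel: drop the message (no ACK) half the time.
def unreliable_send(message, delivered):
    if random.random() < 0.5:
        return False                 # message lost, no ACK comes back
    delivered.append(message)
    return True                      # delivered and acknowledged

# Reliable layer: retransmit until acknowledged (or give up after max_tries).
def reliable_send(message, delivered, max_tries=20):
    for _ in range(max_tries):
        if unreliable_send(message, delivered):
            return True
    return False

delivered = []
assert reliable_send("m1", delivered)
assert delivered == ["m1"]           # eventually delivered despite losses
```

A real protocol also needs sequence numbers so that a lost ACK (rather than a lost message) does not produce duplicates at the receiver; that refinement is omitted here.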

Distributed Shared Memory
Motivated by shared-memory multiprocessors, which do physically share memory.
Abstraction used for sharing data among processes running on machines that do not share memory.
Processes think they read from and write to a “virtual shared memory”.

DSM
Two primitives: read and write.
The OS ensures that all processes see all updates.
–This happens transparently to the processes.
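A toy sketch of these two primitives: each node keeps a local replica, and a write is propagated to every replica so that all processes see all updates. The class, the synchronous propagation, and the use of a shared peer list are illustrative simplifications:

```python
# Toy DSM: read() is local; write() updates every node's replica so all
# processes observe the update (here, synchronously and in order).
class DSMNode:
    def __init__(self, peers):
        self.store = {}
        self.peers = peers           # shared list of all participating nodes
        peers.append(self)

    def read(self, addr):
        return self.store.get(addr)

    def write(self, addr, value):
        for node in self.peers:      # propagate the update to every replica
            node.store[addr] = value

peers = []
n1, n2 = DSMNode(peers), DSMNode(peers)
n1.write("x", 42)
assert n2.read("x") == 42            # update is visible on the other node
```

A real DSM does this communication lazily and in units of pages, which is what the caching discussion below is about.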

DSM and MP
DSM is an abstraction!
–Gives programmers the flavor of a centralized memory system, a well-known programming environment.
–No need to worry about communication and synchronization.
But it is implemented atop MP:
–No physically shared memory.
–The OS takes care of the required communication.

Caching in DSM
For performance, DSM caches data locally.
–More efficient access (locality).
–But caches must be kept consistent.
–Caching is done in units of pages in page-based DSM.
Issues:
–Page size.
–Consistency mechanism.

Approaches to DSM
Hardware-based:
–Multiprocessor architectures with processor-memory modules connected by a high-speed LAN (e.g., Stanford’s DASH).
–Specialized hardware handles reads and writes and performs the required consistency mechanisms.

Approaches to DSM
Page-based:
–Example: IVY.
–DSM implemented as a region of the processor’s virtual memory; it occupies the same address-space range in every participating process.
–The OS keeps the DSM data consistent as part of page-fault handling.

Approaches to DSM
Library-based (or language-based):
–Example: Linda.
–A language, or language extensions.
–The compiler inserts appropriate library calls whenever processes access DSM items.
–Library calls access local data and communicate when necessary.

DSM Case Studies: IVY
Environment: “loosely coupled” multiprocessor.
–Memory is physically distributed.
–Memory-mapping managers (in the OS kernel):
▪Map local memories to the shared virtual space.
▪Local memory acts as a cache of the shared virtual space.
▪A memory reference may cause a page fault; the page is retrieved and consistency is handled.

IVY
Issues:
–Read-only versus writable data.
–Locality of reference.
–Granularity (1 KByte page size).
▪Bigger pages versus smaller pages.

IVY
Memory coherence strategies:
–Page synchronization:
▪Invalidation.
▪Write broadcast.
–Page ownership:
▪Fixed: a page is always owned by the same processor.
▪Dynamic.

IVY Page Synchronization
Invalidation:
–On a write fault, invalidate all copies and give the faulting process write access; it gets a copy of the page if it does not already have one.
–Problem: the page must be updated on reads.
Write broadcast:
–On a write fault, the fault handler writes to all copies.
–Expensive!
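The invalidation strategy can be sketched as follows: a write removes every other cached copy, so a later read on another node faults and fetches the fresh page. This toy manager (all names illustrative, not IVY's actual data structures) tracks one authoritative copy plus per-node cached copies:

```python
# Write-invalidate sketch: a write invalidates all other cached copies;
# subsequent reads on other nodes fault and fetch the current page.
class PageManager:
    def __init__(self):
        self.memory = {}    # authoritative page contents: page -> value
        self.cached = {}    # (node, page) -> that node's local copy

    def read(self, node, page):
        if (node, page) not in self.cached:          # read fault: fetch a copy
            self.cached[(node, page)] = self.memory.get(page)
        return self.cached[(node, page)]

    def write(self, node, page, value):
        # invalidate every cached copy of this page, then install the writer's
        self.cached = {k: v for k, v in self.cached.items() if k[1] != page}
        self.memory[page] = value
        self.cached[(node, page)] = value

mgr = PageManager()
mgr.write("A", 0, "v1")
assert mgr.read("B", 0) == "v1"   # B faults and fetches the page
mgr.write("A", 0, "v2")           # invalidates B's copy
assert mgr.read("B", 0) == "v2"   # B faults again and sees the new value
```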

IVY Memory Coherence
The paper discusses approaches to memory coherence in page-based DSM:
–Centralized: a single manager, residing on one processor, manages all pages.
–Distributed: multiple managers, on multiple processors, each managing a subset of the pages.

DSM Case Studies: Linda
Language-based approach to DSM.
Environment:
–Similar to IVY, i.e., loosely coupled machines connected via a fast broadcast bus.
–Instead of a shared address space, processes make library calls (inserted by the compiler) when accessing DSM.
–The libraries access local data and communicate to maintain consistency.

Linda
DSM: the tuple space (TS).
Basic operations:
–out(data): data added to the tuple space.
–in(data): removes matching data from the TS; destructive.
–read(data): same as “in”, but the tuple remains in the TS (non-destructive).

Linda Primitives: Examples
out(“P”, 5, false): tuple (“P”, 5, false) added to the TS.
–“P” is the name; the other components are data values.
–In the implementation reported in the paper, every node stores a complete copy of the TS.
–out(data) causes the data to be broadcast to every node.

Linda Primitives: Examples
in(“P”, int i, bool b): tuple (“P”, 5, false) removed from the TS.
–If a matching tuple is found locally, the local kernel tries to delete the tuple on all nodes.
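The three primitives and the template matching in the examples above can be sketched as a single-node toy tuple space. A Python type (e.g. `int`) in the template plays the role of a formal parameter like `int i`; the class and method names are illustrative (`in_` because `in` is a Python keyword), and the broadcast/deletion protocol across nodes is omitted:

```python
# Toy Linda tuple space: out() adds a tuple, in_() removes a matching
# tuple (destructive), read() matches without removing (non-destructive).
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        self.tuples.append(tup)

    def _match(self, tup, template):
        # a type in the template matches any value of that type (a "formal");
        # any other template field must be equal to the tuple's field (an "actual")
        return len(tup) == len(template) and all(
            (isinstance(f, type) and isinstance(a, f)) or a == f
            for a, f in zip(tup, template))

    def _find(self, template):
        for t in self.tuples:
            if self._match(t, template):
                return t
        return None

    def in_(self, *template):
        t = self._find(template)
        if t is not None:
            self.tuples.remove(t)
        return t

    def read(self, *template):
        return self._find(template)

ts = TupleSpace()
ts.out("P", 5, False)
assert ts.read("P", int, bool) == ("P", 5, False)   # tuple still in the TS
assert ts.in_("P", int, bool) == ("P", 5, False)    # removed now
assert ts.read("P", int, bool) is None
```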

Remote Procedure Call
Builds on MP.
Main idea: extend the traditional (local) procedure call to perform transfer of control and data across the network.
Easy to use: analogous to local calls.
But the procedure is executed by a different process, probably on a different machine.
Fits very well with the client-server model.

RPC Mechanism
1. Invoke RPC.
2. Calling process suspends.
3. Parameters passed across the network to the target machine.
4. Procedure executed remotely.
5. When done, results are passed back to the caller.
6. Caller resumes execution.
Is this synchronous or asynchronous?
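The six steps can be sketched end to end, with `pickle` standing in for marshalling and an in-process function call standing in for the network transport (the function names and the `PROCEDURES` table are illustrative):

```python
import pickle

# Server side: unpack the call packet, execute the named procedure, pack the result.
def server_dispatch(packet):
    name, args = pickle.loads(packet)
    result = PROCEDURES[name](*args)           # step 4: executed "remotely"
    return pickle.dumps(result)

# Client stub: marshal the call, "transmit" it, block until the reply returns.
def rpc_call(name, *args):
    packet = pickle.dumps((name, args))        # steps 1-3: marshal and send
    reply = server_dispatch(packet)            # caller is suspended meanwhile
    return pickle.loads(reply)                 # steps 5-6: unmarshal, resume

PROCEDURES = {"add": lambda a, b: a + b}       # exported procedures
assert rpc_call("add", 2, 3) == 5
```

Note that `rpc_call` does not return until the reply is back, which answers the question on the slide: plain RPC is synchronous.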

RPC Advantages
Easy to use: a well-known mechanism.
Abstract data type:
–Client-server model.
–Server as a collection of exported procedures on some shared resource.
–Example: file server.
Reliable.

RPC Semantics 1
Delivery guarantees.
“Maybe” call:
–Clients cannot tell for sure whether the remote procedure was executed or not, due to message loss, server crash, etc.
–Usually not acceptable.

RPC Semantics 2
“At-least-once” call:
–Remote procedure executed at least once, but maybe more than once.
–Retransmissions, but no duplicate filtering.
–OK for idempotent operations, e.g., reading data that is read-only.

RPC Semantics 3
“At-most-once” call:
–Most appropriate for non-idempotent operations.
–Remote procedure executed 0 or 1 times, i.e., exactly once or not at all.
–Uses retransmissions and duplicate filtering.
–Example: the Birrell et al. implementation.
▪Uses probes to check whether the server crashed.
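Duplicate filtering for at-most-once semantics can be sketched as a server-side table of call IDs: a retransmitted request returns the saved reply instead of re-running the (non-idempotent) procedure. All names here are illustrative:

```python
# At-most-once sketch: remember which call IDs have executed, and replay
# the cached reply for retransmissions instead of executing again.
executed = {}            # call_id -> cached reply
counter = {"n": 0}       # server state touched by a non-idempotent operation

def increment():
    counter["n"] += 1
    return counter["n"]

def server_handle(call_id):
    if call_id not in executed:      # duplicate filtering
        executed[call_id] = increment()
    return executed[call_id]

assert server_handle("call-1") == 1
assert server_handle("call-1") == 1   # retransmission: not executed again
assert counter["n"] == 1              # the side effect happened exactly once
```

In a real system the table must also be garbage-collected (e.g. when the client acknowledges the reply), which this sketch omits.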

RPC Implementation (Birrell et al.)
(Figure: on the caller machine, the user calls into the user stub, which packs the arguments; the RPC runtime transmits the call packet. On the callee machine, the RPC runtime receives the packet, and the server stub unpacks the arguments and calls the server procedure. On return, the server stub packs the result, the runtime transmits it back, and the user stub unpacks the result and returns to the caller.)

RPC Implementation 2
The RPC runtime mechanism is responsible for retransmissions and acknowledgments.
Stubs are responsible for data packaging and un-packaging,
–AKA marshalling and unmarshalling: putting data in a form suitable for transmission.
–Example: Sun’s XDR.
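Marshalling can be sketched with Python's `struct` module: typed values are packed into a flat big-endian byte string for transmission (big-endian is what XDR uses on the wire), then unpacked on the far side. The format and function names are illustrative, not XDR itself:

```python
import struct

# Marshal a call of the form (procedure id, two signed int arguments)
# into bytes suitable for transmission; ">" means big-endian (network order).
def marshal(proc_id, x, y):
    return struct.pack(">Iii", proc_id, x, y)

def unmarshal(data):
    return struct.unpack(">Iii", data)

wire = marshal(7, -2, 40)
assert isinstance(wire, bytes) and len(wire) == 12   # 3 x 4-byte fields
assert unmarshal(wire) == (7, -2, 40)
```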

Binding
How to determine where the server is? Which procedure to call?
–A “resource discovery” problem.
▪Name service: advertises servers and services.
▪Example: Birrell et al. use Grapevine.
Early versus late binding:
–Early: server address and procedure name hard-coded into the client.
–Late: go to the name service at call time.
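Late binding via a name service can be sketched as a lookup performed at call time rather than an address baked into the client (the registry, service names, and addresses below are all illustrative):

```python
# Toy name service: servers advertise themselves; clients look up the
# address at call time (late binding) instead of hard-coding it (early binding).
registry = {}

def advertise(service, address):
    registry[service] = address

def lookup(service):
    return registry.get(service)

advertise("file-server", ("10.0.0.5", 2049))
assert lookup("file-server") == ("10.0.0.5", 2049)   # resolved at call time
assert lookup("unknown-service") is None             # discovery can fail
```

Late binding lets the server move or be replicated without rebuilding clients, at the cost of a lookup (which can be cached).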

Synchronous & Asynchronous RPC
(Figure: client and server timelines; in synchronous RPC the client blocks until the reply arrives, while in asynchronous RPC the client continues immediately after sending the request.)

RPC Performance
Sources of overhead:
–Data copying.
–Scheduling and context switches.
Lightweight RPC:
–Measurements showed that most invocations took place on a single machine.
–LW-RPC: improve RPC performance for the local case.
–Optimizes data copying and thread scheduling for the local case.

LW-RPC 1
Argument copying:
–RPC: 4 copies, crossing between kernel and user space.
–LW-RPC: a common data area (the A-stack), shared by the client and server, is used to pass parameters and results; it is accessed by the client or the server one at a time.

LW-RPC 2
The A-stack avoids copying between kernel and user spaces.
Client and server share the same thread: fewer context switches (like regular calls).
(Figure: 1. the client copies arguments onto the A-stack; 2. it traps into the kernel; 3. the kernel upcalls into the server’s domain; 4. the server executes and returns.)