CMPT 401 Summer 2007 Dr. Alexandra Fedorova Lecture XVIII: Concluding Remarks

Outline
Discuss "A Note on Distributed Computing" by Jim Waldo et al.
Jim Waldo:
– Distinguished Engineer at Sun Microsystems
– Chief architect of Jini
– Adjunct professor at Harvard

A Note on Distributed Computing
Distributed computing is fundamentally different from local computing.
The two paradigms are so different that trying to make them look the same is counterproductive:
– You'd end up with distributed applications that aren't robust to failures
– Or with local applications that are more complex than they need to be
Most programming environments for distributed systems attempt to mask the difference between local and remote invocation
– But that masking is not what's hard about distributed computing…

Key Argument
Achieving interface transparency in distributed systems is unreasonable:
– Distributed systems have different failure modes than local systems
– Handling those failures properly requires a certain kind of interface
– Therefore, distributed systems must be accessed via different interfaces
– Those interfaces would be overkill for local systems

Differences Between Local and Distributed Applications
Latency
Memory access
Partial failure and concurrency

Latency
A remote method call takes longer to execute than a local method call.
If you build your application without taking this into account, you are doomed to have performance problems.
Suppose you disregard local/remote differences:
– You build and test your application using local objects
– You decide later which objects are local and which are remote
– You find out that if frequently accessed objects are remote, your performance sucks
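To make the slide's scenario concrete, here is a minimal Java sketch; the Account interface and both of its methods are hypothetical, not from the paper. Code written against supposedly local objects tends to be chatty, and every call silently becomes a network round-trip once the object turns out to be a remote proxy.

```java
// Hypothetical interface, written as if the object were always local.
interface Account {
    void deposit(long amount);
    void depositAll(long[] amounts);   // batch call: only exists if latency
                                       // was considered when designing the interface
}

class LatencyDemo {
    // If 'account' is actually a remote proxy, this loop pays one network
    // round-trip per element.
    static void creditChatty(Account account, long[] amounts) {
        for (long a : amounts) {
            account.deposit(a);
        }
    }

    // The batched variant pays one round-trip in total, but only because
    // the interface exposes a batch operation in the first place.
    static void creditBatched(Account account, long[] amounts) {
        account.depositAll(amounts);
    }
}
```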

Latency (cont.)
One way to overcome the latency problem:
– Make available tools that allow the developer to debug performance
– Understand which components are slowing down the system
– Make recommendations about which components should be local
But can we be sure that such tools would be available? (Do you know of a good one?)
This is an active research area, which means this is hard!

Memory Access
A local pointer does not make sense in a remote address space.
What are the solutions?
– Create a language where all memory access is managed by a runtime system (e.g., Java), so that everything is a reference
  But not everyone uses Java
– Force the programmer to access memory in a way that does not use pointers (in C++ you can do both)
  But not all programmers are well behaved
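A small illustrative sketch in Java; the Node class is hypothetical. The point is that a raw machine address cannot be shipped to another address space, so managed runtimes pass either a remote reference or a serialized copy, and the pointer semantics change accordingly.

```java
import java.io.Serializable;

// Locally, passing a Node hands the callee a reference to this very object,
// so mutations are visible to the caller. Sent to another address space
// (for example as an RMI argument), the object graph is serialized and the
// callee works on a copy: no raw address would mean anything over there.
class Node implements Serializable {
    int value;
    Node next;   // a managed reference, never a machine address
}
```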

Memory Access and Latency: The Verdict
Conceptually, it is possible to mask the difference between local and distributed computing with respect to memory access and latency.
Latency:
– Develop your application without consideration for object locations
– Decide on object locations later
– Rely on good debugging tools to determine the right locations
Memory access:
– Enforce memory access through the underlying management system
But masking these differences is difficult, so it is not clear whether we can realistically expect them to be masked.

Partial Failure
One component has failed while others keep operating.
You don't know how much of the computation has actually completed; this is unique to distributed systems:
– Has the server failed, or is it just slow?
– Did it update my bank account before it failed?
With local computing, a function can also fail, or a system may block or deadlock, but:
– You can always find out what's happening by asking the operating system or the application
– In distributed computing, you cannot always find out what happened, because you may be unable to communicate with the entity in question

Concurrency
Aren't local multithreaded applications subject to the same issues as distributed applications?
Not quite:
– In local programming, a programmer can always force a certain order of operations
– In distributed computing this cannot be done
– In local programming, the underlying system provides synchronization primitives and mechanisms
– In distributed systems, these are not easily available, and the system providing the synchronization infrastructure may itself fail
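A minimal local sketch, assuming a hypothetical EventLog class shared by several threads: the JVM's built-in synchronized keyword lets a local programmer force mutual exclusion and a definite ordering, whereas across machines there is no comparable built-in primitive, and any lock service you add is itself a component that can fail.

```java
// Hypothetical shared structure used by several threads in one process.
class EventLog {
    private final StringBuilder entries = new StringBuilder();

    // The local runtime guarantees mutual exclusion here; a caller can also
    // impose ordering, e.g. by synchronizing on the log across several calls.
    synchronized void append(String line) {
        entries.append(line).append('\n');
    }

    synchronized String snapshot() {
        return entries.toString();
    }
}
```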

So What Do We Do?
Design the right interfaces.
Interfaces must allow the programmer to handle errors that are unique to distributed systems.
For example, a read() system call:
– Local interface: int read(int fd, char *buf, int size)
– Remote interface: int read(int fd, char *buf, int size, long timeout)
Error codes are expanded to indicate timeout or network failure.
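The same contrast rendered as Java interfaces rather than C signatures; LocalReader, RemoteReader, and RemoteReadException are hypothetical names used only for illustration. The remote variant takes a timeout and widens the set of failures the caller is forced to consider.

```java
import java.io.IOException;

// Local flavour: the only failures are ordinary local I/O errors.
interface LocalReader {
    int read(byte[] buf, int size) throws IOException;
}

// Hypothetical exception signalling a timeout or a network-level failure.
class RemoteReadException extends IOException {
    RemoteReadException(String message) { super(message); }
}

// Remote flavour: the caller must supply a timeout and handle failure
// modes that simply do not exist in the local case.
interface RemoteReader {
    int read(byte[] buf, int size, long timeoutMillis) throws RemoteReadException;
}
```

The widened signature is exactly the "certain interface" the key argument calls for: the extra parameter and the extra exception force the caller to think about distribution.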

But Wait… Can't You Unify Interfaces?
Can't you use the beefed-up remote interface even when programming local applications?
Then you wouldn't need two different sets of interfaces.
You could, but:
– Local programming would become a nightmare
– This defeats the purpose of unifying the local and distributed paradigms: instead of making distributed programming simpler, you'd be making local programming more complex

So What Does Jim Suggest?
Design objects with local interfaces.
Add an extension to the interface if the object is to be distributed.
The programmer will be aware of the object's location.
How is this actually done? Recall RMI:
– A remote object must implement the Remote interface
– Methods of a remote interface declare RemoteException, so code invoking a remote object must catch it
– But the same object can also be used locally, without specifying that it implements Remote
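A minimal Java RMI sketch of that pattern; BankAccount, RemoteBankAccount, and SimpleAccount are hypothetical names, while java.rmi.Remote and java.rmi.RemoteException are the real API. The local interface stays clean, and the remote extension is where distribution-specific failures surface.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Local interface: no distribution concerns in the signatures.
interface BankAccount {
    long balance();
    void deposit(long amount);
}

// Remote extension: same operations, but extending Remote and declaring
// RemoteException makes the possibility of remote failure explicit.
interface RemoteBankAccount extends Remote {
    long balance() throws RemoteException;
    void deposit(long amount) throws RemoteException;
}

// One implementation can serve both roles: local callers use it directly
// through BankAccount; remote callers reach it through an RMI stub
// (for example after exporting it with UnicastRemoteObject.exportObject).
class SimpleAccount implements BankAccount, RemoteBankAccount {
    private long balance;

    public synchronized long balance() { return balance; }

    public synchronized void deposit(long amount) { balance += amount; }
}
```

Callers that only ever hold a BankAccount reference never see RemoteException; callers that go through RemoteBankAccount are forced to deal with it, which is exactly the location awareness the slide argues for.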

Summary
Distributed computing is fundamentally different from local computing because of its different failure modes.
By making distributed interfaces look like local interfaces, we diminish our ability to handle those failures properly; the result is brittle applications.
To handle those failures properly, interfaces must be designed in a certain way.
Therefore, remote interfaces must be different from local interfaces (unless you want to make local interfaces unnecessarily complicated).