Tuple Spaces and JavaSpaces CS 614 Bill McCloskey.

Tuple Spaces
- A flexible technique for parallel and distributed computing
- Similar to message passing
- Data exists in a "tuple space"
- All processors can access the space

View of a Tuple Space
[Figure: a tuple space containing (req, P, 7), (rsp, Q, 8.1), (A, 77), and (B); Process 1 performs Out(B), Process 2 performs Out(rsp, Q, 8.1), Process 3 performs In(X, 77)]
- A tuple t is inserted into TS using Out(t)
- A tuple t is removed from the tuple space using In(t)
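The Out/In behavior above can be sketched in a few lines of Java. This is a minimal, single-JVM illustration, not a real Linda implementation: the class name and the use of null as the "formal parameter" wildcard are assumptions for the sketch.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// A minimal single-JVM sketch of a tuple space:
// out(t) inserts a tuple; in(pattern) blocks until a matching tuple can be removed.
public class TinyTupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
        notifyAll(); // wake any callers blocked in in()
    }

    // A null in the pattern acts as a formal parameter (wildcard, filled from the tuple);
    // non-null values are actual parameters and must match exactly.
    public synchronized Object[] in(Object... pattern) throws InterruptedException {
        while (true) {
            for (Iterator<Object[]> it = tuples.iterator(); it.hasNext(); ) {
                Object[] t = it.next();
                if (matches(pattern, t)) { it.remove(); return t; }
            }
            wait(); // nothing matches yet: block until a new tuple arrives
        }
    }

    private static boolean matches(Object[] pattern, Object[] tuple) {
        if (pattern.length != tuple.length) return false;
        for (int i = 0; i < pattern.length; i++)
            if (pattern[i] != null && !pattern[i].equals(tuple[i])) return false;
        return true;
    }

    public static void main(String[] args) throws Exception {
        TinyTupleSpace ts = new TinyTupleSpace();
        ts.out("rsp", "Q", 8.1);
        Object[] t = ts.in("rsp", null, null); // formals filled from the matched tuple
        System.out.println(t[1] + " " + t[2]); // prints Q 8.1
    }
}
```

Read (look without removing) would be the same loop without the `it.remove()` call.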

Tuple Types: Simple Case
- A tuple is inserted into TS using Out(P, x, y, z) (assume x, y, z are integers)
- The tuple is removed from TS using In(P, a:integer, b:integer, c:integer)
- The result: a=x, b=y, c=z
- P is the name of the tuple
- There is also Read, similar to In, but the tuple is not removed from TS

Formal vs. Actual Parameters
- Parameters of the form "p:t" are formal parameters
- Other parameters are actual parameters
- In and Out accept both formal and actual parameters
- Example: Op(Req, 77.2, i:integer, true, s:string)

Structured Naming
- An actual parameter to In forms part of the name of the tuple to be found
- Formal parameters are filled with the other values from the tuple
- Example: In(P, 2, j:boolean, 77.1) requests a tuple with structured name "P,2,,77.1"

Structured Naming
- Out may also have formal parameters
- Example: the call Out(A, 4, j:integer) is made
  - A call of In(A, i:integer, 77) finds this tuple and sets i=4
  - A call of In(A, i:integer, 88) also finds it
- Formal parameters to Out may never be matched with formal parameters to In!
- The scope of the parameter is restricted to the Out call itself

Concurrency
- If multiple tuples are available to an In call, one is selected nondeterministically
- If nothing is available, In blocks
- Tuple operations are atomic
- A simple shared variable update:
    In(Var, value:integer)
    Out(Var, new_value)
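The atomicity argument behind the In/Out update pair can be demonstrated concretely. In this sketch the shared variable lives in the space as a single (Var, value) tuple; each update is In followed by Out, and because In removes the tuple, concurrent updaters are excluded in between. A one-element BlockingQueue stands in for the space restricted to the Var tuple (a simplifying assumption, not Linda's API).

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Atomic shared-variable update in the tuple-space style:
// the variable is a single (Var, value) tuple; In removes it, Out puts it back.
public class SharedVar {
    static int run(int nThreads, int iters) throws Exception {
        BlockingQueue<Integer> counter = new LinkedBlockingQueue<>();
        counter.put(0); // Out(Var, 0): initialize the variable
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        for (int i = 0; i < nThreads; i++) {
            pool.submit(() -> {
                try {
                    for (int k = 0; k < iters; k++) {
                        int v = counter.take(); // In(Var, value:integer) removes the tuple,
                        counter.put(v + 1);     // Out(Var, new_value) -- no one else can
                    }                           // read the variable in between
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return counter.take();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(4, 1000)); // prints 4000: no lost updates
    }
}
```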

Properties
- A little like message passing, but:
  - Messages can stay alive after receipt
  - Messages aren't directed to a certain party
- A little like shared memory, but:
  - Structured
  - Operations are atomic
- Space uncoupling
- Time uncoupling

Active Monitors
- Similar to a monitor, but operations are run in a separate process
- Waits for tuples to appear with commands for the monitor to run
- Each command is run atomically
- Basically, just doing message passing
- Messages are buffered up in TS during processing

Locking
- Useful primitives are easy to write
- A mutex:
  - Lock: In(L)
  - Unlock: Out(L)
- A semaphore:
  - Initialize the semaphore to n by writing n tuples
  - Decrement by taking one of the tuples
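The semaphore recipe above translates almost directly into code. In this sketch the "space" holds n identical (S) tuples; P() is In(S) (remove one, blocking if none remain) and V() is Out(S) (add one back). A BlockingQueue stands in for the bag of identical tuples; the class and method names are invented for the example.

```java
import java.util.concurrent.LinkedBlockingQueue;

// A counting semaphore built from tuple-space operations:
// n identical (S) tuples in the space encode n available permits.
public class TupleSemaphore {
    private final LinkedBlockingQueue<String> space = new LinkedBlockingQueue<>();

    public TupleSemaphore(int n) {
        for (int i = 0; i < n; i++) space.add("S"); // Out(S), n times
    }

    public void p() throws InterruptedException { space.take(); } // In(S): blocks at zero
    public void v() { space.add("S"); }                           // Out(S)
    public int available() { return space.size(); }               // remaining (S) tuples

    public static void main(String[] args) throws Exception {
        TupleSemaphore s = new TupleSemaphore(2);
        s.p(); s.p(); s.v(); s.p();        // never blocks: a permit is always available
        System.out.println(s.available()); // prints 0
    }
}
```

A mutex is the n=1 special case: Lock is p(), Unlock is v().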

Distributed Naming
- Tuples not addressed to a certain node
- Could have a cluster of processes accepting requests
- A tuple (Request, …) is served by the first available process in the cluster
- No need for a dispatcher
- Tuple names can be used for distributed addressing

Distributed Naming
[Figure: a client writes a (Request, … ReqInfo …) tuple into the space; any one of several servers takes it]
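The dispatcher-free cluster above can be sketched with threads standing in for cluster nodes. Whichever worker is free takes the next (Request, …) tuple; no process is addressed directly. A shared BlockingQueue stands in for the space restricted to tuples named "Request" (an assumption of the sketch, as are all names here).

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Distributed naming: requests go into the space; the first available
// worker in the cluster serves each one. No dispatcher process exists.
public class RequestCluster {
    static Set<String> serve(List<String> requests, int workers) throws Exception {
        BlockingQueue<String> space = new LinkedBlockingQueue<>(requests); // Out(Request, ...)
        Set<String> served = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(requests.size());
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        String req = space.take(); // In(Request, req:string)
                        served.add("served " + req);
                        done.countDown();
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        done.await();
        pool.shutdownNow(); // interrupt workers blocked in take()
        return served;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serve(List.of("a", "b", "c"), 3).size()); // prints 3
    }
}
```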

Continuation Passing
- A programming model
- Data flows through tuple space from one process to another
- Process writes a tuple, then blocks on a "reply" tuple
- Reply tuple determines the next action

Continuation Passing
[Figure: process A writes a (Q, …) tuple, which B takes; B writes an (R, …) tuple back, which A takes]
- A decides what to do based on the reply R
- This is like a continuation which is being passed to A
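The A/B exchange above can be sketched as a round trip through the space: A writes a request tuple, blocks on the reply, and the reply drives its next step. Two BlockingQueues stand in for the (Q, …) and (R, …) tuples; the method name and reply format are invented for the example.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Continuation passing through a tuple space: A's "continuation" is carried
// by the reply tuple that B writes back.
public class ContinuationDemo {
    static String roundTrip(String request) throws Exception {
        BlockingQueue<String> q = new LinkedBlockingQueue<>(); // (Q, ...) tuples
        BlockingQueue<String> r = new LinkedBlockingQueue<>(); // (R, ...) tuples

        Thread b = new Thread(() -> { // process B
            try {
                String req = q.take();    // B: In(Q, req:string)
                r.put("handled:" + req);  // B: Out(R, reply) -- passes control back to A
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        b.start();

        q.put(request);          // A: Out(Q, request), then block on the reply
        String reply = r.take(); // A: In(R, reply:string); the reply decides A's next action
        b.join();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("work")); // prints handled:work
    }
}
```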

Implementation
- Linda implementations can be slow
- A tuple is usually stored on one processor, its "home"
- Other processors broadcast queries for the locations of tuples
- Could also use a hash function
- Having a single home guarantees atomicity
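The hash-function alternative mentioned above can be made concrete: hash the tuple's name to pick its home node, so any processor that knows the name can route In/Out there without a broadcast, and the single home serializes operations on that tuple. The function and node count below are hypothetical, not part of any real Linda implementation.

```java
// Choosing a tuple's "home" node by hashing its name (first field).
// All tuples with the same name land on the same node, which can then
// serialize In/Out on them -- preserving atomicity without a broadcast.
public class TupleHome {
    static int homeNode(Object[] tuple, int numNodes) {
        // Hash only the name field, so an In() that knows the name
        // can be routed directly to the right node.
        return Math.floorMod(tuple[0].hashCode(), numNodes);
    }

    public static void main(String[] args) {
        Object[] t1 = {"Request", 7};
        Object[] t2 = {"Request", 42};
        // same name => same home node, regardless of the other fields
        System.out.println(homeNode(t1, 8) == homeNode(t2, 8)); // prints true
    }
}
```

The trade-off: broadcasting finds tuples matched only by non-name fields, while hashing on the name gives constant-cost routing but requires the name in every query.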

Linda
- A concurrent programming language
- Uses tuple spaces
- Tuple spaces are more than an API: they're linked to a programming style
- Linda makes it efficient to program using this style

Ordering
- In and Read are nondeterministic
- In some sense, a total ordering is not guaranteed
- Example: clients write command tuples into TS; replicated servers execute the commands; each server may read the commands in a different order
- Result: cannot use tuple spaces for agreement

Problems
- Linda is not fault-tolerant: processors are assumed not to fail
- If a processor fails, its tuples can be lost
- At worst, the entire system can fail
- If tuples are replicated, you run into standard agreement problems
- Linda offers no security

JavaSpaces
- A technology from Sun to add distributed computing to Jini
- "Ever since I first saw David Gelernter's Linda programming language almost twenty years ago, I felt that the basic ideas of Linda could be used to make an important advance in the ease of distributed and parallel programming." — Bill Joy

JavaSpaces Overview
- Java objects are stored in a JavaSpace
- Operations:
  - Write: works like Out
  - Read: works like Read
  - Take: works like In
  - Notify: notifies an object when a matching entry is added to the space

Typing
- Objects in the space have Java types
- Write copies an object into the space
- Read/Take/Notify(obj) look for an obj′ in the space satisfying:
  - type(obj′) is a subtype of type(obj)
  - obj.field ≠ null implies obj.field = obj′.field
- Fields that are only part of obj′ (due to subtyping) are not constrained
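The two matching conditions above can be sketched with reflection. This is an illustration of the matching semantics, not the real Jini API (a real JavaSpaces Entry works differently under the hood); the Account/Savings classes and the `matches` helper are invented for the example.

```java
import java.lang.reflect.Field;

// JavaSpaces-style matching: a template matches an entry if the entry's class
// is a subtype of the template's class and every non-null template field
// equals the corresponding entry field. Null template fields are wildcards.
public class EntryMatch {
    public static class Account {
        public String owner; public Integer balance;
        public Account(String o, Integer b) { owner = o; balance = b; }
    }
    public static class Savings extends Account {
        public Double rate; // exists only on the subtype: never constrained by an Account template
        public Savings(String o, Integer b, Double r) { super(o, b); rate = r; }
    }

    public static boolean matches(Object template, Object entry) throws Exception {
        // condition 1: type(entry) must be a subtype of type(template)
        if (!template.getClass().isAssignableFrom(entry.getClass())) return false;
        // condition 2: every non-null template field must equal the entry's field
        for (Field f : template.getClass().getFields()) {
            Object want = f.get(template);
            if (want != null && !want.equals(f.get(entry))) return false;
        }
        return true; // fields only on the entry's subtype stay unconstrained
    }

    public static void main(String[] args) throws Exception {
        Account template = new Account("alice", null);        // null balance = wildcard
        Savings entry = new Savings("alice", 100, 0.02);      // subtype of Account
        System.out.println(matches(template, entry));         // prints true
    }
}
```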

Timeouts and Liveness
- Timeouts improve liveness
- Operations like Take (In) and Read can time out
- Objects added to the space are removed after their "lease" expires
- Timeouts can prevent deadlock
- Leasing functions like garbage collection here

Transactions
- Operations can be bundled into atomic transactions
- Ongoing transactions don't affect each other
- Observers see transactions as occurring sequentially
- But different observers may see different orders of transactions

Reliability
- JavaSpaces can be implemented in different ways
- The specification doesn't require reliability
- Transactions preserve consistency when processes fail
- If the entire space fails, recovery is up to the implementation

Conclusions
- Tuple spaces provide a very simple model for distributed computing
- Fault-tolerance is hard to get right
- Distributed naming impairs security
- Multiple JavaSpaces? Inefficient