Coordination Models and Languages

Coordination Models and Languages Part I: Coordination Languages and Linda Part II: Technologies based on Tuple space concept beyond Linda Part III: Comparison of coordination languages. Conclusions.

Coordination Languages and Linda Natalya Abraham CS854

Problem In concurrent and distributed programming: We prefer to look at each component as a black box. We need a mechanism for controlling interactions between the components.

Candidate technologies: Concurrent OOP Concurrent logic programming Functional programming

Concurrent Object Oriented Programming: Limitations Based on message-passing. The process structure - the number of processes and their relationships - determines the program structure. Concurrent Logic Programming: Limitations Too policy-laden and inflexible to serve as a good basis for most parallel programs. Functional Programming: Limitations Fails to provide the expressivity we need. Gains nothing in terms of machine independence.

The way out - Coordination “Coordination is managing dependencies between activities.” (From Coordination Theory) “Coordination is the process of building programs by gluing together active pieces.” (Carriero and Gelernter, Linda creators)

Programming = Computation + Coordination (Carriero and Gelernter, Linda creators)

In programming, coordination is expressed by: Coordination models A triple (E, L, M) E – entities being coordinated (agents, processes, tuples) L – media used to coordinate the entities (channels, shared variables, tuple spaces) M – semantic framework of the model (guards, synchronization constraints) Coordination languages “The linguistic embodiment of a coordination model”

Coordination Languages must support: Separation between computation and coordination Distribution (decentralization) Dynamics (no need to link the activities before runtime) Interoperability Encapsulation (implementation details are hidden from other components)

Leading coordination models and languages Tuple Space Model (coordination through Shared Data Space) Linda, TSpaces, JavaSpaces, Jini IWIM (coordination through channels) Manifold

Tuple spaces and Linda Based on generative communication: processes communicate by inserting data objects (called tuples) into, or retrieving them from, a shared data space (called a tuple space). Interoperability is achieved: Linda places no constraints on the underlying language or platform.

Tuple Space Concept (diagram: processes attached to a shared tuple space, inserting tuples with out() and eval() and retrieving them with in()/rd())

Tuple space operations Tuple is a series of typed fields. Example: (“a string”, 15.01, 17) (10) Operations out (t) insert a tuple t into the Tuple space (non-blocking) in(t) find and remove a “matching” tuple from the tuple space; block until a matching tuple is found rd(t) like in(t) except that the tuple is not removed eval(t) add the active tuple t to the tuple space The model is structurally recursive: a tuple space can be one field of a tuple.
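The four operations above can be sketched as a minimal in-memory tuple space. This is an illustrative Python model, not the Linda runtime: `None` in a template plays the role of a formal (wildcard) field, a `Condition` provides the blocking behavior of in/rd, and eval() is omitted.

```python
import threading

class TupleSpace:
    """Minimal sketch of a Linda tuple space (assumed model, not real Linda)."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    @staticmethod
    def _matches(template, tup):
        # Same arity, and every non-wildcard field must be equal.
        return len(template) == len(tup) and all(
            t is None or t == f for t, f in zip(template, tup))

    def out(self, tup):
        # out(t): non-blocking insert.
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def in_(self, template):
        # in(t): blocking destructive read ('in' is a Python keyword, hence in_).
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._matches(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def rd(self, template):
        # rd(t): like in(t), but the tuple stays in the space.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._matches(template, tup):
                        return tup
                self._cond.wait()

ts = TupleSpace()
ts.out(("V", 1, 42))
print(ts.rd(("V", 1, None)))   # -> ('V', 1, 42), tuple stays in the space
print(ts.in_(("V", None, None)))  # -> ('V', 1, 42), tuple is removed
```

Because in_() and rd() loop inside the condition variable, a caller simply blocks until some other thread's out() deposits a matching tuple, which is exactly the synchronization the slides rely on in the later examples.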

Tuple matching Let t(i) denote the ith field in the tuple t. A tuple t given in an in(t) or rd(t) operation “matches” a tuple t’ in the tuple space iff: 1. t and t’ have the same number of fields, and 2. for each field i, if t(i) is a value then t(i) = t’(i), or if t(i) is of the form ?x then t’(i) is a valid value for the type of variable x. If more than one tuple in the tuple space matches, then one is selected nondeterministically. As a result of a successful match, if t(i) is of the form ?x, then x := t’(i).
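The two matching rules can be written out directly. In this sketch a formal field ?x is modeled (an assumption of the example) as a Python type object, so "t’(i) is a valid value for the type of x" becomes an isinstance check.

```python
def matches(template, tup):
    """Linda's matching rule, sketched: a template field is either an
    actual value (must be equal) or a formal ?x, modeled here as a
    Python type that the corresponding field must be an instance of."""
    if len(template) != len(tup):
        return False                     # rule 1: same number of fields
    for t, f in zip(template, tup):
        if isinstance(t, type):          # formal ?x: types must agree
            if not isinstance(f, t):
                return False
        elif t != f:                     # actual value: fields must be equal
            return False
    return True

assert matches(("V", 2, float), ("V", 2, 3.14))      # ?x would bind 3.14
assert not matches(("V", 2, float), ("V", 2, "hi"))  # wrong type for ?x
assert not matches(("V", 2), ("V", 2, 3.14))         # arity mismatch
```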

Example of Tuple matching N-element vector stored as n tuples in the Tuple space: (“V”, 1, FirstElt) (“V”, 2, SecondElt) … (“V”, n, NthElt) To read the jth element and assign it to x, use rd (“V”, j, ?x) To change the ith element, use in (“V”, i, ?OldVal) out (“V”, i, NewVal); // Note: a tuple cannot be updated in place; it must be removed and reinserted

Example 1: Sending and Receiving messages

Sending and Receiving messages (Cont.) Manifold: p.out-> q.in Linda: channel (p, out, q, in) { while (1) { in (p, out, Index, Data); out (q, in, Index, Data); } }

Sending and Receiving messages (Cont.) The message-passing model contains three separate parts: Tag (message Id) Target Communication In Linda, a tuple is a series of fields, any of which may be a tag (message Id), a target (address), or communication (data).

Sending and Receiving messages (Cont.) In message-passing model a process and a message are separate structures. In Linda, a process becomes a tuple.

Example 2: Semaphores Semaphores Initialize: out(“sem”) wait or P-operation: in(“sem”) signal or V-operation: out(“sem”) For counting semaphore execute out(“sem”) n times
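The semaphore encoding above can be run directly if we let a `queue.Queue` stand in for the tuple space (an assumption of this sketch, which works because the semaphore only ever uses identical (“sem”) tuples): put() plays out(“sem”) and a blocking get() plays in(“sem”).

```python
import queue
import threading

sem = queue.Queue()

N = 2
for _ in range(N):
    sem.put("sem")        # initialize: out("sem") executed n times

counter = 0               # how many threads are inside the critical region
peak = 0                  # the most that were ever inside at once
lock = threading.Lock()

def worker():
    global counter, peak
    sem.get()             # wait / P-operation: in("sem") blocks for a token
    with lock:
        counter += 1
        peak = max(peak, counter)
    with lock:
        counter -= 1
    sem.put("sem")        # signal / V-operation: out("sem")

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert peak <= N          # never more than n holders of the counting semaphore
```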

Example 3: Loop Turn a conventional loop into a parallel loop – all instances of <something> execute simultaneously. for <loop control> <something> In Linda: for <loop control> // creates many processes to execute eval(“this loop”, something( )); // the loop body in parallel for <loop control> // forces all executions to complete in(“this loop”, 1); // before proceeding where something( ) is a function that returns 1, so each finished iteration leaves a (“this loop”, 1) tuple behind for the second loop to collect.

Example 4: Masters and Workers Master divides work into discrete tasks and puts them into global space (tuple space) Workers retrieve tasks, execute them, and put results back into global space Workers learn that the work is complete when some condition is met Master gathers results from global space

Masters and Workers (Cont.) master() { for all tasks { … /* build task structure */ out (“task”, task_structure); } for all tasks { in (“result”, ? &task_id, ? &result_structure); … /* collect result */ } } worker() { while (inp (“task”, ? &task_structure)) { … /* execute task */ out (“result”, task_id, result_structure); } }
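A runnable sketch of this master/worker pattern, with two queues standing in for the (“task”, …) and (“result”, …) tuples; the squaring task, worker count, and None poison pill (replacing inp's non-blocking failure) are assumptions of the example.

```python
import queue
import threading

tasks = queue.Queue()     # plays the ("task", ...) tuples
results = queue.Queue()   # plays the ("result", ...) tuples

def worker():
    while True:
        task_id = tasks.get()            # inp("task", ? &task_structure)
        if task_id is None:              # poison pill: no tasks remain
            break
        results.put((task_id, task_id * task_id))  # out("result", ...)

NUM_WORKERS = 3
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for task_id in range(10):                # master: out("task", ...) per task
    tasks.put(task_id)
for _ in range(NUM_WORKERS):             # one pill per worker
    tasks.put(None)

gathered = {}
for _ in range(10):                      # master: in("result", ...) per task
    task_id, result = results.get()
    gathered[task_id] = result

for t in threads:
    t.join()

print(sorted(gathered.items())[:3])      # -> [(0, 0), (1, 1), (2, 4)]
```

As in the Linda version, the master never addresses a particular worker: any idle worker may claim any task, which is what gives the pattern its natural load balancing.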

Example 5: Barrier Synchronization Each process within some group must wait at a barrier until all processes in the group have reached the barrier; then all can proceed. Set up barrier: out (“barrier”, n); Each process does the following: in(“barrier”, ? val); out(“barrier”, val-1); rd(“barrier”, 0).
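The barrier encoding above can be sketched with a single shared counter cell guarded by a condition variable: arrive() performs the in/out decrement, and waiting for the count to reach zero plays the role of the blocking rd(“barrier”, 0). The cell class and thread bookkeeping are assumptions of this sketch.

```python
import threading

class BarrierCell:
    """One ("barrier", count) tuple, modeled as a guarded counter."""

    def __init__(self, n):
        self.count = n                 # set up: out("barrier", n)
        self.cond = threading.Condition()

    def arrive(self):
        with self.cond:
            self.count -= 1            # in("barrier", ?val); out("barrier", val-1)
            self.cond.notify_all()
            while self.count != 0:     # rd("barrier", 0) blocks until count is 0
                self.cond.wait()

N = 4
barrier = BarrierCell(N)
passed = []
lock = threading.Lock()

def process(i):
    barrier.arrive()                   # no thread proceeds before all N arrive
    with lock:
        passed.append(i)

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(passed) == N                # every process made it past the barrier
```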

Example 6: Dining Philosophers Problem phil(i) int i; { while (1) { think(); in (“room ticket”); in (“chopstick”, i); in (“chopstick”, (i+1) % Num); eat(); out (“chopstick”, i); out (“chopstick”, (i+1) % Num); out (“room ticket”); } } initialize() { int i; for (i = 0; i < Num; i++) { out (“chopstick”, i); eval ( phil(i) ); if (i < (Num-1)) out (“room ticket”); } }

Example 7: Client-Server server() { int index = 1; … while (1) { in (“request”, index, ? req); out (“response”, index++, response); } } client() { int index; … in (“server index”, ? index); out (“server index”, index + 1); out (“request”, index, request); in (“response”, index, ? response); }

Linda achieves: Nondeterminism Structured naming. Similar to “select” in relational DBs, pattern matching in AI. in(P, i:integer, j:boolean) in(P, 2, j:boolean) Time uncoupling (communication between time-disjoint processes) Distributed sharing (variables shared between disjoint processes)

Linda: Limitations The language, although intuitive, is minimal Security (no encapsulation) New problems: how is the tuple space to be implemented in the absence of shared memory? Linda is not fault-tolerant - Processes are assumed not to fail - If a process fails, its tuples can be lost - Read operations are blocking, with no timeout