Coordination Models and Languages
Part I: Coordination Languages and Linda
Part II: Technologies based on the tuple space concept beyond Linda
Part III: Comparison of coordination languages. Conclusions

Coordination Languages and Linda
Natalya Abraham, CS854

Problem
In concurrent and distributed programming:
- We prefer to look at each component as a black box.
- We need a mechanism for controlling the interactions between components.

Candidate technologies:
- Concurrent object-oriented programming
- Concurrent logic programming
- Functional programming

Concurrent Object-Oriented Programming: Limitations
- Based on message passing: the process structure (the number of processes and their relationships) determines the program structure.
Concurrent Logic Programming: Limitations
- Too policy-laden and inflexible to serve as a good basis for most parallel programs.
Functional Programming: Limitations
- Fails to provide the expressivity we need.
- Gains nothing in terms of machine independence.

The way out – Coordination
“Coordination is managing dependencies between activities.” (from coordination theory)
“Coordination is the process of building programs by gluing together active pieces.” (Carriero and Gelernter, the creators of Linda)

Programming = Computation + Coordination (Carriero and Gelernter, Linda creators)

In programming, coordination is expressed by:
Coordination models – a triple (E, L, M):
- E – the entities being coordinated (agents, processes, tuples)
- L – the media used to coordinate the entities (channels, shared variables, tuple spaces)
- M – the semantic framework of the model (guards, synchronization constraints)
Coordination languages – “the linguistic embodiment of a coordination model”

Coordination languages must support:
- Separation between computation and coordination
- Distribution (decentralization)
- Dynamics (no need to link the activities before runtime)
- Interoperability
- Encapsulation (implementation details are hidden from other components)

Leading coordination models and languages
- Tuple space model (coordination through a shared data space): Linda, TSpaces, JavaSpaces, Jini
- IWIM (coordination through channels): Manifold

Tuple spaces and Linda
Based on generative communication: processes communicate by inserting data objects (called tuples) into, and retrieving them from, a shared data space (called the tuple space).
How interoperability is achieved: Linda places no constraints on the underlying language or platform.

Tuple Space Concept
[Diagram: several processes (P) interact with the tuple space, adding tuples with out() and eval() and retrieving them with in()/rd().]

Tuple space operations
A tuple is a series of typed fields. Examples: ("a string", 15.01, 17), (10)
Operations:
- out(t) – insert a tuple t into the tuple space (non-blocking)
- in(t) – find and remove a “matching” tuple from the tuple space; block until a matching tuple is found
- rd(t) – like in(t), except that the tuple is not removed
- eval(t) – add the active tuple t to the tuple space
The model is structurally recursive: a tuple space can be one field of a tuple.
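
A minimal sketch of how the four operations fit together (this assumes a C-Linda compiler – the ?var formal syntax is Linda, not plain C – and the tuple tags and function names here are illustrative, not from the original slides):

int consumer()
{
    int x;
    double y;
    rd("point", ?x, ?y);   /* copy the fields; the tuple stays in the space */
    in("point", ?x, ?y);   /* remove the tuple; blocks until one matches */
    return 0;              /* the active tuple becomes the passive tuple ("done", 0) */
}

void producer()
{
    eval("done", consumer());   /* spawn consumer as an active tuple */
    out("point", 17, 15.01);    /* non-blocking insert */
    in("done", 0);              /* wait until consumer has finished */
}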

Tuple matching
Let t(i) denote the ith field of tuple t. A tuple t given in an in(t) or rd(t) operation “matches” a tuple t’ in the tuple space iff:
1. t and t’ have the same number of fields, and
2. for each field i: if t(i) is a value, then t(i) = t’(i); or if t(i) is of the form ?x, then t’(i) is a valid value for the type of variable x.
If more than one tuple in the tuple space matches, one is selected nondeterministically.
As a result of a match, if t(i) is of the form ?x, then x := t’(i).

Example of tuple matching
An n-element vector stored as n tuples in the tuple space:
("V", 1, FirstElt)
("V", 2, SecondElt)
…
("V", n, NthElt)
To read the jth element and assign it to x, use
rd("V", j, ?x)
To change the ith element, use
in("V", i, ?OldVal);   // Note: it is impossible to update a tuple in place,
out("V", i, NewVal);   // so we delete the old tuple and add a new one
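
Wrapped as a helper, the delete/re-add idiom looks like this (a sketch assuming a C-Linda runtime and integer-valued elements; the function name is ours, not from the slides):

void update_element(int i, int new_val)
{
    int old_val;
    in("V", i, ?old_val);    /* remove the old tuple */
    out("V", i, new_val);    /* re-insert with the new value */
}

Because in() removes the tuple, concurrent rd("V", i, ?x) calls simply block until the updated tuple appears, so readers never observe a half-updated element.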

Example 1: Sending and Receiving messages

Sending and Receiving messages (Cont.)
Manifold: p.out -> q.in
Linda:
channel(p, out, q, in)
{
    int index = 1;
    while (1) {
        in(p, out, index, ?data);   /* take message number index from p's out port */
        out(q, in, index, data);    /* deliver it to q's in port */
        index++;                    /* preserve FIFO order */
    }
}

Sending and Receiving messages (Cont.)
The message-passing model contains three separate parts:
1. Tag (message id)
2. Target (address)
3. Communication (data)
In Linda, a tuple is a set of fields, any of which may serve as a tag (message id), a target (address), or the communication itself (data).

Sending and Receiving messages (Cont.) In message-passing model a process and a message are separate structures. In Linda, a process becomes a tuple.

Example 2: Semaphores
Initialize: out("sem")
wait (P operation): in("sem")
signal (V operation): out("sem")
For a counting semaphore, execute out("sem") n times.
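
A sketch of the counting-semaphore idea as helper functions (assuming a C-Linda runtime; the names are ours):

/* A counting semaphore of capacity n is simply n copies of the ("sem") tuple. */
void sem_init(int n)
{
    int i;
    for (i = 0; i < n; i++)
        out("sem");                     /* each tuple is one available unit */
}

void sem_wait(void)   { in("sem");  }   /* P: blocks while no "sem" tuple is left */
void sem_signal(void) { out("sem"); }   /* V: returns one unit */

Several independent semaphores can coexist by adding a name field, e.g. out("sem", name) and in("sem", name).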

Example 3: Loop
Turn a conventional loop into a parallel loop – all instances of the loop body execute simultaneously.
A conventional loop:
for (<loop control>) something();
In Linda:
for (<loop control>)                  // creates many processes to execute
    eval("this loop", something());   // the loop body in parallel
for (<identical loop control>)        // forces all executions to be complete
    in("this loop", 1);               // before proceeding
where something() is a function that executes one instance of the loop body and returns 1.
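
A concrete instance of the pattern as a sketch (C-Linda assumed; squaring the vector "V" from the earlier example, with integer elements, is our choice of loop body):

int square_task(int i)
{
    int x;
    rd("V", i, ?x);              /* read element i of the vector */
    out("V squared", i, x * x);  /* publish its square */
    return 1;                    /* the active tuple becomes ("this loop", 1) */
}

void parallel_squares(int n)
{
    int i;
    for (i = 1; i <= n; i++)
        eval("this loop", square_task(i));   /* fork one process per element */
    for (i = 1; i <= n; i++)
        in("this loop", 1);                  /* join: wait for all of them */
}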

Example 4: Masters and Workers
- The master divides the work into discrete tasks and puts them into the global space (tuple space).
- Workers retrieve tasks, execute them, and put the results back into the global space.
- Workers are notified of work completion by having met some condition.
- The master gathers the results from the global space.

Masters and Workers (Cont.)
master()
{
    for all tasks {
        … /* build task structure */
        out("task", task_structure);
    }
    for all tasks {
        in("result", ?&task_id, ?&result_structure);
    }
}

worker()
{
    while (inp("task", ?&task_structure)) {   /* inp: non-blocking in; fails when no task remains */
        … /* execute the task */
        out("result", task_id, result_structure);
    }
}
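
A sketch filling in the non-elided structure (assuming a C-Linda runtime; NTASKS, the task fields, and compute() are illustrative, not from the slides). Each task tuple carries its own id, so the worker can tag its result:

#define NTASKS 100

int compute(int x) { return x * x; }         /* stand-in for the real task body */

void master(void)
{
    int i, id, result;
    for (i = 0; i < NTASKS; i++)
        out("task", i, i);                   /* (tag, task_id, input) */
    for (i = 0; i < NTASKS; i++)
        in("result", ?id, ?result);          /* collect results in any order */
}

int worker(void)
{
    int id, input;
    while (inp("task", ?id, ?input))         /* exit when the task bag is empty */
        out("result", id, compute(input));
    return 0;
}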

Example 5: Barrier Synchronization
Each process within some group must wait at a barrier until all processes in the group have reached the barrier; then all can proceed.
Set up the barrier for n processes:
out("barrier", n);
Each process does the following:
in("barrier", ?val);
out("barrier", val - 1);
rd("barrier", 0);
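
The same protocol wrapped as functions (a sketch assuming a C-Linda runtime; the names are ours):

void barrier_init(int n)
{
    out("barrier", n);           /* counter starts at the group size */
}

void barrier_wait(void)
{
    int val;
    in("barrier", ?val);         /* take the counter tuple */
    out("barrier", val - 1);     /* decrement it and put it back */
    rd("barrier", 0);            /* block until the last arrival writes 0 */
}

Note that this barrier is single-use: reusing it requires re-initializing the counter (or adding a generation-number field to the tuple).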

Example 6: Dining Philosophers Problem
phil(i)
int i;
{
    while (1) {
        think();
        in("room ticket");
        in("chopstick", i);
        in("chopstick", (i+1) % Num);
        eat();
        out("chopstick", i);
        out("chopstick", (i+1) % Num);
        out("room ticket");
    }
}

initialize()
{
    int i;
    for (i = 0; i < Num; i++) {
        out("chopstick", i);
        eval(phil(i));
        if (i < (Num-1)) out("room ticket");   /* only Num-1 tickets: prevents deadlock */
    }
}

Example 7: Client-Server
server()
{
    int index = 1;
    …
    while (1) {
        in("request", index, ?req);     /* serve requests strictly in ticket order */
        …
        out("response", index++, response);
    }
}

client()
{
    int index;
    …
    in("server index", ?index);         /* atomically take the ticket */
    out("server index", index + 1);     /* put back the incremented ticket */
    …
    out("request", index, request);     /* send the request under our ticket number */
    in("response", index, ?response);
    …
}

Linda achieves:
- Nondeterminism
- Structured naming, similar to “select” in relational databases and pattern matching in AI:
  in(P, i:integer, j:boolean)
  in(P, 2, j:boolean)
- Time uncoupling (communication between time-disjoint processes)
- Distributed sharing (variables shared between disjoint processes)

Linda: Limitations
- The language, although intuitive, is minimal.
- Security (no encapsulation).
- New problems: how is the tuple space to be implemented in the absence of shared memory?
- Linda is not fault-tolerant:
  - Processes are assumed not to fail.
  - If a process fails, its tuples can be lost.
  - The reading operations block, without a timeout.