Communication Issues.

Synchronization and Communication
The correct behaviour of a concurrent program depends on synchronization and communication between its processes.
- Synchronization: the satisfaction of constraints on the interleaving of the actions of processes (e.g. an action by one process occurring only after an action by another).
- Communication: the passing of information from one process to another.
The two concepts are linked: communication requires synchronization, and synchronization can be considered contentless communication. Data communication is usually based on either shared variables or message passing.

Communication Primitives
Communication primitives are the high-level constructs through which programs use the underlying communication network. They play a significant role in the effective use of distributed systems: the available primitives influence a programmer's choice of algorithms and the performance of the resulting programs.

Basic Primitives
The two basic primitives are Send and Receive. Send takes two parameters: a message and a destination. Receive takes two parameters: a source and a buffer.
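As a minimal sketch of the two primitives (the function names and the mailbox model are illustrative assumptions, not any particular OS API), each destination can be modelled as a queue that stands in for the kernel's buffering:

```python
import queue

# One mailbox per process name; a stand-in for the kernel's message buffers.
mailboxes = {"P1": queue.Queue(), "P2": queue.Queue()}

def send(message, destination):
    """Send takes two parameters: a message and a destination."""
    mailboxes[destination].put(message)

def receive(source, buffer):
    """Receive takes two parameters: a source and a buffer to copy the message into."""
    buffer.append(mailboxes[source].get())

send("hello", "P2")   # some process sends to P2
buf = []
receive("P2", buf)    # P2 drains its own mailbox into a user buffer
print(buf[0])         # hello
```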

Buffered Message Passing
A buffered message is copied three times: from the user buffer to the kernel buffer; from the kernel on the sending computer to the kernel buffer on the receiving computer; and finally from the kernel buffer on the receiving computer to a user buffer.

Non-blocking Primitives
The send primitive returns control to the user process as soon as the message is copied from the user buffer into the kernel buffer. The corresponding receive primitive signals its intention to receive a message and provides a buffer into which the message can be copied. The receiving process may either periodically check for the arrival of a message or be signaled by the kernel when a message arrives.
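A sketch of the non-blocking variant, under the same illustrative single-queue model (names are assumptions): the send returns immediately after the copy into the "kernel" buffer, and the receiver polls for arrival rather than blocking:

```python
import queue

kernel_buffer = queue.Queue()

def nb_send(message):
    # Returns control as soon as the message is copied into the kernel buffer.
    kernel_buffer.put(message)

def nb_receive():
    # Signals the intention to receive; returns None at once if nothing has arrived.
    try:
        return kernel_buffer.get_nowait()
    except queue.Empty:
        return None

assert nb_receive() is None   # nothing has arrived yet: the call does not block
nb_send("m1")
# The receiver periodically checks for arrival instead of blocking:
while (msg := nb_receive()) is None:
    pass                      # in a real program, other computation would go here
print(msg)                    # m1
```

The alternative mentioned in the slide, being signaled by the kernel on arrival, would correspond to registering a callback instead of polling.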

Advantage of Non-blocking Primitives
Programs have maximum flexibility to perform computation and communication in any order they want.

Disadvantage of Non-blocking Primitives
Programming is tricky and difficult. Programs may become time-dependent, with problems (or system states) that are irreproducible, making the programs very difficult to debug.

Unbuffered Message Passing
The message is copied directly from one user buffer to another. A program using send should avoid reusing the buffer until the message has been transmitted. For large systems, a combination of unbuffered and non-blocking semantics allows almost complete overlap between communication and the ongoing computational activity in the user program.

Non-blocking Message Passing
A natural use of non-blocking communication occurs in producer-consumer relationships. The consumer process can issue a non-blocking receive: if a message is present, the consumer reads it; otherwise it performs some other computation. The producer can issue non-blocking sends: if a send fails for any reason (e.g. the buffer is full), it can be retried later.
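The producer-consumer pattern above can be sketched with a small bounded buffer (the helper names and the buffer size are illustrative assumptions): the producer's non-blocking send fails with "full" and is retried later, while the consumer's non-blocking receive simply does nothing when no message is present.

```python
import queue

buffer = queue.Queue(maxsize=2)   # small bounded buffer between producer and consumer

consumed = []

def consume_one():
    try:
        consumed.append(buffer.get_nowait())   # non-blocking receive
    except queue.Empty:
        pass                                   # no message present: do other computation

def producer(items):
    pending = list(items)
    retries = 0
    while pending:
        try:
            buffer.put_nowait(pending[0])      # non-blocking send
            pending.pop(0)
        except queue.Full:                     # send failed (buffer full): retry later
            retries += 1
            consume_one()                      # meanwhile the consumer drains one item
    return retries

retries = producer([1, 2, 3, 4])
while len(consumed) < 4:
    consume_one()
print(consumed)   # [1, 2, 3, 4]
```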

Blocking Primitives
The send primitive does not return control to the user program until the message has been sent (an unreliable blocking primitive) or an acknowledgement has been received (a reliable blocking primitive). In both cases, the user buffer can be reused as soon as control returns to the user program. The corresponding receive primitive does not return control until a message has been copied into the user buffer. A reliable receive primitive automatically sends an acknowledgement; an unreliable receive primitive does not.
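A sketch of the reliable blocking case (function names are illustrative assumptions): the sender blocks until the acknowledgement arrives, and the reliable receive acknowledges automatically. Two threads stand in for the two processes.

```python
import threading
import queue

data_chan, ack_chan = queue.Queue(), queue.Queue()

def reliable_send(message):
    data_chan.put(message)
    ack_chan.get()    # block until the acknowledgement arrives;
                      # only now may the user buffer safely be reused

def reliable_receive():
    message = data_chan.get()   # block until a message has been copied in
    ack_chan.put("ack")         # a reliable receive automatically acknowledges
    return message

received = []
t = threading.Thread(target=lambda: received.append(reliable_receive()))
t.start()
reliable_send("payload")        # returns only after the receiver has acknowledged
t.join()
print(received)                 # ['payload']
```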

Advantage of Blocking Primitives
The behavior of the program is predictable, and programming is easy.

Disadvantage of Blocking Primitives
Lack of flexibility in programming, and the absence of concurrency between computation and communication.

Synchronous Primitives
Send is blocked until a corresponding receive primitive is executed at the receiving end; this is also called a rendezvous. A blocking synchronous primitive can be extended to a non-blocking synchronous primitive by first copying the message to a buffer on the sending side and then allowing the process to perform other computational activity (except another send).

Asynchronous Primitives
Messages are buffered with asynchronous primitives. A send primitive does not block even if there is no corresponding execution of a receive primitive. The corresponding receive primitive can be either blocking or non-blocking.

Disadvantage of Asynchronous Primitives
Buffering messages adds complexity, since buffers must be created, managed and destroyed. Another issue is what to do with messages that are addressed to processes that have already died.

Message-Based Communication and Synchronization
A single construct is used for both synchronization and communication. Three issues arise: the model of synchronization, the method of process naming, and the message structure. (Diagram: process P1 executes a send; process P2 executes the matching receive.)

Process Synchronization
Variations in the process synchronization model arise from the semantics of the send operation.
Asynchronous (or no-wait) send (e.g. POSIX):
- Send does not block; this requires buffer space. What happens when the buffer is full?
- Messages are queued at the receiver; we refer to these queues as ports. Communication can be many-to-one.
- send(e, p): send value e to port p; the calling process is not blocked.
- v = receive(p): receive a value into variable v from port p; the calling process is blocked if no value is queued at the port.

Process Synchronization
Synchronous send (e.g. CSP, occam2): no buffer space is required; this is known as a rendezvous.
- send(e, c): send e to channel c; the sending process is blocked until the channel has received e.
- v = receive(c): receive a value into local variable v from channel c; the calling process is blocked until a message is sent on the channel.
- No buffering.

Process Synchronization
Remote invocation (e.g. Ada): known as an extended rendezvous; the sender remains blocked until a reply is received. Analogy: posting a letter is an asynchronous send, whereas a telephone call is a better analogy for synchronous communication.

Asynchronous and Synchronous Sends
Asynchronous communication can implement synchronous communication:
P1: asyn_send(M); wait(ack)        P2: wait(M); asyn_send(ack)
Two synchronous communications can be used to construct a remote invocation:
P1: syn_send(message); wait(reply)        P2: wait(message); ... construct reply ...; syn_send(reply)
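The first construction can be played out directly. In this sketch, asyn_send and wait are modelled on shared queues (the channel names and the payload value are illustrative assumptions), and two threads stand in for P1 and P2:

```python
import threading
import queue

channels = {"M": queue.Queue(), "ack": queue.Queue()}

def asyn_send(chan, value="x"):
    channels[chan].put(value)       # asynchronous send: never blocks

def wait(chan):
    return channels[chan].get()     # blocking receive

def p1_synchronous_send():
    asyn_send("M")
    wait("ack")                     # P1 blocks here until P2 has taken M

def p2_synchronous_receive():
    wait("M")
    asyn_send("ack")

t = threading.Thread(target=p2_synchronous_receive)
t.start()
p1_synchronous_send()               # returns only once P2 has received M: a rendezvous
t.join()
print("rendezvous complete")
```

The remote-invocation construction in the slide works the same way, with the reply message playing the role of the acknowledgement.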

Disadvantages of Asynchronous Send

Disadvantages of Asynchronous Send
- Potentially infinite buffers are needed to store unread messages.
- Asynchronous communication is out of date: most sends are programmed to expect an acknowledgement anyway.
- More communications are needed with the asynchronous model, hence programs are more complex.
- It is more difficult to prove the correctness of the complete system.
- Where asynchronous communication is desired with synchronized message passing, buffer processes can easily be constructed; however, this is not without cost.

Process Naming
Two distinct sub-issues: direct versus indirect naming, and symmetry. With direct naming, the sender explicitly names the receiver: send <message> to <process-name>. With indirect naming, the sender names an intermediate entity (e.g. a channel, mailbox, link or pipe): send <message> to <mailbox>. With a mailbox, message passing can still be synchronous. Direct naming has the advantage of simplicity, whilst indirect naming aids the decomposition of the software: a mailbox can be seen as an interface between parts of the program.

Process Naming
A naming scheme is symmetric if both sender and receiver name each other (directly or indirectly):
send <message> to <process-name>    wait <message> from <process-name>
send <message> to <mailbox>    wait <message> from <mailbox>
It is asymmetric if the receiver names no specific source but accepts messages from any process (or mailbox): wait <message>. Asymmetric naming fits the client-server paradigm. With indirect naming, the intermediary could have a one-to-one, one-to-many, many-to-one or many-to-many structure.

Message Structure
A language usually allows any data object of any defined type (predefined or user-defined) to be transmitted in a message. In a heterogeneous environment, data must be converted to a standard format for transmission across the network, since operating systems typically allow only arrays of bytes to be sent.
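Python's standard struct module illustrates this conversion (the record layout chosen here is an illustrative assumption): typed data is marshalled into a byte array in network byte order, which is what actually crosses the wire, and the receiver unmarshals it using the same agreed format.

```python
import struct

# Pack a typed record into network-byte-order (big-endian) bytes, since the
# OS transmits only arrays of bytes.
FMT = "!IdH"   # unsigned int, double, unsigned short
raw = struct.pack(FMT, 42, 3.14, 7)
assert isinstance(raw, bytes) and len(raw) == struct.calcsize(FMT)

# The receiver, possibly on a machine with a different native byte order,
# unmarshals using the same format string.
a, b, c = struct.unpack(FMT, raw)
print(a, b, c)   # 42 3.14 7
```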

Message Characteristics
- Principal operations: send, receive.
- Synchronization: synchronous, asynchronous, rendezvous.
- Multiplicity: one-to-one, many-to-one, many-to-many, selective.
- Anonymity: anonymous or non-anonymous message passing.
- Receipt of messages: unconditional or selective.

Selective Waiting
So far, the receiver of a message must wait until the specified process, or mailbox, delivers the communication. A receiver process may actually wish to wait for any one of a number of processes to call it. Server processes receive request messages from a number of clients, with the order in which the clients call being unknown to the server. To facilitate this common program structure, receiver processes are allowed to wait selectively for any of a number of possible messages.

Non-determinism and Selective Waiting Concurrent languages make few assumptions about the execution order of processes A scheduler is assumed to schedule processes non-deterministically Consider a process P that will execute a selective wait construct upon which processes S and T could call

Non-determinism and Selective Waiting
Three interleavings are possible:
1. P runs first and is blocked on the select; S (or T) then runs and rendezvouses with P.
2. S (or T) runs and blocks on the call to P; P then runs, executes the select, and a rendezvous takes place with S (or T).
3. S (or T) runs first and blocks on the call to P; T (or S) then runs and is also blocked on P. Finally P runs and executes the select, on which both T and S are waiting.
The three interleavings lead to P having none, one or two calls outstanding on the selective wait. If P, S and T can execute in any order then, in the last case, P should be able to choose to rendezvous with either S or T; the choice will not affect the program's correctness.

Non-determinism and Selective Waiting
A similar argument applies to any queue that a synchronisation primitive defines. Non-deterministic scheduling implies that all queues should release processes in a non-deterministic order. Semaphore queues are often defined in this way, whereas entry queues and monitor queues are specified to be FIFO. The rationale is that FIFO queues prevent starvation; but if the scheduler is non-deterministic, starvation can occur anyway!
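A selective wait for server process P can be sketched as follows (the function name, the per-client queues and the random choice are illustrative assumptions): P examines the channels from S and T and nondeterministically serves one of the callers that is ready.

```python
import queue
import random

client_queues = {"S": queue.Queue(), "T": queue.Queue()}

def selective_wait():
    """Wait for a call from any client; choose nondeterministically among ready ones."""
    while True:
        ready = [name for name, q in client_queues.items() if not q.empty()]
        if ready:
            chosen = random.choice(ready)   # either pending caller may be served first
            return chosen, client_queues[chosen].get()

# Interleaving 3 from the slide: both S and T have outstanding calls on P's select.
client_queues["S"].put("request from S")
client_queues["T"].put("request from T")
caller, msg = selective_wait()
print(caller, "served first")   # either order is correct
```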

POSIX Message Queues
POSIX supports asynchronous, indirect message passing through the notion of message queues. A message queue can have many readers and many writers, and priorities may be associated with messages. Queues are intended for communication between processes (not threads). Message queues have attributes that indicate their maximum size, the size of each message, the number of messages currently queued, and so on. An attribute object is used to set the queue attributes when the queue is created.

POSIX Message Queues
Message queues are given a name when they are created. To gain access to a queue, a process opens it by name with mq_open; mq_open is used both to create a queue and to open an already existing one (see also mq_close and mq_unlink). Sending and receiving messages is done via mq_send and mq_receive. Data is read/written from/to a character buffer. If the queue is full or empty, the sending/receiving process is blocked unless the O_NONBLOCK attribute has been set for the queue (in which case an error is returned instead). If several senders and receivers are waiting when a message queue becomes unblocked, it is not specified which one is woken unless the priority scheduling option is specified.
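The real mq_open/mq_send/mq_receive interface is a C API; as an in-process analogue only (the function name and queue size are illustrative assumptions, not POSIX calls), a bounded queue shows the difference between the default blocking behaviour and O_NONBLOCK-style error returns:

```python
import queue

# An in-process stand-in for a POSIX message queue created with mq_maxmsg = 2.
mq = queue.Queue(maxsize=2)

# Default behaviour: put()/get() block on a full/empty queue.
# With O_NONBLOCK-style semantics the call returns an error instead:
def mq_send_nonblock(msg):
    try:
        mq.put_nowait(msg)
        return 0
    except queue.Full:
        return -1    # analogue of mq_send failing with EAGAIN on a full queue

assert mq_send_nonblock("m1") == 0
assert mq_send_nonblock("m2") == 0
assert mq_send_nonblock("m3") == -1   # queue full: non-blocking send reports an error
print(mq.get_nowait())                # m1
```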

POSIX Message Queues
A process can also ask that a signal be sent to it when an empty queue receives a message and there are no waiting receivers. In this way, a process can continue executing whilst waiting for messages to arrive on one or more message queues. It is also possible for a process to wait for a signal to arrive; this allows the equivalent of selective waiting to be implemented. If the process is multi-threaded, each thread is considered a potential sender/receiver in its own right.

Remote Procedure Call (RPC)
A remote procedure call is a procedure P that a caller process C gets a server process S to execute, as if C had executed P in C's own address space. RPCs support distributed computing at a higher level than sockets: architecture- and OS-neutral passing of simple and complex data types, plus common application needs such as name resolution and security. (Diagram: the caller calls the procedure and waits for the reply; the server receives the request, executes procedure P, sends the reply, and waits for the next request; the caller then resumes execution.)

Remote Procedure Call (RPC) Standards
There are at least three widely used forms of RPC:
- ONC RPC, the Open Network Computing version, specified in RFC 1831 and developed by Sun Microsystems, is a widely available RPC standard on UNIX and other operating systems.
- DCE RPC, the Distributed Computing Environment version, is a widely available RPC standard on many operating systems; it supports RPCs, a directory of RPC servers and security services.
- Microsoft's COM/DCOM is a proprietary, adaptive extension of the DCE standard.
Java RMI is a further, related mechanism.
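None of those standards is shown here, but Python's standard library includes a simple RPC mechanism, XML-RPC, that demonstrates the caller/server pattern from the previous slide end to end (the procedure name `add` and the loopback address are illustrative assumptions):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server process S: register procedure P under the name "add" and serve requests.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]          # port 0 asks the OS for a free port
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Caller process C: invoke add() as if it were a local procedure.
# The call blocks until the server's reply arrives.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
server.shutdown()
print(result)   # 5
```

Here a thread stands in for the server process; real deployments put caller and server on different machines, which is exactly what the architecture-neutral marshalling is for.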

Event Ordering
X, Y, Z and A are on a mailing list. X sends a message with the subject "Meeting"; Y and Z reply by sending messages with the subject "Re: Meeting". In real time, X's message was sent first; Y reads it and replies; Z reads both X's message and Y's reply and then replies in turn. But because of independent delivery delays, A might see the messages in this order:
Item  From  Subject
23    Z     Re: Meeting
24    X     Meeting
25    Y     Re: Meeting

Event Ordering
If the clocks of X's, Y's and Z's computers were synchronized, each message could carry the time on the local computer's clock: messages m1, m2 and m3 would carry times t1, t2 and t3 with t1 < t2 < t3. Since clocks cannot be synchronized perfectly, Lamport introduced the logical clock. Logically, a message is received after it is sent: X sends m1 before Y receives m1, and Y sends m2 before X receives m2. Replies are sent after messages are received: Y receives m1 before sending m2. Logical time takes this idea further by assigning a number to each event corresponding to its logical ordering; the figure shows the numbers 1 to 4 on the events at X and Y.
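Lamport's numbering rule can be sketched directly (the class and method names are illustrative assumptions): each process keeps a counter, a send stamps the message with the sender's time, and a receive advances the local clock past the message's timestamp. Replaying the m1/m2 exchange between X and Y reproduces the numbers 1 to 4 from the figure:

```python
class LamportClock:
    """Lamport's logical clock: numbers events consistently with happened-before."""
    def __init__(self):
        self.time = 0

    def send_event(self):
        self.time += 1
        return self.time          # this timestamp travels with the message

    def receive_event(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time          # receive is always numbered after the send

X, Y = LamportClock(), LamportClock()
t_send_m1 = X.send_event()                 # X sends m1        -> 1
t_recv_m1 = Y.receive_event(t_send_m1)     # Y receives m1     -> 2
t_send_m2 = Y.send_event()                 # Y replies with m2 -> 3
t_recv_m2 = X.receive_event(t_send_m2)     # X receives m2     -> 4
print(t_send_m1, t_recv_m1, t_send_m2, t_recv_m2)   # 1 2 3 4
```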

Figure: real-time ordering of events. (From the Instructor's Guide for Coulouris, Dollimore and Kindberg, Distributed Systems: Concepts and Design, 3rd edn, Addison-Wesley, 2000.)

The Happened-Before Relation
The happened-before relation (denoted by →) is defined as follows:
Rule 1: If a and b are events in the same process and a was executed before b, then a → b.
Rule 2: If a is the event of sending a message by one process and b is the event of receiving that message by another process, then a → b.
Rule 3: If a → b and b → c, then a → c.
Relationship between two events: two events a and b are causally related if a → b or b → a. Two distinct events a and b are said to be concurrent (denoted a ∥ b) if neither a → b nor b → a.
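The three rules can be checked mechanically on a small illustrative event graph (the event names and edges here are assumptions for the sketch): rules 1 and 2 contribute the base edges, and rule 3 is the transitive closure, computed by a reachability search.

```python
# Rule 1 edges (program order within each process) and Rule 2 edges (send -> receive):
edges = {("a0", "a1"), ("a1", "a2"),   # process A's program order
         ("b0", "b1"), ("b1", "b2"),   # process B's program order
         ("a1", "b1")}                 # a1 sends a message that b1 receives

def happened_before(x, y):
    """Rule 3: x -> y iff y is reachable from x via rule-1 and rule-2 edges."""
    frontier, seen = {x}, set()
    while frontier:
        e = frontier.pop()
        seen.add(e)
        for (u, v) in edges:
            if u == e and v not in seen:
                if v == y:
                    return True
                frontier.add(v)
    return False

def concurrent(x, y):
    # Distinct events, neither of which happened before the other.
    return x != y and not happened_before(x, y) and not happened_before(y, x)

assert happened_before("a0", "b2")   # a0 -> a1 -> b1 -> b2 (rules 1, 2, 3)
assert concurrent("a2", "b2")        # neither causally precedes the other: a2 ∥ b2
```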

A Time-Space Representation
A time-space view of a distributed system.
Rule 1: a0 → a1 → a2 → a3; b0 → b1 → b2 → b3; c0 → c1 → c2 → c3.
Rule 2: a0 → b3; b1 → a3; b2 → c1; b0 → c2.

States
State model: a process executes three types of events: internal actions, send actions and receive actions. Global state: a collection of local states together with the state of all the communication channels. (Diagram: system structure from a logical point of view, with processes connected by send/receive channels.)

References
- George Coulouris, Jean Dollimore and Tim Kindberg. Distributed Systems: Concepts and Design, 3rd edition. Addison-Wesley, 2001. http://www.cdk3.net/
- Andrew S. Tanenbaum and Maarten van Steen. Distributed Systems: Principles and Paradigms. Prentice-Hall, 2002. http://www.cs.vu.nl/~ast/books/ds1/
- P. K. Sinha. Distributed Operating Systems: Concepts and Design. IEEE Press, 1993.
- Sape J. Mullender, editor. Distributed Systems, 2nd edition. ACM Press, 1993. http://wwwhome.cs.utwente.nl/~sape/gos0102.html
- Gregory R. Andrews. Foundations of Multithreaded, Parallel, and Distributed Programming. Addison-Wesley, 2000. http://www.cs.arizona.edu/people/greg/
- J. Magee and J. Kramer. Concurrency: State Models and Java Programs. John Wiley, 1999. http://www-dse.doc.ic.ac.uk/concurrency/