
1 “Inter Process Communication and Coordination”
Poornima Institute of Engineering & Technology, Jaipur
Department of Computer Engineering
A Presentation on “Inter Process Communication and Coordination”
Presented by: Manish Bhardwaj, Assistant Professor
Lecture No. 16
Subject Code: 8CS3
Subject Name: DS

2 OUTLINE
Introduction to Concurrent Programming Languages
Interprocess Communication
Message Passing Communication
Basic Communication Primitives
Message Design Issues
Synchronization and Buffering
References

3 Concurrent Programming Languages
Support concurrency, synchronization, and communication among interacting processes.
A serious issue: run-time problems may occur.
Such languages can be treated as extensions of a sequential language.

4 A Taxonomy

5 Introduction to Concurrent Programming Languages
Coordination languages:
OCCAM: based on the CSP process model; uses PAR, ALT, and SEQ constructors; uses explicit global links for communication.
SR: based on a resource (object) model; uses synchronous CALL, asynchronous SEND, and rendezvous IN; uses capabilities for channel naming.
LINDA: based on a distributed data structure model; uses tuples to represent both processes and objects; uses blocking IN and RD and non-blocking OUT for communication.

6 Comparison of OCCAM, SR and LINDA

7 INTERPROCESS COMMUNICATION
Processes executing concurrently in the operating system may be either independent or cooperating processes. Reasons for providing an environment that allows process cooperation:
1) Information sharing: several users may be interested in the same piece of information.
2) Computational speed-up: a process can be divided into subtasks that run in parallel; speed-up can be achieved if the computer has multiple processing elements.
3) Modularity: dividing the system functions into separate processes or threads.
4) Convenience: even an individual user may work on many tasks at the same time.

8 COMMUNICATION MODELS
Cooperating processes require an IPC mechanism that allows them to exchange data and information. Communication can take place either by shared memory or by message passing.
Shared memory:
1) Processes exchange information by reading and writing data in the shared region.
2) Faster than message passing, since within a computer it can be done at memory speeds.
3) System calls are needed only to establish the shared-memory region.
Message passing:
A mechanism that allows processes to communicate and synchronize their actions without sharing the same address space; it is particularly useful in a distributed environment.
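As an illustration of the shared-memory model above, the following is a minimal sketch using the POSIX shared-memory API; the segment name /ipc_demo, the 4 KB size, and the message text are assumptions made for this example (link with -lrt on Linux).

/* Writer side: create a shared-memory region and place data in it. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/ipc_demo";            /* assumed segment name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, 4096);                       /* size the region */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }
    strcpy(region, "hello from the producer"); /* write at memory speed */
    /* A reader process would shm_open the same name, mmap it, and read. */
    munmap(region, 4096);
    close(fd);
    return 0;
}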

9 Message Passing Communication
Messages are collections of data objects and their structures.
A message has a header containing system-dependent control information and a message body that can be of fixed or variable size.
When a process interacts with another, two requirements have to be satisfied: synchronization and communication.
Fixed length: easy to implement; minimizes processing and storage overhead.
Variable length: requires dynamic memory allocation, so fragmentation can occur.

10 Fixed-length messages:
simple to implement - can have a pool of standard-sized buffers
low overhead and efficient for small lengths
copying overhead if the fixed length is too long
can be inconvenient for user processes with a variable amount of data to pass - may need a sequence of messages to pass all the data
long messages may be better passed another way, e.g. FTP
copying probably involved, sometimes multiple copies into the kernel and out
Variable-length messages:
more difficult to implement - may need a heap with garbage collection
more overhead and less efficient; memory fragmentation
more convenient for user processes

11 Basic Communication Primitives
Two generic message-passing primitives for sending and receiving messages:
send(destination, message)
receive(source, message)
Channel naming: process name, link, mailbox, or port.
Addressing: direct and indirect.
1) Direct send/receive communication primitives
Direct communication: symmetric or asymmetric process naming over a link.
Symmetric addressing: both processes have to explicitly name each other in the communication primitives.
Asymmetric addressing: only the sender needs to name the recipient.

12 Naming of links - direct and indirect communications
each process wanting to communicate must explicitly name the recipient or sender of the communication
send and receive primitives defined:
send(P, message): send a message to process P
receive(Q, message): receive a message from process Q
a link is established automatically between every pair of processes that want to communicate; processes only need to know each other's identity
a link is associated with exactly two processes
a link is usually bidirectional but can be unidirectional

Process A (producer):
while (TRUE) {
    produce an item
    send(B, item)
}

Process B (consumer):
while (TRUE) {
    receive(A, item)
    consume item
}

13 Asymmetric addressing:
only the sender names the recipient
the recipient is not required to name the sender - need not know the sender
send(P, message): send a message to process P
receive(id, message): receive from any process; id is set to the sender
Disadvantage of direct communications:
limited modularity - changing the name of a process means changing every sender and receiver process to match
need to know process names
Indirect communications:
messages are sent to and received from mailboxes (or ports)
mailboxes can be viewed as objects into which messages are placed by processes and from which messages can be removed by other processes
each mailbox has a unique ID
two processes can communicate only if they have a shared mailbox

14 a link can be either unidirectional or bidirectional
send(A, message): send a message to mailbox A
receive(A, message): receive a message from mailbox A
a communication link is established between a pair of processes only if they have a shared mailbox
a pair of processes can communicate via several different mailboxes if desired
a link can be either unidirectional or bidirectional
a link may be associated with more than two processes
allows one-to-many, many-to-one, and many-to-many communication
one-to-many: any of several processes may receive from the mailbox, e.g. a broadcast of some sort
which of the receivers gets the message? an arbitrary choice by the scheduling system if many are waiting? or only allow one process at a time to wait on a receive
many-to-one: many processes sending to one receiving process, e.g. a server providing service to a collection of processes (file server, network server, mail server, etc.)
the receiver can identify the sender from the message header contents
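A mailbox of this kind resembles a POSIX message queue; the sketch below is a minimal illustration, with the queue name /mbox_demo and the message sizes chosen only for this example (link with -lrt on Linux).

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    /* Any process that opens "/mbox_demo" can send to or receive from it,
       so the queue behaves like a shared mailbox. */
    mqd_t mb = mq_open("/mbox_demo", O_CREAT | O_RDWR, 0600, &attr);
    if (mb == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "request #1";
    mq_send(mb, msg, strlen(msg) + 1, 0);               /* sender side */

    char buf[128];
    unsigned prio;
    ssize_t n = mq_receive(mb, buf, sizeof buf, &prio); /* receiver side */
    if (n >= 0) printf("received: %s\n", buf);

    mq_close(mb);
    mq_unlink("/mbox_demo");
    return 0;
}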

15 many-to-many: e.g. multiple senders requesting service and a pool of receiving servers offering service - a server farm
Mailbox ownership
Process mailbox ownership:
only the owning process may receive messages from the mailbox; other processes may send to it
the mailbox can be created with the process and destroyed when the process dies (a process sending to a dead process's mailbox will then need to be signalled), or through separate create_mailbox and destroy_mailbox calls (possibly declaring variables of type 'mailbox')
System mailbox ownership:
mailboxes have their own independent existence, not attached to any process
processes connect to a mailbox dynamically for send and/or receive

16 Buffering - the number of messages that can reside in a link temporarily
Zero capacity - queue length 0: the sender must wait until the receiver is ready to take the message
Bounded capacity - finite-length queue: messages can be queued as long as the queue is not full; otherwise the sender will have to wait
Unbounded capacity: any number of messages can be queued (in virtual space?); the sender is never delayed
Copying:
need to minimize message copying for efficiency
copy from the sending process into kernel message-queue space and then into the receiving process? probably inevitable in a distributed system
advantage: communicating processes are kept separate, so malfunctions are localized to each process

17 Synchronized versus Asynchronous Communications
Blocking send and receive operations:
the sender is suspended until the receiving process does a corresponding receive
the receiver is suspended until a message is sent for it to receive
Properties:
processes are tightly synchronized - the rendezvous of Ada
effective confirmation of receipt for the sender
at most one message can be outstanding for any process pair - no buffer-space problems
easy to implement, with low overhead
Disadvantages:
the sending process might want to continue after its send operation without waiting for confirmation of receipt
the receiving process might want to do something else if no message is waiting to be received

18 Synchronized versus Asynchronous Communications
Non-blocking send and receive operations:
the sender continues even when no corresponding receive is outstanding
the receiver continues when no message has been sent
Properties:
messages need to be buffered until they are received; the amount of buffer space to allocate can be problematic
a process running amok could clog the system with messages if not careful
often very convenient rather than being forced to wait, particularly for senders; can increase concurrency
some awkward kernel decisions are avoided, e.g. whether to swap a waiting process out to disc or not
receivers can poll for messages, i.e. do a test-receive every so often to see if any messages are waiting
interrupt and signal programming is more difficult
a preferable alternative is perhaps a blocking receive in a separate thread

19 Blocking message passing
The sending process must wait after send until an acknowledgement is made by the receiver.
The receiving process must wait for the expected message from the sending process.
A form of synchronization.
Receipt is determined by polling a common buffer or by interrupt.

20 Blocking Send and Receive Primitives: No Buffer. (Galli, p.58)

21 Blocking Send and Receive Primitives with Buffer. (Galli, p.58)

22 Non-blocking message passing
Asynchronous communication.
The sending process may continue immediately after sending a message -- no wait needed.
The receiving process accepts and processes the message -- then continues on.
Control buffer -- the receiver can tell whether a message is still there, by polling or by interrupt.

23 Direct Send / Receive communication Primitives
(Figure: direct send/receive communication over links, with symmetric and asymmetric process naming.)

24 2) Indirect send/receive communication primitives
Messages are not sent directly from sender to receiver but to a shared data structure.
Many-to-many: mailbox; many-to-one: port.
Multiple clients might request services from one of multiple servers (multi-point connection).
We use mailboxes: an abstraction of a finite-size FIFO queue maintained by the kernel (multi-path connection).

25 Implicit Addressing for Interprocess Communication. (Galli, p.59)

26 Explicit Addressing for Interprocess Communication. (Galli,p.60)

27 Synchronization and Buffering
These are the three typical combinations:
1) Blocking send, blocking receive: both sender and receiver are blocked until the message is delivered (provides tight synchronization between processes).
2) Non-blocking send, blocking receive: the sender can continue execution after sending a message, while the receiver is blocked until the message arrives (the most useful combination).
3) Non-blocking send, non-blocking receive: neither party waits.

28 Other combinations
non-blocking send + blocking receive:
probably the most useful combination
the sending process can send off several successive messages to one or more processes if need be without being held up
receivers wait until there is something to do, i.e. take some action on message receipt
e.g. a server process might wait on a read until a service request arrives, then transfer execution of the request to a separate thread, then go back and wait on the read for the next request
blocking send + non-blocking receive:
conceivable, but probably not a useful combination
in practice, sending and receiving processes will each choose independently
Linux file access is normally blocking; to set a device to non-blocking (already opened with a descriptor fd):
fcntl ( fd, F_SETFL, fcntl ( fd, F_GETFL) | O_NDELAY )
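A small sketch of the same idea follows, assuming a descriptor fd obtained elsewhere (pipe, socket, or device); O_NONBLOCK is the modern synonym for O_NDELAY on Linux.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Switch an already-open descriptor to non-blocking mode and poll it once. */
static void poll_for_data(int fd) {
    int flags = fcntl(fd, F_GETFL);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* equivalent to O_NDELAY */

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0) {
        printf("got %zd bytes\n", n);
    } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* No message waiting: the caller can do other work and retry later. */
        printf("nothing to read yet\n");
    } else {
        perror("read");
    }
}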

29 Message Synchronization Stages and buffering
(The stage numbers refer to the accompanying figure.)
1. Non-blocking send (stages 1+8): the sender process is released after the message has been composed and copied into the sender's kernel (a local system call).
2. Blocking send: the sender process is released after the message has been transmitted to the network.
3. Reliable blocking send: the sender process is released after the message has been received by the receiver's kernel (the kernel receives a network ACK).
4. Explicit blocking send: the sender process is released after the message has been received by the receiver process (the kernel receives a kernel delivery ACK).
5. Request and reply (request stages 1-4, service, reply stages 5-8): the sender process is released after the message has been processed by the receiver and a response has been returned to the sender.

30 The Producer Consumer Problem
The producer-consumer problem illustrates the need for synchronization in systems where many processes share a resource. In the problem, two processes share a fixed-size buffer. One process produces information and puts it in the buffer, while the other process consumes information from the buffer. These processes do not take turns accessing the buffer; they both work concurrently. Herein lies the problem. What happens if the producer tries to put an item into a full buffer? What happens if the consumer tries to take an item from an empty buffer?

31 Producer

32 Consumer
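The producer and consumer code on the two slides above appears only as images in this transcript; the following is a hedged reconstruction of the classic bounded-buffer solution using POSIX semaphores and a mutex, with the buffer size and integer items chosen only for illustration.

#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8                     /* assumed buffer size */

static int buffer[BUF_SIZE];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;  /* counting semaphores */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void producer_put(int item) {
    sem_wait(&empty_slots);            /* block if the buffer is full */
    pthread_mutex_lock(&lock);
    buffer[in] = item;
    in = (in + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&full_slots);             /* signal that an item is available */
}

int consumer_get(void) {
    sem_wait(&full_slots);             /* block if the buffer is empty */
    pthread_mutex_lock(&lock);
    int item = buffer[out];
    out = (out + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&empty_slots);            /* signal that a slot is free */
    return item;
}

/* Initialization (call once before starting the threads):
   sem_init(&empty_slots, 0, BUF_SIZE); sem_init(&full_slots, 0, 0); */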

33 Pipe & Socket APIs
It is more convenient to users and to the system if communication is achieved through a well-defined set of standard APIs.
Pipe:
Pipes are implemented as finite-size, FIFO byte-stream buffers maintained by the kernel.
Used by two communicating processes, a pipe serves as a unidirectional communication link: one process writes data into the tail end of the pipe while another process reads from the head end.
A pipe is created by a system call that returns two file descriptors, one for reading and one for writing.
The pipe concept can be extended to include messages.
For unrelated processes, there is a need to uniquely identify a pipe, since pipe descriptors cannot be shared; hence the concept of named pipes.
With a unique path name, named pipes can be shared among disjoint processes across different machines with a common file system.

34 Pipes, continued “created by a pipe system call, which returns two pipe descriptors (similar to a file descriptor), one for reading and the other for writing … using ordinary write and read operations” (C&J) “exists only for the time period when both reader and writer processes are active” (C&J) “the classical producer and consumer IPC problem” (C&J)

35 Interprocess Communication Using Pipes. (Galli, p.63)

36 Unnamed pipes
“Pipe descriptors are shared by related processes” (e.g. parent, child)
Such a pipe is considered unnamed
Cannot be used by unrelated processes - a limitation
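A minimal sketch of an unnamed pipe shared by a parent and its child process follows; the message text is just an example.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: the reader */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    }
    /* parent: the writer */
    close(fd[0]);
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);                    /* EOF for the reader */
    wait(NULL);
    return 0;
}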

37 Named pipes
“For unrelated processes, there is a need to uniquely identify a pipe since pipe descriptors cannot be shared. One solution is to replace the kernel pipe data structure with a special FIFO file. Pipes with a path name are called named pipes.”
“Since named pipes are files, the communicating processes need not exist concurrently”
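A short sketch of the writer side of a named pipe (FIFO) follows; the path /tmp/demo_fifo is an assumption for this example, and an unrelated reader would open the same path.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";    /* assumed FIFO path */
    mkfifo(path, 0600);                     /* no-op if it already exists */

    int fd = open(path, O_WRONLY);          /* blocks until a reader opens */
    if (fd == -1) { perror("open"); return 1; }
    const char *msg = "hello via named pipe";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}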

38 Named pipes, continued “Use of named pipes is limited to a single domain within a common file system.” a limitation …. Therefore, sockets….

39 Sockets “a communication endpoint of a communication link managed by the transport services” “created by making a socket system call that returns a socket descriptor for subsequent network I/O operations, including file-oriented read/write and communication-specific send/receive”

40 Sockets, continued “A socket descriptor is a logical communication endpoint (LCE) that is local to a process; it must be associated with a physical communication endpoint (PCE) for data transport. A physical communication endpoint is specified by a network host address and transport port pair. The association of a LCE with a PCE is done by the bind system call.” (C&J)
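A sketch of associating an LCE with a PCE via bind follows, for a UDP socket; the port number 5000 is an assumption for this example.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* The socket call returns the logical communication endpoint (LCE). */
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock == -1) { perror("socket"); return 1; }

    /* The physical communication endpoint (PCE): host address + port. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* any local interface */
    addr.sin_port = htons(5000);               /* assumed port number */

    /* bind associates the LCE with the PCE for data transport. */
    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) == -1) {
        perror("bind");
        return 1;
    }
    printf("socket bound to port 5000\n");
    close(sock);
    return 0;
}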

41 Types of socket communication
Unix (local) domain: within a single system
Internet domain: world-wide; addresses include a port and an IP address

42 Types, continued
Connection-oriented: uses TCP, “a connection-oriented reliable stream transport protocol” (C&J)
Connectionless: uses UDP, “a connectionless unreliable datagram transport protocol”

43 Connectionless socket communication
Peer process: application-level process - application protocol
LCE: Logical Communication Endpoint - established with the socket call
PCE: Physical Communication Endpoint - (a.k.a. endpoint in the network); a (Transport TSAP/L4SAP, Network NSAP/L3SAP) pair, bound to the LCE with the bind() call
Network: accessed by the sendto()/recvfrom() primitives
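A minimal connectionless exchange using sendto/recvfrom follows; the loopback address and port 5000 are assumptions, and the server side is not shown.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Client side of a connectionless exchange: no connect/accept; the peer's
   address is supplied on every sendto and returned by recvfrom. */
int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);                   /* assumed server port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    const char *req = "ping";
    sendto(sock, req, strlen(req), 0,
           (struct sockaddr *)&server, sizeof server);

    char buf[256];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof from;
    ssize_t n = recvfrom(sock, buf, sizeof buf - 1, 0,
                         (struct sockaddr *)&from, &fromlen);
    if (n >= 0) { buf[n] = '\0'; printf("reply: %s\n", buf); }

    close(sock);
    return 0;
}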

44 Connectionless socket communication
(Figure: two peer processes, each creating a socket (the LCE) and binding it to a PCE; data then flows across the network via sendto/recvfrom.)

45 Connection-oriented socket communication
(Figure: the server calls socket, bind, listen, and accept; the client calls socket and connect; after this rendezvous the client writes a request and reads the reply, while the server reads the request and writes the reply.)
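A sketch of the server side of this sequence follows; port 5000 and the one-request-per-connection structure are assumptions for the example.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);      /* TCP socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                    /* assumed port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 5);                                 /* ready for the rendezvous */

    for (;;) {
        int conn = accept(srv, NULL, NULL);         /* blocks until a client connects */
        if (conn == -1) { perror("accept"); break; }

        char req[256];
        ssize_t n = read(conn, req, sizeof req - 1); /* read the request */
        if (n > 0) {
            req[n] = '\0';
            const char *reply = "ok";
            write(conn, reply, strlen(reply));       /* write the reply */
        }
        close(conn);
    }
    close(srv);
    return 0;
}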

46 CS-551, Lecture 3

47 Group addressing
One-to-many: one sender, multiple receivers (broadcast)
Many-to-one: multiple senders, but only one receiver
Many-to-many: difficult to assure the order of messages received

48 Figure 3.5 One-to-Many Group Addressing. (Galli, p.61)

49 Secure Socket Layer (SSL)
Provides privacy, integrity, and authenticity.
Authentication is done by a third-party certification authority.
Privacy and integrity are maintained by the handshake protocol and cryptography.

50 Asymmetric - Client and Server

51 Asymmetric - Client and Server

52 Secure Socket Layer protocol
Privacy: use symmetric private-key cryptography
Integrity: use a message integrity check
Authenticity: use asymmetric public-key cryptography

53 Secure Socket Layer protocol

54 Secure Socket Layer protocol
Server accepts the connection, selects a cipher suite both can use (if any), and provides its public key in a signed certificate
Client verifies the server's public-key certificate
Client and server exchange public information to establish a shared secret
Client and server initialize the hash key and the session encryption key
Either client or server may terminate the secure connection
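As a rough sketch of this handshake from the client's point of view, the following uses the OpenSSL 1.1+ API (TLS, the modern successor to SSL) as a stand-in; it assumes an already-connected TCP socket and omits certificate verification, which a real client must enable.

#include <openssl/err.h>
#include <openssl/ssl.h>
#include <stdio.h>
#include <string.h>

/* Wrap an already-connected TCP socket (sockfd) in a TLS session. */
int tls_exchange(int sockfd) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (!ctx) return -1;

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sockfd);
    if (SSL_connect(ssl) != 1) {             /* performs the handshake */
        ERR_print_errors_fp(stderr);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return -1;
    }

    const char *msg = "hello over TLS";
    SSL_write(ssl, msg, (int)strlen(msg));   /* encrypted application data */

    char buf[256];
    SSL_read(ssl, buf, sizeof buf);          /* decrypted reply, if any */

    SSL_shutdown(ssl);                       /* either side may close the session */
    SSL_free(ssl);
    SSL_CTX_free(ctx);
    return 0;
}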

55 Group communication and multicast
Reliability of message delivery:
Best effort
Duplicate detection
Omission detection/recovery per receiver
All-or-none (atomic) delivery to all receivers
Orderly delivery:
FIFO (per sender): multicast from a single source is delivered in the order sent
Causal order: delivery in causal order
Total order: messages are delivered in the same order to all members of a group

56

57 (a) Single sender/single group - reliable, ordered delivery (FIFO)
(b) Multiple senders/single group - order between senders' messages?
(c-L) Single sender/overlapping groups - consistency of the order of messages sent to different groups, for nodes in the intersection
(c-R) Multiple single-group senders/overlapping groups - consistency of the order of messages for nodes in the intersection
(d-L) Multiple multi-group senders/independent groups - issues of (b) plus consistency of order in Group 1 and Group 2
(d-R) Multiple multi-group senders/overlapping groups - issues of (d-L) plus consistency of order for nodes in the intersection of Group 1 and Group 2

58 Causal order
(T is the vector timestamp carried by message m from sender i; S is the receiver's vector of messages already delivered.)
Accept message m if T_i = S_i + 1 and T_k ≤ S_k for all k ≠ i.
Delay message m if T_i > S_i + 1 or there exists a k ≠ i such that T_k > S_k.
Reject the message if T_i ≤ S_i.

59 Causal order

60 Total order

61 Multicast or Group Communication
Reliability (best effort vs. reliable)
Delivery order (FIFO, causal order, total order)
Failure of recipient(s) vs. failure of originator
Overlapping groups

62 Delivery in causal order
BIRMAN-SCHIPER-STEPHENSON Protocol
The algorithm is very similar to the vector logical clock. Each message is timestamped with a vector in which each entry is the number of messages the sender has received from that group member.
Accept a message from process i if:
a) you have received all previous messages from i, and
b) you have received all messages that i had seen when it sent the message.
Otherwise, delay accepting the message. Reject any duplicated message.
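A minimal sketch of this acceptance test follows, assuming plain integer vector timestamps; the names msg_ts (the vector carried by the message), seen (the receiver's count of delivered messages per member), sender, and GROUP_SIZE are illustrative.

#define GROUP_SIZE 4    /* assumed number of group members */

typedef enum { DELIVER, DELAY, REJECT } verdict_t;

/* Decide whether a causally ordered multicast message can be delivered.
   msg_ts[k] = number of messages from member k that the sender had seen
               (for k == sender, the sequence number of this message).
   seen[k]   = number of messages from member k already delivered locally. */
verdict_t causal_check(const int msg_ts[GROUP_SIZE],
                       const int seen[GROUP_SIZE], int sender) {
    if (msg_ts[sender] <= seen[sender])
        return REJECT;                     /* duplicate or old message */
    if (msg_ts[sender] != seen[sender] + 1)
        return DELAY;                      /* earlier messages from sender missing */
    for (int k = 0; k < GROUP_SIZE; k++) {
        if (k != sender && msg_ts[k] > seen[k])
            return DELAY;                  /* sender saw messages we have not */
    }
    return DELIVER;                        /* all causal predecessors delivered */
}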

63 Delivery in causal order
SCHIPER-EGGLI-SANDOZ Protocol
Does not require broadcasting by processes.
Data structures:
1. Each process P has a vector V_P of size N-1. Each element of V_P is an ordered pair (P', t), where P' is the id of the destination process of a message and t is a timestamp.
2. t_M is the logical time at the sending of message M.
3. t_P is the current logical time at process P.

64 Delivery in causal order
SCHIPER-EGGLI-SANDOZ Protocol - Actions:
1. Sending a message M from process P_1 to process P_2:
Send M with timestamp t_M, along with V_{P_1}, to P_2.
Insert the pair (P_2, t_M) into V_{P_1}. Any future message carrying the pair (P_2, t_M) from any process cannot be delivered to P_2 until t_M < t_{P_2}.
2. Arrival of a message M at process P_2:
If V_M does not contain any pair (P_2, t), then deliver M.
Otherwise, if t > t_{P_2}, buffer M; else deliver M.
On delivery, update V_{P_2} with V_M and update P_2's clock.

65 Delivery in total order
Atomic multicast along with total-order delivery is provided by a two-phase, total-order multicast.
Originator: send the message and collect acknowledgements with timestamps. Send the commit message carrying the highest logical ack timestamp (the commit stamp).
Recipient: send an acknowledgement with the local logical clock value as the timestamp (the local ack stamp). Do not deliver a message with commit stamp t until the commit messages for all messages with local ack stamp < t have arrived. Deliver messages in the order of their commit stamps.
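A rough sketch of the originator's side of this two-phase exchange follows; the helpers multicast_to_group and await_ack, the group size, and the message representation are all hypothetical names introduced only for illustration.

#define GROUP_SIZE 4                          /* assumed group size */

/* Hypothetical helpers provided by the messaging layer. */
extern void multicast_to_group(const char *kind, const void *payload);
extern long await_ack(int member);            /* returns that member's local ack stamp */

/* Two-phase total-order multicast, originator side.
   Phase 1: multicast the message and gather logically timestamped ACKs.
   Phase 2: multicast a commit carrying the highest ack timestamp. */
void totally_ordered_send(const void *msg) {
    multicast_to_group("DATA", msg);          /* phase 1: send the message */

    long commit_stamp = 0;
    for (int m = 0; m < GROUP_SIZE; m++) {
        long ack = await_ack(m);              /* each recipient's logical ack stamp */
        if (ack > commit_stamp)
            commit_stamp = ack;               /* keep the highest ack timestamp */
    }

    /* phase 2: recipients deliver messages in commit-stamp order */
    multicast_to_group("COMMIT", &commit_stamp);
}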

