© Janice Regan, CMPT 300, May 2007. CMPT 300 Introduction to Operating Systems: The producer consumer problem, monitors and messaging.

The producer consumer problem
- Two processes share a limited resource, for example an input or output buffer, that may contain one or more storage locations
- Basic idea:
  - a producer that creates data (puts it into the buffer)
  - a consumer that uses that data (takes it out of the buffer)
- Allow the two to collaborate using one of our mutual exclusion methods

Producer consumer: sleep/wakeup
- The consumer puts itself to sleep when there is no data to consume (count = 0)
- The consumer does not extract the data until after it is woken up by the producer and told there is now data to read
- The consumer does not decrement the count until after it has taken a data value from the buffer
- The consumer is woken up by the producer after the producer puts the first piece of data into the buffer (count = 1)

Producer consumer: sleep/wakeup
- The producer puts itself to sleep when the buffer is full (count = N)
- The producer has already produced the next data value and is waiting to place it into the buffer when it goes to sleep
- The producer is woken up by the consumer as soon as there is a free location in the buffer (count = N-1)
- The producer does not increment the count until after it has placed the data in the buffer

Producer consumer: sleep/wakeup

int count = 0;   // # items in buffer

void producer() {
    int item;
    while (true) {
        produce(&item);
        if (count == N) { sleep(); }
        putInBuf(item);
        count++;
        if (count == 1) { wakeup(consumer); }
    }
}

void consumer() {
    int item;
    while (true) {
        if (count == 0) { sleep(); }
        removeFromBuf(&item);
        count--;
        if (count == N-1) { wakeup(producer); }
        consume(item);
    }
}

Things you may not notice
- The producer goes to sleep if the buffer is full (before adding the N+1st item)
- The consumer goes to sleep only if there are no items in the buffer
- The consumer takes items from the buffer only when there are items to be taken
- When wakeup() is called and the target process is not asleep, nothing is done (the signal is lost)

Problem
- If we try to be efficient and only send a wakeup when the buffer has just stopped being empty (or full), we can have deadlock due to a race condition
- Begin with an empty buffer
- The consumer reads count and finds that it is 0
- The producer is loaded by the scheduler
- The producer inserts an item in the buffer
- The producer increments count
- The producer wakes up the consumer (the signal is lost since the consumer is not yet asleep)
- The consumer resumes and goes to sleep
- When the producer fills the buffer, it goes to sleep too

Producer consumer: sleep/wakeup

int count = 0;   // # items in buffer

void producer() {
    int item;
    while (true) {
        produce(&item);
        if (count == N) { sleep(); }
        putInBuf(item);
        count++;
        if (count == 1) { wakeup(consumer); }
    }
}

void consumer() {
    int item;
    while (true) {
        if (count == 0) { sleep(); }
        removeFromBuf(&item);
        count--;
        if (count == N-1) { wakeup(producer); }
        consume(item);
    }
}

Producer consumer: sleep/wakeup

int count = 0;   // # items in buffer

void producer() {
    int item;
    while (true) {
        produce(&item);
        if (count == N) { sleep(); }
        putInBuf(item);
        count++;
        wakeup(consumer);
    }
}

void consumer() {
    int item;
    while (true) {
        if (count == 0) { sleep(); }
        removeFromBuf(&item);
        count--;
        wakeup(producer);
        consume(item);
    }
}

One approach: always signal when data is put into or taken out of the buffer. This will usually fix the problem but is inefficient.

Producer consumer: semaphores

semaphore count = 0;     // # items in buffer
semaphore empty = N;     // # empty slots in buffer
semaphore protect = 1;   // binary semaphore protecting the critical sections

void producer() {
    int item;
    while (true) {
        produce(&item);
        semWait(&empty);
        mutexWait(&protect);
        putInBuf(item);
        mutexSignal(&protect);
        semSignal(&count);
    }
}

void consumer() {
    int item;
    while (true) {
        semWait(&count);
        mutexWait(&protect);
        removeFromBuf(&item);
        mutexSignal(&protect);
        semSignal(&empty);
        consume(item);
    }
}

Things to notice
- One binary semaphore (protect) is used to protect the I/O operation of adding or removing an item from the shared buffer from interruption
- One counting semaphore (count) is used to determine how many items are in the shared buffer
- One counting semaphore (empty) is used to determine how many empty slots are available in the shared buffer
- The counting semaphores are used for synchronization (how many items in the buffer, how many spaces in the buffer)
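
For readers who want to run the scheme, below is a minimal sketch of the same three-semaphore design in C using POSIX threads and semaphores. The choice of API (a pthread mutex plus sem_t counting semaphores), the buffer size of 8, the fixed number of items, and all names are assumptions made here for illustration, not part of the original slides; error checking is omitted for brevity.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                      /* buffer capacity (arbitrary choice) */

static int buffer[N];
static int in = 0, out = 0;      /* circular buffer indices */

static sem_t count;              /* # items in buffer, starts at 0 */
static sem_t empty;              /* # empty slots, starts at N */
static pthread_mutex_t protect = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);                    /* block if there is no free slot */
        pthread_mutex_lock(&protect);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&protect);
        sem_post(&count);                    /* one more item available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&count);                    /* block if there is nothing to consume */
        pthread_mutex_lock(&protect);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&protect);
        sem_post(&empty);                    /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&count, 0, 0);
    sem_init(&empty, 0, N);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}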

Producer-consumer
- How did we arrive at this working solution for the producer consumer problem using semaphores?
- Let's develop it step by step so we understand it better
- What are we trying to do?
  - We want to fix the problem of the lost wakeup signal in the sleep/wakeup approach
  - We want to protect the shared memory
  - We want to keep track of how much of the shared buffer is in use

Start simple
- How do we protect a shared variable? Use a binary semaphore to implement mutual exclusion and protect the critical regions where it is modified or accessed
- In our example the critical regions are
  - where the producer adds data to the buffer
  - where the consumer removes data from the same circular buffer

Mutex wait operation: go to sleep
- The mutexWait operation
  - checks the semaphore value
  - if the value is 1, the value is changed to 0 and the process is allowed to run its critical region
  - if the value is 0, the process is blocked (put to sleep) and placed in the blocked queue

Mutex signal operation: wakeup
- The mutexSignal operation
  - checks whether any processes are presently blocked (in the blocked queue)
  - if there are processes in the blocked queue, unblocks the first process in the queue
  - if there are no processes in the blocked queue, sets the semaphore value to 1
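
As an illustration only, the mutexWait/mutexSignal behaviour described on the last two slides can be sketched in C. The version below borrows a pthread mutex and condition variable to stand in for the blocked queue and ignores details such as spurious wakeups; it is a conceptual sketch of the semantics, not how an operating system actually implements a binary semaphore.

#include <pthread.h>

typedef struct {
    int value;                 /* 1 = free, 0 = taken; initialize to 1 */
    int waiting;               /* number of processes in the blocked queue */
    pthread_mutex_t lock;      /* protects value and waiting */
    pthread_cond_t queue;      /* stands in for the blocked queue */
} binary_semaphore;

void mutexWait(binary_semaphore *s) {
    pthread_mutex_lock(&s->lock);
    if (s->value == 1) {
        s->value = 0;                          /* take the semaphore and enter the critical region */
    } else {
        s->waiting++;                          /* join the blocked queue and go to sleep */
        pthread_cond_wait(&s->queue, &s->lock);
        s->waiting--;
    }
    pthread_mutex_unlock(&s->lock);
}

void mutexSignal(binary_semaphore *s) {
    pthread_mutex_lock(&s->lock);
    if (s->waiting > 0)
        pthread_cond_signal(&s->queue);        /* unblock the first waiting process */
    else
        s->value = 1;                          /* nobody is waiting: mark the semaphore free */
    pthread_mutex_unlock(&s->lock);
}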

First step of solution

semaphore protect = 1;   // binary semaphore protecting the critical sections

void producer() {
    int item;
    while (true) {
        produce(&item);
        mutexWait(&protect);
        putInBuf(item);
        mutexSignal(&protect);
    }
}

void consumer() {
    int item;
    while (true) {
        mutexWait(&protect);
        removeFromBuf(&item);
        mutexSignal(&protect);
        consume(item);
    }
}

Next step of solution
- This is a good start, but we are only protecting the buffer from simultaneous access
- We have partially solved the problem of needing a buffer of infinite length by using a circular buffer of finite size
- We are not checking whether the buffer is full: the producer can continue adding data when the buffer is full and, in the process, overwrite data the consumer has not yet accessed
- We are not checking whether the buffer is empty: the consumer can access locations that have not had data inserted yet and, in the process, can re-access old data or access undefined data
- What about synchronization?
- Let's add a counting semaphore to control the consumer's access to the shared buffer

Semaphore wait operation
- The semWait operation
  - decrements the semaphore value
  - if the value is >= 0, the process is allowed to run its critical region
  - if the value is negative, the process is blocked (put to sleep) and placed in the blocked queue

Semaphore signal operation
- The semSignal operation
  - increments the semaphore value
  - if the semaphore value is not positive (<= 0), the first process in the blocked queue is woken up and placed in the ready queue
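
In the same illustrative spirit, here is a sketch of the semWait/semSignal semantics just described, where a negative value records how many processes are blocked. It again borrows pthread primitives for the blocking and ignores spurious wakeups; it is not the slides' own implementation.

#include <pthread.h>

typedef struct {
    int value;                 /* may go negative: -k means k processes are blocked */
    pthread_mutex_t lock;      /* protects value */
    pthread_cond_t queue;      /* stands in for the blocked queue */
} counting_semaphore;

void semWait(counting_semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->value--;
    if (s->value < 0)
        pthread_cond_wait(&s->queue, &s->lock);   /* block until a semSignal wakes us */
    pthread_mutex_unlock(&s->lock);
}

void semSignal(counting_semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    if (s->value <= 0)
        pthread_cond_signal(&s->queue);           /* wake the first blocked process */
    pthread_mutex_unlock(&s->lock);
}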

Circular buffer
- Circular buffer with K locations, starting at location zero
- Entry K+1 will be placed in location zero
- The next step makes sure data locations that do not contain data cannot have that non-existent data consumed
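
A small sketch of the circular-buffer indexing described above: K slots, with the index after slot K-1 wrapping back to slot 0. The names putInBuf/removeFromBuf match the earlier pseudocode, but this particular implementation (and K = 8) is an assumption made here. Note that on its own it has no full/empty check; that is exactly what the counting semaphores add.

#define K 8

int buf[K];
int in = 0;     // next free slot (where the producer writes)
int out = 0;    // oldest item (where the consumer reads)

void putInBuf(int item) {
    buf[in] = item;
    in = (in + 1) % K;      // entry K+1 wraps around to location zero
}

void removeFromBuf(int *item) {
    *item = buf[out];
    out = (out + 1) % K;
}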

Adding a counting semaphore
- When the producer runs, it increments the counting semaphore by calling semSignal (one more available resource, another data value added)
- The semaphore is incremented after the data value is put in the buffer; if it were incremented before, the consumer could try to access the new data before it has been inserted into the buffer
- When the consumer runs, it decrements the counting semaphore (one less available resource, another data value removed)
- The consumer must execute semWait to see whether there is data to consume before trying to consume it

Next step of solution

semaphore protect = 1;   // binary semaphore protecting the critical sections
semaphore count = 0;     // # items in buffer

void producer() {
    int item;
    while (true) {
        produce(&item);
        mutexWait(&protect);
        putInBuf(item);
        mutexSignal(&protect);
        semSignal(&count);      // count: 0 -> 1, waking a blocked consumer
    }
}

void consumer() {
    int item;
    while (true) {
        semWait(&count);        // count: 0 -> -1 if the buffer is empty, so the consumer blocks
        mutexWait(&protect);
        removeFromBuf(&item);
        mutexSignal(&protect);
        consume(item);
    }
}

The potential problem + solution
- The consumer must execute semWait to see whether there is data to consume before trying to consume it
- This means the semaphore is decremented before the data has been removed from the buffer: a potential problem
- If the consumer is interrupted after the semWait() call and the producer is allowed to run, the producer can overwrite the data before the consumer has accessed it
- We need a second counting semaphore to keep track of the number of empty locations

Circular buffer
- Circular buffer with K locations, starting at location zero
- Entry K+1 will be placed in location zero
- The next step makes sure data locations that contain data cannot be overwritten before the data is consumed
- Note that element 0 of the circular array is the same storage location that element K would occupy (the indices wrap around)

Problem: Next step of solution

semaphore protect = 1;   // binary semaphore protecting the critical sections
semaphore count = 0;     // # items in buffer

void producer() {
    int item;
    while (true) {
        produce(&item);
        mutexWait(&protect);
        putInBuf(item);          // nothing stops the producer when the buffer is full (count = N)
        mutexSignal(&protect);
        semSignal(&count);
    }
}

void consumer() {
    int item;
    while (true) {
        semWait(&count);         // suppose count = N here, so the consumer continues (count = N-1)
        mutexWait(&protect);
        removeFromBuf(&item);
        mutexSignal(&protect);
        consume(item);
    }
}

If the consumer is interrupted just after its semWait() and the producer keeps running, the producer writes new data into location N-1. The consumer then uses that new data, not the data that was available when semWait() was called.

Producer consumer: semaphores

semaphore count = 0;     // # items in buffer
semaphore empty = N;     // # empty slots in buffer
semaphore protect = 1;   // binary semaphore protecting the critical sections

void producer() {
    int item;
    while (true) {
        produce(&item);
        semWait(&empty);
        mutexWait(&protect);
        putInBuf(item);
        mutexSignal(&protect);
        semSignal(&count);
    }
}

void consumer() {
    int item;
    while (true) {
        semWait(&count);
        mutexWait(&protect);
        removeFromBuf(&item);
        mutexSignal(&protect);
        semSignal(&empty);
        consume(item);
    }
}

Monitors
- We can see that working with semaphores and mutexes may not be simple
- In fact, debugging code that uses semaphores and mutexes can be very difficult, as errors depend on particular orders of execution that may happen only rarely
- Some programming languages (like Java) provide tools called monitors that can be used to simplify the management of resources needing synchronization or mutual exclusion; most (like C) do not
- Monitors add complexity to the compiler (additional packages). The compiler must now be aware of
  - all mutual exclusion rules (it implements mutual exclusion)
  - OS-dependent system calls (it implements synchronization)

Monitor
- A monitor is a software module
  - procedures, an initialization sequence, and data (resources)
- The data (resources) are accessible ONLY by calling the monitor's procedures; they are not directly available to any outside application
- A process enters the monitor and accesses the data (resource) by calling one of the monitor's procedures
- Only one process may be executing in the monitor at a time
- Any other process that invokes the monitor is blocked until the monitor becomes available

Monitors
- Identify a critical region for one or more resources
- Create a Java monitor-type class; the methods in the class will execute the critical regions of your processes
- Only one process in a monitor class may execute at any given time; if it calls wait, another process may enter and execute
- When a process is done, it signals. After signaling, either:
  - it exits the monitor immediately (Brinch Hansen); this requires that the signal call be the last statement in the procedure, or
  - it is suspended (Hoare); a process already in the monitor may be signaled (resumed), or a new process may be admitted to the monitor

Monitor Signals
- The monitor will include procedures to provide synchronization signals
  - Wait(condition variable): immediately block (suspend execution in the monitor)
  - Signal(condition variable): resume execution of a process blocked after a Wait() call on the same condition; if there are no such processes, do nothing

Monitor Signals
- A monitor uses condition variables, which are operated upon by its procedures
- Condition variables are not counters
- If a process in a monitor signals and no task is waiting on that condition variable, the signal is lost
- The Wait must come before the Signal
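
C has no monitor construct, but POSIX condition variables behave like the condition variables described here: a signal delivered while no one is waiting is simply lost, so some separate state (a flag or a count) must record what has happened. A hedged sketch, with all names chosen here for illustration:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the monitor's mutual exclusion */
static pthread_cond_t  notempty     = PTHREAD_COND_INITIALIZER;   /* condition variable */
static bool have_data = false;

void wait_for_data(void) {
    pthread_mutex_lock(&monitor_lock);
    while (!have_data)                           /* must re-check: the condition variable holds no count */
        pthread_cond_wait(&notempty, &monitor_lock);
    have_data = false;
    pthread_mutex_unlock(&monitor_lock);
}

void announce_data(void) {
    pthread_mutex_lock(&monitor_lock);
    have_data = true;
    pthread_cond_signal(&notempty);              /* lost if no thread is waiting; the flag preserves the state */
    pthread_mutex_unlock(&monitor_lock);
}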

Monitors: Implementation
- You write a monitor (a special module) using the Wait(), Signal(), and special condition variables provided by the compiler
- It can still be difficult to debug (though perhaps a little easier)
- Your monitor uses the compiler's ability to implement mutual exclusion based on a few simple procedure calls

Producer consumer: monitor (after Stallings 2005; language: Mesa)

monitor boundedbuffer;
char buffer[N];
int nextin, nextout, count;
cond notfull, notempty;

void append(char x) {
    if (count == N) { cwait(notfull); }
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    count++;
    csignal(notempty);
}

void take(char x) {
    if (count == 0) { cwait(notempty); }
    x = buffer[nextout];
    nextout = (nextout + 1) % N;
    count--;
    csignal(notfull);
}

Producer-consumer: using monitor (after Stallings 2005; language: Mesa)

void producer() {
    char x;
    while (true) {
        produce(x);
        append(x);
    }
}

void consumer() {
    char x;
    while (true) {
        take(x);
        consume(x);
    }
}
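
For comparison, here is a hedged sketch of how the bounded-buffer monitor above might be emulated in C with a pthread mutex (standing in for the monitor's one-process-at-a-time rule) and two condition variables. The while loops around the waits reflect Mesa-style signalling, where a woken process must re-check its condition; N = 8 and the other details are assumptions made here.

#include <pthread.h>

#define N 8

static char buffer[N];
static int nextin = 0, nextout = 0, count = 0;
static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;   /* only one thread "in the monitor" at a time */
static pthread_cond_t notfull  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t notempty = PTHREAD_COND_INITIALIZER;

void append(char x) {
    pthread_mutex_lock(&monitor);
    while (count == N)                        /* wait until there is room */
        pthread_cond_wait(&notfull, &monitor);
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    count++;
    pthread_cond_signal(&notempty);           /* a consumer may now proceed */
    pthread_mutex_unlock(&monitor);
}

char take(void) {
    pthread_mutex_lock(&monitor);
    while (count == 0)                        /* wait until there is data */
        pthread_cond_wait(&notempty, &monitor);
    char x = buffer[nextout];
    nextout = (nextout + 1) % N;
    count--;
    pthread_cond_signal(&notfull);            /* a producer may now proceed */
    pthread_mutex_unlock(&monitor);
    return x;
}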

Message Passing
- Messaging works with distributed systems as well as with shared-memory multiprocessors
- Messaging enables synchronization (a message cannot be received before it is sent), and mutual exclusion can be enforced
- Messaging also provides communication, an alternate way to share information
- It uses system calls rather than semaphores and shared variables (the source may be specified in the call for cooperating processes, or returned by the call, for instance for a server)
  - send(destination, &message);
  - receive(source, &message);
- This is a minimum set of system calls

Types of system calls
- Send and receive system calls may be either blocking or non-blocking
- Blocking system calls either
  - block and wait for the next message to arrive if no messages are waiting in the queue, or
  - immediately process the first message waiting in the queue
- Non-blocking system calls
  - immediately send/receive a message (they never wait)

Combinations: send + receive
- Blocking send, blocking receive
  - sometimes called synchronous communication or a rendezvous
- Non-blocking send, blocking receive
  - most common, arguably the most useful
  - the sender immediately sends the message when a message is ready to send
  - the receiver blocks if there is no message waiting to be received
  - allows messages to be sent as quickly as possible
  - makes processes needing input from messages wait for those messages
- Non-blocking send, non-blocking receive

Lost Messages
- For cases with a non-blocking send, the receiver's message queue will have a fixed finite length; any number of messages may be sent, and if the queue is full when a message arrives it may be dropped
- Messages may also be lost between the sender and the receiver for several reasons, including errors during transmission (attenuation, interference)
  - Attenuation: power decreases with distance
  - Interference: signal contaminated by external noise
- How do we deal with lost messages?

Communication Protocol
- A set of rules that defines how to manage messages for a particular combination of blocking and non-blocking sends and receives is called a communications protocol
- A communications protocol defines how messages are constructed, sent, received, and processed, and how lost messages are dealt with
- There are many different communications protocols used for different applications
- Let's consider some examples

Avoiding lost messages: 1st way
- For a system using blocking receive, non-blocking send
- When a message is received by and/or queued for a blocking receive, an acknowledgment (ack) is sent to the sender of the message
- If the message is lost because it does not fit into the queue, no ack is sent
- The sender will know the message has arrived when it receives the ack
- If it does not receive the ack within a "reasonable" time, it will assume the message was lost and retransmit it
- The sender will keep trying until the message is received and queued and the resulting ack is received

Avoiding lost messages: 1st way, a potential problem
- What happens if the ack is lost?
- The sender cannot tell whether the message was lost or the ack was lost
- The sender will retransmit the message
- Therefore, the receiver may receive multiple copies of the same message
- Most protocols number messages so that these duplicates can be detected and discarded

Avoiding lost messages: 1st way, another potential problem
- What happens if the ack is late?
- The message is retransmitted before the ack is received
- The sender then receives more than one ack for the message
- Number the acks so they can be paired with the messages that generated them; duplicate acks can then be recognized and ignored
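
A rough sketch of the retransmit-until-acknowledged idea with numbered messages and acks. send_packet(), try_recv_ack(), TIMEOUT_MS, and MAX_TRIES are hypothetical helpers invented for this illustration; they do not come from the slides or from any particular library.

#include <stdbool.h>

#define MAX_TRIES  5
#define TIMEOUT_MS 200

/* hypothetical transport helpers, assumed to be provided elsewhere */
void send_packet(int seq, const char *payload);
bool try_recv_ack(int *acked_seq, int timeout_ms);

/* returns true once an ack matching this message's number arrives */
bool send_reliably(int seq, const char *payload) {
    for (int attempt = 0; attempt < MAX_TRIES; attempt++) {
        send_packet(seq, payload);                 // the message carries its sequence number
        int acked_seq;
        if (try_recv_ack(&acked_seq, TIMEOUT_MS)) {
            if (acked_seq == seq)
                return true;                       // this ack pairs with this message
            // otherwise it is a duplicate or late ack for an earlier message: ignore it
        }
        // timeout: assume the message (or its ack) was lost, and retransmit
    }
    return false;                                  // give up after MAX_TRIES attempts
}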

Avoiding lost messages: 2nd way
- Blocking receive, blocking send, N = 1
- Processes must run synchronously
- Initially there is only one empty message in the send queue of each process (A and B)
- When process A sends a message, process A cannot send more messages until an empty message is added to its send queue
- Process B receives the message, empties it, and places the empty message into its own send queue

Avoiding lost messages: 2nd way
- Blocking receive, blocking send, N = 1
- Process B sends a message to process A by filling one of the empty messages in B's send queue
  - before B sends, B's send queue contains its original empty message plus the empty message left over from the message it received
- Process A receives the message from process B, removes the information, and places the empty message in process A's send queue
- We start with one empty message in each send queue because we do not know which process will need to send a message first

Avoiding lost messages: 2nd way
- Blocking receive, blocking send
- The system is initialized with N empty messages in each process's send queue
- Now process A can send up to N messages before receiving any messages back from process B; after process A has sent N messages (and received none), process A will block (not allow any more sends)
- Similarly, process B can send up to N messages without receiving any messages from process A

Avoiding lost messages: 2nd way
- Blocking receive, blocking send
- Use mailboxes: assume there is a given finite number of messages, N, for each process's mailbox in the system; the system is initialized with N empty messages in each process's mailbox
- The mailboxes for send and receive are N long; think of them as mailboxes containing N letters or messages
- Think in terms of one mailbox for each process

Avoiding lost messages: 2nd way
- Begin with N empty messages in the mailbox of the sending process
- To send a message, take one of the empty messages in the sending process's mailbox, fill it with information, then send it to the receiving process
- The message arrives at the receiving process's mailbox
- A later or pending receive call made by the receiving process will process the message, removing the information
- The emptied message will be sent back to the sending process's mailbox by the receiving process

Producer consumer: messaging

int item;
int i;
msg M;

void producer() {
    while (true) {
        produce(&item);
        receive(&M);               // wait for an empty message from the consumer
        buildMessage(&M, item);
        send(consumer, &M);
    }
}

Producer consumer: messaging

void consumer() {
    for (i = 0; i < N; i++) {      // prime the system with N empty messages
        send(producer, &M);
    }
    while (true) {
        receive(&M);               // wait for a full message from the producer
        extractItem(&M, &item);
        send(producer, &M);        // return the emptied message
        consume(item);
    }
}
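
The send/receive primitives in these slides map naturally onto POSIX message queues, where mq_send blocks when the queue is full and mq_receive blocks when it is empty. The sketch below only demonstrates that blocking behaviour, in a single process for brevity; it does not reproduce the empty-message scheme above, and the queue name, sizes, and omission of error handling are choices made here. On Linux, compile with -lrt.

#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>

#define QUEUE_NAME "/cmpt300_demo"     /* arbitrary queue name */

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = sizeof(int) };
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0600, &attr);

    /* producer side: mq_send blocks once the queue holds mq_maxmsg messages */
    for (int item = 0; item < 5; item++)
        mq_send(q, (const char *)&item, sizeof item, 0);

    /* consumer side: mq_receive blocks while the queue is empty */
    for (int i = 0; i < 5; i++) {
        int item;
        unsigned prio;
        mq_receive(q, (char *)&item, sizeof item, &prio);
        printf("consumed %d\n", item);
    }

    mq_close(q);
    mq_unlink(QUEUE_NAME);
    return 0;
}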

Addressing messages
- Direct addressing
  - a specific address is given in the send
  - the receive may have a source address specified in the call if the source of expected messages is known
  - the receive may instead return the source address to the destination process after processing the message (multiple sources, unknown at reception time)
- Indirect addressing
  - messages are addressed to a data structure, a queue (mailbox), rather than directly to the process
  - indirect addressing allows sharing of mailboxes between processes; the association of processes to mailboxes can be static or dynamic

Association: indirect addressing
- One to one: a private link between two processes
- Many to one: client-server (many client processes sending to one server process S)
(the slide's diagram of processes linked through mailboxes is not reproduced here)

Association: indirect addressing
- Many to many: multiple servers/clients
- One to many: broadcast information
(the slide's diagram of processes linked through mailboxes is not reproduced here)

Other difficulties protocols deal with
- Authentication: how do we tell that the message actually came from the sender it claims to be from and has not been changed in transmission?
- Security: how do we tell if the message has been tampered with?
- Error detection: how do we tell if the message is as it was sent (has not been damaged in transmission, other than by tampering)?