Monitors Chapter 7.

Drawbacks of Semaphores The semaphore is a low-level primitive because it is unstructured. If we were to build a large system using semaphores alone, the responsibility for the correct use of the semaphores would be diffused among all the implementers of the system.

Drawbacks of Semaphores (cont.) If one of them forgets to call signal(S) after a critical section, the program can deadlock and the cause of the failure will be difficult to isolate.

Monitors Monitors provide a structured concurrent programming primitive that concentrates the responsibility for correctness into modules. Monitors are a generalization of the kernel or supervisor found in operating systems, where critical sections such as the allocation of memory are centralized in a privileged program. Application programs request services that are performed by the kernel. Kernels run in a hardware mode that ensures that they cannot be interfered with by application programs.

Monitors and OOP Monitors have become an extremely important synchronization mechanism because they are a natural generalization of the object concept of object-oriented programming, which encapsulates data and operation declarations within a class. At runtime, objects of this class can be allocated, and the operations of the class invoked on the fields of the object.

Monitors vs. OOP The monitor adds the requirement that only one process can execute an operation on an object at any one time. Furthermore, while the fields of an object may be declared either public (directly accessible outside the class) or private (accessible only by operations declared within the class), the fields of a monitor are all private. Together with the requirement that only one process at a time can execute an operation, this ensures that the fields of a monitor are accessed consistently.
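
As a rough illustration of this point (a sketch of ours, not taken from the slides), a Java class whose fields are all private and whose operations are all synchronized behaves like a monitor: the object's intrinsic lock ensures that at most one thread executes any operation at a time. The class and method names below are made up for the example.

class CounterMonitor {
    // All state is private, so it can be touched only through the operations below.
    private int count = 0;

    // synchronized takes the object's lock on entry and releases it on exit,
    // so at most one thread is "inside" the monitor at any time.
    synchronized void increment() {
        count = count + 1;
    }

    synchronized int get() {
        return count;
    }
}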

Atomicity of monitor operations

Critical Section Control

Monitor vs. Semaphore The statements of the critical section are encapsulated in the monitor rather than replicated in each process. The synchronization is implicit and does not require the programmer to correctly place wait and signal statements.

How a Monitor Works The monitor is a static entity, not a dynamic process. It is just a set of operations that "sit there" waiting for a process to invoke one of them. There is an implicit lock on the "door" to the monitor, ensuring that only one process is "inside" the monitor at any time. A process must open the lock to enter the monitor; the lock is then closed and remains closed until the process leaves the monitor.

Starvation In Monitors As with semaphores, if there are several processes attempting to enter a monitor, only one of them will succeed. There is no explicit queue associated with the monitor entry, so starvation is possible. In our examples, we will declare single monitors, but in real programming languages, a monitor would be declared as a type or a class and you can allocate as many objects of the type as you need.

Synchronization In Monitors. In one approach, the required condition is named by an explicit condition variable (sometimes called an event). Ordinary Boolean expressions are used to test the condition, blocking on the condition variable if necessary; a separate statement is used to unblock a process when the condition becomes true. The alternate approach is to block directly on the expression and let the implementation implicitly unblock a process when the expression is true.

Semaphore Simulated With a Monitor

Monitor Implementation The monitor implementation follows directly from the definition of semaphores. The integer component of the semaphore is stored in the variable s. The condition variable cond implements the queue of blocked processes. By convention, condition variables are named with the condition you want to be true: waitC(notZero) is read as "wait for notZero to be true," and signalC(notZero) is read as "signal that notZero is true." The names waitC and signalC are intended to reduce the possibility of confusion with the similarly named semaphore statements.

Monitor Implementation If the value of s is zero, the process executing Sem.wait—the simulated semaphore operation—executes the monitor statement waitC(notZero).  The process is said to be blocked on the condition.
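
The slide expresses this simulation in the textbook's monitor notation, with the condition variable notZero and the operations waitC and signalC. A rough Java equivalent (a sketch of ours, using the object's implicit lock and wait/notifyAll in place of a named condition variable; the class and method names are made up) might look like this:

class SemaphoreMonitor {
    private int s;                        // the integer component of the semaphore

    SemaphoreMonitor(int initial) {
        s = initial;
    }

    synchronized void semWait() throws InterruptedException {
        while (s == 0) {                  // corresponds to waitC(notZero)
            wait();                       // block until notZero is signalled
        }
        s = s - 1;
    }

    synchronized void semSignal() {
        s = s + 1;
        notifyAll();                      // corresponds to signalC(notZero)
    }
}

Unlike the FIFO queue attached to a textbook condition variable, Java's wait set gives no fairness guarantee, so this sketch does not preserve the first-come-first-served order described below.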

Operations on Condition Variables With each condition variable is associated a FIFO queue of blocked processes. Here is the definition of the execution of the atomic operations on condition variables by an arbitrary process p. waitC(cond): process p is blocked on the queue cond and leaves the monitor, releasing the lock that ensures mutual exclusion in accessing the monitor. signalC(cond): if the queue cond is nonempty, the process at the head of the queue is unblocked.

Operations: Semaphore vs. Monitor

The Producer–Consumer Problem

The Immediate Resumption Requirement

The Problem of the Readers and Writers Readers: processes that are required to exclude writers but not other readers. Writers: processes that are required to exclude both readers and other writers.

Readers and writers with a monitor
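
The original slide shows the textbook's monitor code for this problem; the Java sketch below (ours, with made-up names) captures the same idea: readers may overlap, while a writer needs exclusive access. This simple version can starve writers.

class RWMonitor {
    private int readers = 0;              // readers currently inside
    private boolean writing = false;      // true while a writer is inside

    synchronized void startRead() throws InterruptedException {
        while (writing) wait();
        readers = readers + 1;
    }

    synchronized void endRead() {
        readers = readers - 1;
        if (readers == 0) notifyAll();    // a waiting writer may now proceed
    }

    synchronized void startWrite() throws InterruptedException {
        while (writing || readers > 0) wait();
        writing = true;
    }

    synchronized void endWrite() {
        writing = false;
        notifyAll();                      // wake waiting readers and writers
    }
}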

Dining Philosophers (outline)

Dining Philosophers

The correctness properties are: a philosopher eats only if she has two forks; mutual exclusion (no two philosophers may hold the same fork simultaneously); freedom from deadlock; freedom from starvation (pun intended); and efficient behavior in the absence of contention.

Dining philosophers (first attempt)

Dining philosophers (second attempt)

Dining philosophers (third attempt)

A Monitor Solution for the Dining Philosophers
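
The original slide gives the textbook's monitor solution, in which a counter per philosopher records how many of her forks are currently free and she may eat only when both are. The Java sketch below follows the same scheme (the names are ours, and notifyAll stands in for a per-philosopher condition variable).

class ForkMonitor {
    private final int n;
    private final int[] forks;            // forks[i] == 2 means philosopher i may eat

    ForkMonitor(int n) {
        this.n = n;
        forks = new int[n];
        java.util.Arrays.fill(forks, 2);  // initially every philosopher sees two free forks
    }

    synchronized void takeForks(int i) throws InterruptedException {
        while (forks[i] != 2) wait();     // wait until both of i's forks are free
        forks[(i + 1) % n]--;             // each fork is shared with a neighbour,
        forks[(i + n - 1) % n]--;         // so taking them reduces the neighbours' counts
    }

    synchronized void releaseForks(int i) {
        forks[(i + 1) % n]++;
        forks[(i + n - 1) % n]++;
        notifyAll();                      // a neighbour may now have both forks
    }
}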

Monitors in Java. The End.

Producer Consumer
class PCMonitor {
    final int N = 5;                      // buffer capacity
    int Oldest = 0, Newest = 0;           // index of the oldest item and of the next free slot
    volatile int Count = 0;               // number of items currently in the buffer
    int Buffer[] = new int[N];

    // Called by the producer: blocks while the buffer is full.
    synchronized void Append(int V) {
        while (Count == N)
            try { wait(); } catch (InterruptedException e) {}
        Buffer[Newest] = V;
        Newest = (Newest + 1) % N;
        Count = Count + 1;
        notifyAll();                      // wake a consumer waiting for data
    }

    // Called by the consumer: blocks while the buffer is empty.
    synchronized int Take() {
        int temp;
        while (Count == 0)
            try { wait(); } catch (InterruptedException e) {}
        temp = Buffer[Oldest];
        Oldest = (Oldest + 1) % N;
        Count = Count - 1;
        notifyAll();                      // wake a producer waiting for space
        return temp;
    }
}
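
A small driver (not part of the original slides; the class name PCDemo is made up) showing one way the monitor above might be exercised by a producer thread and a consumer thread:

class PCDemo {
    public static void main(String[] args) {
        final PCMonitor monitor = new PCMonitor();

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 10; i++) monitor.Append(i);                   // blocks when the buffer is full
        });
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 10; i++) System.out.println(monitor.Take());  // blocks when it is empty
        });

        producer.start();
        consumer.start();
    }
}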