
Semaphores Chapter 6

Semaphores are a simple, but successful and widely used, construct.

Process State On a multiprocessor there may be more processors than processes, so every process can be running at once. In a multitasking system, several processes have to share the computing resources of a single CPU, so at any moment a process is either ready or running.

Scheduler The scheduler is responsible for deciding which of the ready processes should be run, and for performing the context switch: replacing the running process with a ready process and changing the state of the previously running process to ready.

Arbitrary Interleaving Arbitrary interleaving simply means that the scheduler may perform a context switch at any time, even after executing a single statement of a running process.

Context Switch Every process p has associated with it an attribute called its state, denoted p.state. Assignment to this attribute will be used to indicate a change in the state of a process. For example, if p.state = running and q.state = ready, then a context switch between them can be denoted by: – p.state ← ready – q.state ← running

Synchronization The constructs we will define assume that there exists another process state called blocked. When a process is blocked, it is not ready, so it is not a candidate for becoming the running process.

Blocked Process A blocked process can be unblocked (awakened, released) only if an external action changes its state from blocked to ready. At that point the unblocked process becomes a candidate for execution along with all the currently ready processes; with one exception, an unblocked process will not receive any special treatment that would make it the new running process.

The diagram on this slide summarizes the possible state changes of a process: inactive → ready, ready ⇄ running, running → blocked, blocked → ready, and running → completed.

Summary of states Initially, a process is inactive. At some point it is activated and its state becomes ready. When a concurrent program begins executing, one process is running and the others are ready. While executing, a process may encounter a situation that requires it to cease execution and become blocked until it is unblocked and returns to the ready state. When a process executes its final statement it becomes completed.

Definition of the Semaphore Type A semaphore S is a compound data type with two fields: – S.V, of type non-negative integer. – S.L, of type set of processes. We are using the familiar dotted notation of records and structures from programming languages. A semaphore S must be initialized with a value k ≥ 0 for S.V and with the empty set ∅ for S.L:

semaphore S ← (k, ∅) There are two atomic operations, wait(S) and signal(S), defined on a semaphore S (where p denotes the process that is executing the statement):
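The slide listing the two operations is not reproduced in this transcript. Their standard definitions are: wait(S) decrements S.V if S.V > 0, otherwise it adds p to S.L and blocks p; signal(S) unblocks an arbitrary process of S.L if that set is non-empty, otherwise it increments S.V. A minimal Java sketch of these semantics using intrinsic locks (the class name WeakSemaphore and the method names semWait/semSignal are illustrative, not from the slides):

```java
// Sketch of a (weak) semaphore: semWait plays the role of wait(S) and
// semSignal the role of signal(S). Illustrative only, not the slide's code.
public class WeakSemaphore {
    private int value;                       // S.V: the non-negative integer field

    public WeakSemaphore(int k) {            // semaphore S <- (k, empty set)
        if (k < 0) throw new IllegalArgumentException("k must be >= 0");
        value = k;
    }

    // wait(S): decrement if positive, otherwise block the calling thread.
    public synchronized void semWait() throws InterruptedException {
        while (value == 0) {
            wait();                          // the JVM wait set plays the role of S.L
        }
        value--;
    }

    // signal(S): increment and wake an arbitrary waiter, which then re-checks
    // the value and decrements it. This matches the weak-semaphore semantics,
    // where an arbitrary blocked process (not necessarily the oldest) is released.
    public synchronized void semSignal() {
        value++;
        notify();
    }
}
```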

General Semaphores vs. Binary Semaphores A semaphore whose integer component can take arbitrary non-negative values is called a general semaphore. A semaphore whose integer component takes only the values 0 and 1 is called a binary semaphore. A binary semaphore is initialized with (0, ∅) or (1, ∅). The wait(S) instruction is unchanged, but signal(S) is now defined by:
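The redefined signal(S) is missing from the transcript. For a binary semaphore, the usual change is that signal(S) sets S.V to 1 rather than incrementing it (if processes are blocked, one of them is unblocked instead). A small variant of the previous sketch, again with illustrative names:

```java
// Binary semaphore sketch (illustrative names): the value is only ever 0 or 1.
public class BinarySemaphore {
    private int value;                        // 0 or 1

    public BinarySemaphore(int k) {
        if (k != 0 && k != 1) throw new IllegalArgumentException("k must be 0 or 1");
        value = k;
    }

    public synchronized void semWait() throws InterruptedException {
        while (value == 0) wait();
        value = 0;
    }

    public synchronized void semSignal() {
        value = 1;                            // set to 1, rather than increment
        notify();
    }
}
```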

General vs Binary

The Critical Section Problem for Two Processes

Critical section with semaphores (two proc., abbrev.)
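The algorithm itself is not reproduced in the transcript, but its structure is the usual one: S is initialized to 1, and each process executes wait(S) before its critical section and signal(S) after it. A runnable Java approximation using java.util.concurrent.Semaphore (the shared counter and the class name are just for illustration, not part of the original algorithm):

```java
import java.util.concurrent.Semaphore;

// Two threads protect a critical section with a semaphore initialized to 1.
// Incrementing the shared counter stands in for the critical-section statements.
public class CriticalSectionDemo {
    static final Semaphore s = new Semaphore(1);   // semaphore S <- (1, empty set)
    static int counter = 0;                        // shared data

    static void process() {
        for (int i = 0; i < 100_000; i++) {
            try {
                s.acquire();                       // wait(S)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            try {
                counter++;                         // critical section
            } finally {
                s.release();                       // signal(S)
            }
            // non-critical section omitted
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread p = new Thread(CriticalSectionDemo::process);
        Thread q = new Thread(CriticalSectionDemo::process);
        p.start(); q.start();
        p.join(); q.join();
        System.out.println("counter = " + counter); // always 200000 with the semaphore
    }
}
```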

State diagram for the semaphore solution

Correctness A violation of the mutual exclusion requirement would be a state of the form (p2: signal(S), q2: signal(S), ...), in which both processes are at their signal statements, that is, inside their critical sections. We immediately see that no such state exists, so we conclude that the solution satisfies this requirement. There is no deadlock, since there are no states in which both processes are blocked. For two processes the algorithm is also free from starvation: when a process executes its wait statement it either proceeds to the state containing its signal statement or it becomes blocked; in the latter case it is the only element of S.L, so the next signal executed by the other process must unblock it.

The Critical Section Problem for N Processes

Critical section with semaphores (N proc., abbrev.)

Scenario for three processes

Order of Execution Problems

HW What are the possible outputs of the following algorithms?

Algorithm A:
  semaphore S1 ← 0, S2 ← 0
  process p: p1: write("p")   p2: signal(S1)   p3: signal(S2)
  process q: q1: wait(S1)     q2: write("q")   q3:
  process r: r1: wait(S2)     r2: write("r")   r3:

Algorithm B:
  semaphore S ← 1
  boolean B ← false
  process p: p1: wait(S)      p2: B ← true     p3: signal(S)    p4:
  process q: q1: wait(S)      q2: while not B  q3: write("*")   q4: signal(S)
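As an illustration (not a model answer to the exercise), Algorithm A can be rendered in Java with java.util.concurrent.Semaphore; because S1 and S2 start at 0, q and r block until p has written and signalled, so "p" is always printed first, while "q" and "r" may appear in either order:

```java
import java.util.concurrent.Semaphore;

// Java rendering of Algorithm A; the thread bodies mirror statements p1-p3,
// q1-q2 and r1-r2 of the exercise.
public class OrderDemo {
    static final Semaphore s1 = new Semaphore(0);
    static final Semaphore s2 = new Semaphore(0);

    public static void main(String[] args) {
        Thread p = new Thread(() -> {
            System.out.print("p");                                            // p1
            s1.release();                                                     // p2: signal(S1)
            s2.release();                                                     // p3: signal(S2)
        });
        Thread q = new Thread(() -> {
            try { s1.acquire(); } catch (InterruptedException e) { return; }  // q1: wait(S1)
            System.out.print("q");                                            // q2
        });
        Thread r = new Thread(() -> {
            try { s2.acquire(); } catch (InterruptedException e) { return; }  // r1: wait(S2)
            System.out.print("r");                                            // r2
        });
        p.start(); q.start(); r.start();
    }
}
```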

Definitions of Semaphores Two alternative definitions will be considered: strong semaphores and busy-wait semaphores.

Strong Semaphores The semaphore we defined is called a weak semaphore; it can be compared with a strong semaphore, in which S.L, the set of processes blocked on the semaphore S, is replaced by a queue. For a strong semaphore, starvation is impossible for any number N of processes: suppose p is blocked on S; there are at most N − 1 processes ahead of it in the queue, and each signal(S) unblocks the process at the head of the queue, so p is unblocked after at most N − 1 signal operations.

Busy-Wait Semaphores A busy-wait semaphore does not have a component S.L, so we will identify S with S.V. The operations are defined as follows:
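The definitions are missing from the transcript; the usual ones are: wait(S) repeatedly tests S until S > 0 and then decrements it, with the test and the decrement performed as one atomic action, while signal(S) simply increments S. A Java sketch of these semantics using AtomicInteger (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Busy-wait semaphore sketch: there is no S.L, so a waiting thread spins,
// repeatedly trying to perform the atomic "if S > 0 then S--".
public class BusyWaitSemaphore {
    private final AtomicInteger value;

    public BusyWaitSemaphore(int k) {
        value = new AtomicInteger(k);
    }

    // wait(S): spin until the atomic test-and-decrement succeeds.
    public void semWait() {
        while (true) {
            int v = value.get();
            if (v > 0 && value.compareAndSet(v, v - 1)) {
                return;
            }
            Thread.onSpinWait();               // hint that the thread is busy-waiting
        }
    }

    // signal(S): just increment; there is no set of blocked processes to consider.
    public void semSignal() {
        value.incrementAndGet();
    }
}
```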

With busy-wait semaphores you cannot ensure that a process that wishes to enter its critical section will eventually do so: another process may always win the race to decrement the semaphore, so starvation is possible.

Busy-wait semaphore usage Busy-wait semaphores are appropriate in a multiprocessor system where the waiting process has its own processor and is not wasting CPU time that could be used for other computation. They would also be appropriate in a system with little contention so that the waiting process would not waste too much CPU time.

Semaphores in Java The Semaphore class is defined within the package java.util.concurrent. The terminology is slightly different from the one we have been using: – The integer component of an object of class Semaphore is called the number of permits of the object, and unusually, it can be initialized to be negative. – The wait and signal operations are called acquire and release, respectively.

The acquire operation can throw the exception InterruptedException, which must be caught or thrown. The constructor for class Semaphore includes an extra boolean parameter used to specify whether the semaphore is fair or not. A fair semaphore corresponds to a strong semaphore; an unfair semaphore is closer to a busy-wait semaphore than to a weak semaphore, because the release method simply unblocks a process but does not acquire the lock associated with the semaphore.

Java programs Using semaphores in concurrent programs.

Semaphores in Java
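The program shown on this slide is not reproduced in the transcript. As a small stand-in, here is a fragment that exercises the points just mentioned: the extra constructor argument selecting a fair semaphore, acquire throwing InterruptedException, and release:

```java
import java.util.concurrent.Semaphore;

// Stand-in example (not the slide's program): a fair semaphore with one
// permit guarding a critical section shared by two threads.
public class JavaSemaphoreExample {
    // The second constructor argument (true) requests a fair, FIFO semaphore.
    private static final Semaphore mutex = new Semaphore(1, true);

    public static void main(String[] args) {
        Runnable task = () -> {
            try {
                mutex.acquire();               // may throw InterruptedException
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            try {
                System.out.println(Thread.currentThread().getName()
                        + " is in its critical section");
            } finally {
                mutex.release();
            }
        };
        new Thread(task, "p").start();
        new Thread(task, "q").start();
    }
}
```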

The Producer–Consumer Problem Producers: a producer process executes a statement produce to create a data element and then sends this element to the consumer processes. Consumers: upon receipt of a data element from the producer processes, a consumer process executes a statement consume with the data element as a parameter.

As with the critical section and non-critical section statements, the produce and consume statements are assumed not to affect the additional variables used for synchronization and they can be omitted in abbreviated versions of the algorithms.

Producers and consumers are ubiquitous in computing systems

When a data element must be sent from one process to another, the communications can be synchronous, that is, communications cannot take place until both the producer and the consumer are ready to do so.

In asynchronous communication, the communications channel itself has some capacity for storing data elements. This store, which is a queue of data elements, is called a buffer. The producer executes an append operation to place a data element at the tail of the queue, and the consumer executes a take operation to remove a data element from the head of the queue.

The use of a buffer allows processes of similar average speeds to proceed smoothly in spite of differences in transient performance. It is important to understand that a buffer is of no use if the average speeds of the two processes are very different: in that case the buffer will always be full (or always empty) and there is no advantage to using one. A buffer can also be used to resolve a clash in the structure of data.

Synchronization Problems There are two synchronization issues that arise in the producer–consumer problem: – first, a consumer cannot take a data element from an empty buffer. – second, since the size of any buffer is finite, a producer cannot append a data element to a full buffer.

Producer–consumer (infinite buffer)
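The algorithm on this slide is not reproduced in the transcript. In the infinite-buffer version a single semaphore, here called notEmpty and initialized to 0, counts the elements in the buffer: the producer appends and signals, and the consumer waits before taking. A Java sketch under those assumptions, using a thread-safe queue to stand in for the unbounded buffer:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// "Infinite" buffer sketch: notEmpty counts available elements, so take()
// never operates on an empty buffer.
public class InfiniteBuffer {
    private final Queue<Integer> buffer = new ConcurrentLinkedQueue<>();
    private final Semaphore notEmpty = new Semaphore(0);

    public void append(int d) {
        buffer.add(d);                 // place the element at the tail of the queue
        notEmpty.release();            // signal(notEmpty)
    }

    public int take() throws InterruptedException {
        notEmpty.acquire();            // wait(notEmpty): blocks while the buffer is empty
        return buffer.remove();        // remove the element at the head of the queue
    }
}
```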

Bounded Buffers

Split Semaphores The bounded buffer algorithm uses a technique called split semaphores. This is not a new semaphore type, but simply a term used to describe a synchronization mechanism built from semaphores. A split semaphore is a group of two or more semaphores satisfying the invariant that the sum of their values is at most equal to a fixed number N. In the case of the bounded buffer, the invariant notEmpty + notFull = N

holds at the beginning of the loop bodies. In the case that N = 1, the construct is called a split binary semaphore. Split binary semaphores were used for mergesort. Compare the use of a split semaphore with the use of a semaphore to solve the critical section problem. Mutual exclusion was achieved by performing the wait and signal operations on a semaphore within a single process.

In the bounded buffer algorithm, the wait and signal operations on each semaphore are in different processes. Split semaphores enable one process to wait for the completion of an event in another.
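A Java sketch of the bounded buffer under these assumptions: notFull starts at the buffer size N and notEmpty at 0, so their values sum to at most N (the split-semaphore invariant), and a one-permit semaphore protects the queue itself. The class and field names are illustrative, not the book's:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

// Bounded buffer built from a split semaphore: between operations,
// notEmpty.availablePermits() + notFull.availablePermits() <= N.
public class BoundedBuffer {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final Semaphore notEmpty;                  // elements available to take
    private final Semaphore notFull;                   // free slots available to append
    private final Semaphore mutex = new Semaphore(1);  // protects the deque itself

    public BoundedBuffer(int n) {
        notEmpty = new Semaphore(0);
        notFull = new Semaphore(n);
    }

    public void append(int d) throws InterruptedException {
        notFull.acquire();             // wait(notFull): blocks while the buffer is full
        mutex.acquire();
        buffer.addLast(d);             // append at the tail
        mutex.release();
        notEmpty.release();            // signal(notEmpty), executed by the producer
    }

    public int take() throws InterruptedException {
        notEmpty.acquire();            // wait(notEmpty): blocks while the buffer is empty
        mutex.acquire();
        int d = buffer.removeFirst();  // take from the head
        mutex.release();
        notFull.release();             // signal(notFull), executed by the consumer
        return d;
    }
}
```

Note how each of the split semaphores is waited on in one process and signalled in the other, exactly as described above.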

Dining Philosophers

The correctness properties are: –A philosopher eats only if she has two forks. –Mutual exclusion: no two philosophers may hold the same fork simultaneously. –Freedom from deadlock. –Freedom from starvation. –Efficient behavior in the absence of contention.

First Attempt Deadlock can occur: if all the philosophers pick up their left forks, none of them will find her right fork available.

Second Attempt The first philosopher will starve waiting to get to the room.

Third Attempt
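The code for this slide is not in the transcript, so the sketch below should not be read as the slide's third attempt. It is one standard deadlock-free scheme: one philosopher picks up her forks in the opposite order, which breaks the circular wait of the first attempt. The class name and the choice of five philosophers are just for illustration:

```java
import java.util.concurrent.Semaphore;

// One standard deadlock-free scheme (not necessarily the slide's third attempt):
// philosopher 0 takes her right fork first, the others take the left fork first,
// so a cycle in which every philosopher holds one fork and waits for the next
// cannot form.
public class DiningPhilosophers {
    static final int N = 5;
    static final Semaphore[] fork = new Semaphore[N];

    public static void main(String[] args) {
        for (int i = 0; i < N; i++) fork[i] = new Semaphore(1);
        for (int i = 0; i < N; i++) {
            final int id = i;
            new Thread(() -> philosopher(id), "philosopher-" + id).start();
        }
    }

    static void philosopher(int i) {
        int left = i;
        int right = (i + 1) % N;
        int first = (i == 0) ? right : left;    // the asymmetry that prevents deadlock
        int second = (i == 0) ? left : right;
        try {
            while (true) {
                // think
                fork[first].acquire();
                fork[second].acquire();
                System.out.println("philosopher " + i + " is eating");
                fork[second].release();
                fork[first].release();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```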

The End