
Chapter 5 Concurrency: Mutual Exclusion and Synchronization Operating Systems: Internals and Design Principles Seventh Edition By William Stallings

“Designing correct routines for controlling concurrent activities proved to be one of the most difficult aspects of systems programming. The ad hoc techniques used by programmers of early multiprogramming and real-time systems were always vulnerable to subtle programming errors whose effects could be observed only when certain relatively rare sequences of actions occurred. The errors are particularly difficult to locate, since the precise conditions under which they appear are very hard to reproduce.” —THE COMPUTER SCIENCE AND ENGINEERING RESEARCH STUDY, MIT Press, 1980

Operating system design is concerned with the management of processes and threads in three contexts: multiprogramming, multiprocessing, and distributed processing.

Concurrency also arises in several contexts. Multiple applications: multiprogramming was invented to allow processing time to be shared among active applications. Structured applications: an extension of modular design and structured programming. Operating system structure: the OS itself is implemented as a set of processes or threads.

Interleaving (on a uniprocessor) and overlapping (on a multiprocessor) can both be viewed as examples of concurrent processing, and both present the same problems. On a uniprocessor, the relative speed of execution of processes cannot be predicted; it depends on the activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS.

Difficulties of concurrency: global resources are shared; it is difficult for the OS to manage the allocation of resources optimally; and it is difficult to locate programming errors, because results are not deterministic and reproducible.

A race condition occurs when multiple processes or threads read and write shared data items and the final result depends on the order of execution. The “loser” of the race is the process that updates last, and it determines the final value of the variable.

Example 1: two processes, P1 and P2, share the global variable a. At some point in its execution P1 updates a to the value 1, and at some point P2 updates a to the value 2. The two processes are thus in a race to write variable a, and the “loser” of the race (the process that updates last) determines the final value of a.

Example 2: two processes, P3 and P4, share the global variables b and c, with initial values b = 1 and c = 2. At some point in its execution P3 executes b = b + c, and at some point P4 executes c = b + c. If P3 executes first, the result is b = 3 and c = 5. If P4 executes first, the result is b = 4 and c = 3.
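A minimal C sketch of the first race above, using POSIX threads; the thread functions and compile command are illustrative assumptions, not from the text:

```c
/* Race-condition sketch: two threads race to write a shared variable.
   Compile with: gcc race.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static int a = 0;                 /* shared global, as in the example */

static void *writer1(void *arg) { a = 1; return NULL; }
static void *writer2(void *arg) { a = 2; return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, writer1, NULL);
    pthread_create(&t2, NULL, writer2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* The final value depends on which thread wrote last. */
    printf("a = %d\n", a);
    return 0;
}
```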

Operating System Concerns: design and management issues raised by the existence of concurrency. The OS must: keep track of the various processes; allocate and de-allocate resources for each active process; protect the data and physical resources of each process against interference by other processes; and ensure that the functioning of processes and their outputs are independent of processing speed.

Concurrent processes come into conflict when they compete for use of the same resource, for example I/O devices, memory, processor time, or the clock. In the case of competing processes, three control problems must be faced: the need for mutual exclusion, deadlock, and starvation.

Figure 5.1 Illustration of Mutual Exclusion

The enforcement of mutual exclusion creates two additional control problems. Deadlock: consider two processes, P1 and P2, and two resources, R1 and R2, where each process needs access to both resources to perform part of its function. The OS grants R1 to P2 and R2 to P1. Each process is now waiting for one of the two resources, and neither will release the resource it already owns until it has acquired the other and performed the function requiring both; the two processes are deadlocked. Starvation: suppose three processes, P1, P2, and P3, each require periodic access to resource R. P1 holds R while P2 and P3 wait for it. When P1 exits its critical section, R is granted to P3; when P3 exits, R is granted back to P1. If this pattern repeats, P2 may be denied access to the resource indefinitely, even though there is no deadlock.
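A minimal C sketch of the deadlock scenario, using two pthread mutexes to stand in for R1 and R2; the sleep call is only there to make the fatal interleaving likely:

```c
/* Deadlock sketch: each thread takes one lock, then waits for the other.
   Compile with: gcc deadlock.c -lpthread */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */

static void *p1(void *arg) {
    pthread_mutex_lock(&r2);      /* P1 is granted R2 ...              */
    sleep(1);                     /* ... giving P2 time to grab R1     */
    pthread_mutex_lock(&r1);      /* then waits forever for R1         */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

static void *p2(void *arg) {
    pthread_mutex_lock(&r1);      /* P2 is granted R1 ...              */
    sleep(1);
    pthread_mutex_lock(&r2);      /* then waits forever for R2         */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);       /* never returns: both threads are blocked */
    pthread_join(t2, NULL);
    return 0;
}
```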

Requirements for mutual exclusion: mutual exclusion must be enforced; a process that halts must do so without interfering with other processes; there must be no deadlock or starvation; a process must not be denied access to a critical section when no other process is using it; no assumptions may be made about relative process speeds or the number of processes; and a process remains inside its critical section for a finite time only.

A semaphore is a variable that has an integer value upon which only three operations are defined: 1) it may be initialized to a nonnegative integer value; 2) the semWait operation decrements the value; 3) the semSignal operation increments the value. There is no way to inspect or manipulate semaphores other than these three operations.

Two or more processes can cooperate by means of simple signals: a process can be forced to stop at a specified place until it has received a specific signal. For signaling, special variables called semaphores are used. To transmit a signal via semaphore s, a process executes the primitive semSignal(s). To receive a signal via semaphore s, a process executes the primitive semWait(s); if the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place.

Consequences: there is no way to know, before a process decrements a semaphore, whether it will block or not; there is no way to know which process will continue immediately when two processes are running concurrently on a uniprocessor; and a process that signals a semaphore does not know whether another process is waiting, so the number of unblocked processes may be zero or one.

Semaphore Primitives

How a semaphore works: to begin, the semaphore has a zero or positive value (e.g., 5), so that many processes (here, 5) can issue a wait and immediately continue to execute. If the value is zero, either by initialization or because a number of processes equal to the initial semaphore value have issued a wait, the next process to issue a wait is blocked and the semaphore value goes negative. Each subsequent wait drives the semaphore value further into negative territory; the magnitude of the negative value equals the number of processes waiting to be unblocked. While the value is negative, each signal unblocks one of the processes waiting in the queue.
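As an illustration, one way to realize semWait/semSignal on top of POSIX threads while keeping the negative-count convention just described; the wakeups counter is an implementation detail added here to absorb spurious wakeups, not part of the text's definition:

```c
/* Counting semaphore sketch built on a pthread mutex and condition variable.
   Compile with: gcc sem_sketch.c -lpthread */
#include <pthread.h>

struct semaphore {
    int count;                    /* negative: |count| threads are waiting */
    int wakeups;                  /* signals issued but not yet consumed   */
    pthread_mutex_t lock;
    pthread_cond_t  cond;
};

void sem_init_count(struct semaphore *s, int value) {
    s->count = value;
    s->wakeups = 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void semWait(struct semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->count--;
    if (s->count < 0) {
        do {                      /* block until a signal is really ours */
            pthread_cond_wait(&s->cond, &s->lock);
        } while (s->wakeups < 1);
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->lock);
}

void semSignal(struct semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;
    if (s->count <= 0) {          /* at least one waiter to unblock */
        s->wakeups++;
        pthread_cond_signal(&s->cond);
    }
    pthread_mutex_unlock(&s->lock);
}
```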

Binary Semaphore Primitives
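In the same spirit, a simplified runnable rendering of the binary semaphore primitives with a pthread mutex and condition variable; it is behaviourally equivalent to, but structured differently from, the queue-based pseudocode the slide's figure refers to:

```c
/* Binary semaphore sketch: the value is only ever 0 or 1.
   Compile with: gcc binsem_sketch.c -lpthread */
#include <pthread.h>

struct binary_semaphore {
    int value;                    /* 0 or 1 */
    pthread_mutex_t lock;
    pthread_cond_t  cond;
};

void binsem_init(struct binary_semaphore *s, int value) {
    s->value = (value != 0);
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void semWaitB(struct binary_semaphore *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                     /* wait until available */
        pthread_cond_wait(&s->cond, &s->lock);
    s->value = 0;                             /* take the semaphore   */
    pthread_mutex_unlock(&s->lock);
}

void semSignalB(struct binary_semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->value = 1;                             /* release the semaphore */
    pthread_cond_signal(&s->cond);            /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}
```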

A queue is used to hold processes waiting on the semaphore. Strong semaphores: the process that has been blocked the longest is released from the queue first (FIFO). Weak semaphores: the order in which processes are removed from the queue is not specified.

Producer/Consumer Problem. General situation: one or more producers are generating data and placing it in a buffer; a single consumer is taking items out of the buffer one at a time; and only one producer or consumer may access the buffer at any one time. The problem: ensure that the producer cannot add data to a full buffer and the consumer cannot remove data from an empty buffer.

Buffer Structure

The integer variable n = in – out keeps track of the number of items in the buffer. In the semaphore solution, the semaphore s is used to enforce mutual exclusion and the semaphore n counts the number of items in the buffer.

Figure 5.13 A Solution to the Bounded-Buffer Producer/Consumer Problem Using Semaphores
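The figure itself is not reproduced in this transcript; the sketch below is a rough POSIX equivalent of the same structure, with semaphore s for mutual exclusion, n counting full slots, and e counting empty slots (buffer size, item values, and names are illustrative):

```c
/* Bounded-buffer producer/consumer with POSIX semaphores.
   Compile with: gcc boundedbuffer.c -lpthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SIZE 4                        /* illustrative buffer size */

static int buffer[SIZE];
static int in = 0, out = 0;
static sem_t s, n, e;

static void *producer(void *arg) {
    for (int v = 0; v < 20; v++) {
        sem_wait(&e);                 /* wait for an empty slot   */
        sem_wait(&s);                 /* enter critical section   */
        buffer[in] = v;
        in = (in + 1) % SIZE;
        sem_post(&s);                 /* leave critical section   */
        sem_post(&n);                 /* one more full slot       */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&n);                 /* wait for a full slot     */
        sem_wait(&s);
        int v = buffer[out];
        out = (out + 1) % SIZE;
        sem_post(&s);
        sem_post(&e);                 /* one more empty slot      */
        printf("consumed %d\n", v);
    }
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);               /* mutual exclusion         */
    sem_init(&n, 0, 0);               /* no items initially       */
    sem_init(&e, 0, SIZE);            /* all slots empty          */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```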

A monitor is a programming-language construct that provides functionality equivalent to that of semaphores but is easier to control. It is a software module consisting of one or more procedures, an initialization sequence, and local data. Monitors are implemented in a number of programming languages, including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java, and have also been implemented as a program library.

Monitor characteristics: local data variables are accessible only by the monitor's procedures and not by any external procedure; a process enters the monitor by invoking one of its procedures; and only one process may be executing in the monitor at a time.

Synchronization within a monitor is achieved by the use of condition variables, which are contained within the monitor and accessible only within it. Condition variables are operated on by two functions: cwait(c) suspends execution of the calling process on condition c, and csignal(c) resumes execution of some process blocked after a cwait on the same condition.

Figure 5.15 Structure of a Monitor

Figure 5.16 A Solution to the Bounded-Buffer Producer/Consumer Problem Using a Monitor
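Again the figure is not reproduced here; the sketch below imitates the monitor with a pthread mutex playing the role of the monitor lock and condition variables standing in for cwait/csignal (names and buffer size are illustrative):

```c
/* Monitor-style bounded buffer: the mutex acts as the monitor lock,
   notfull/notempty play the role of the cwait/csignal condition variables.
   Compile with: gcc monitorbuffer.c -lpthread */
#include <pthread.h>

#define SIZE 4

static char buffer[SIZE];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notfull  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;

/* "monitor procedure" append: called by producers */
void append(char x) {
    pthread_mutex_lock(&lock);                /* enter the monitor */
    while (count == SIZE)
        pthread_cond_wait(&notfull, &lock);   /* cwait(notfull)    */
    buffer[in] = x;
    in = (in + 1) % SIZE;
    count++;
    pthread_cond_signal(&notempty);           /* csignal(notempty) */
    pthread_mutex_unlock(&lock);              /* leave the monitor */
}

/* "monitor procedure" take: called by the consumer */
char take(void) {
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&notempty, &lock);  /* cwait(notempty)   */
    char x = buffer[out];
    out = (out + 1) % SIZE;
    count--;
    pthread_cond_signal(&notfull);            /* csignal(notfull)  */
    pthread_mutex_unlock(&lock);
    return x;
}
```

Note that pthread condition variables have signal-and-continue rather than Hoare-style semantics, which is why each wait above is wrapped in a while loop.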

When processes interact with one another, two fundamental requirements must be satisfied: synchronization, to enforce mutual exclusion, and communication, to exchange information. Message passing is one approach to providing both of these functions, and it works with distributed systems as well as with shared-memory multiprocessor and uniprocessor systems.

Message Passing. The actual function is normally provided in the form of a pair of primitives: send(destination, message) and receive(source, message). A process sends information in the form of a message to another process designated by a destination; a process receives information by executing the receive primitive, indicating the source and the message.
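As one concrete, illustrative mapping of the send/receive pair, the sketch below uses POSIX message queues; the queue name and attributes are assumptions, and for brevity a single process plays both sender and receiver:

```c
/* send/receive primitives expressed with POSIX message queues.
   Compile with: gcc msg.c -lrt */
#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

#define QNAME   "/demo_queue"     /* illustrative queue (mailbox) name */
#define MAXMSG  64

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = MAXMSG };
    mqd_t q = mq_open(QNAME, O_CREAT | O_RDWR, 0600, &attr);

    /* send(destination, message) */
    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);

    /* receive(source, message): blocks until a message is available */
    char buf[MAXMSG];
    mq_receive(q, buf, sizeof buf, NULL);
    printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink(QNAME);
    return 0;
}
```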

Message Passing Table 5.5 Design Characteristics of Message Systems for Interprocess Communication and Synchronization

Communication of a message between two processes implies synchronization between the two: the receiver cannot receive a message until it has been sent by another process. When a receive primitive is executed in a process there are two possibilities: if a message has previously been sent, the message is received and execution continues; if there is no waiting message, either the process is blocked until a message arrives, or the process continues to execute, abandoning the attempt to receive.

Blocking send, blocking receive: both sender and receiver are blocked until the message is delivered. Sometimes referred to as a rendezvous, this allows for tight synchronization between processes.

Nonblocking send, blocking receive: the sender continues on, but the receiver is blocked until the requested message arrives. This is the most useful combination; for example, a service process that exists to provide a service or resource to other processes can send one or more messages to a variety of destinations as quickly as possible. Nonblocking send, nonblocking receive: neither party is required to wait.

Schemes for specifying processes in send and receive primitives fall into two categories: direct addressing and indirect addressing.

Direct addressing: the send primitive includes a specific identifier of the destination process. The receive primitive can be handled in one of two ways: either require that the process explicitly designate a sending process, which is effective for cooperating concurrent processes, or use implicit addressing, in which the source parameter of the receive primitive possesses a value returned when the receive operation has been performed.

Indirect addressing: messages are sent to a shared data structure consisting of queues that can temporarily hold messages. The queues are referred to as mailboxes: one process sends a message to the mailbox and the other process picks the message up from the mailbox. This allows for greater flexibility in the use of messages.

Message Passing Example Figure 5.21 A Solution to the Bounded-Buffer Producer/Consumer Problem Using Messages

Readers/Writers Problem: a data area is shared among many processes; some processes only read the data area (readers) and some only write to it (writers). Conditions that must be satisfied: 1) any number of readers may simultaneously read the file; 2) only one writer at a time may write to the file; 3) if a writer is writing to the file, no reader may read it.

Readers Have Priority Figure 5.22 A Solution to the Readers/Writers Problem Using Semaphore: Readers Have Priority
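The figure is not reproduced in the transcript; a rough POSIX-semaphore rendering of the readers-have-priority scheme is sketched below (function and variable names are illustrative):

```c
/* Readers/writers, readers have priority, with POSIX semaphores.
   x protects readcount; wsem excludes writers (and the first reader).
   Compile with: gcc readers.c -lpthread */
#include <semaphore.h>

static int readcount = 0;
static sem_t x, wsem;

void rw_init(void) {
    sem_init(&x, 0, 1);
    sem_init(&wsem, 0, 1);
}

void reader(void) {
    sem_wait(&x);
    readcount++;
    if (readcount == 1)
        sem_wait(&wsem);          /* first reader locks out writers   */
    sem_post(&x);

    /* ... read the shared data area ... */

    sem_wait(&x);
    readcount--;
    if (readcount == 0)
        sem_post(&wsem);          /* last reader lets writers back in */
    sem_post(&x);
}

void writer(void) {
    sem_wait(&wsem);              /* one writer at a time, no readers */
    /* ... write the shared data area ... */
    sem_post(&wsem);
}
```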

Figure 5.23 A Solution to the Readers/Writers Problem Using Semaphore: Writers Have Priority

State of the Process Queues

Message Passing Figure 5.24 A Solution to the Readers/Writers Problem Using Message Passing

Summary. The operating-system themes are multiprogramming, multiprocessing, and distributed processing; fundamental to these themes is concurrency, from which issues of conflict resolution and cooperation arise. Mutual exclusion is the condition in which, of a set of concurrent processes, only one is able to access a given resource or perform a given function at any time; one approach to enforcing it involves the use of special-purpose machine instructions. Semaphores are used for signaling among processes and can readily be used to enforce a mutual exclusion discipline. Messages are likewise useful for the enforcement of a mutual exclusion discipline.