Chapter 5 Concurrency: Mutual Exclusion and Synchronization


Chapter 5 Concurrency: Mutual Exclusion and Synchronization. Operating Systems: Internals and Design Principles, Seventh Edition, by William Stallings.

Cooperating Processes
An independent process cannot affect or be affected by the execution of another process; a cooperating process can. Advantages of process cooperation:
- Information sharing
- Computation speed-up via parallel sub-tasks
- Modularity, by dividing system functions into separate processes
- Convenience: even an individual user may want to edit, print, and compile in parallel

Concurrent Processes
Operating system design is concerned with the management of processes and threads in three contexts:
- Multiprogramming: multiple processes on a uniprocessor
- Multiprocessing: multiple processes on a multiprocessor
- Distributed processing: multiple processes on distributed processors

Principles of Concurrency
Interleaving and overlapping can both be viewed as examples of concurrent processing, and both present the same problems. On a uniprocessor, process execution is interleaved: instructions from different processes alternate on the single processor. On a multiprocessor, execution can also be overlapped: instructions from the same or different processes execute at the same time on different processors. A basic characteristic of multiprogramming is that the relative speed of execution of processes cannot be predicted; it depends on:
- the activities of other processes
- the way the OS handles interrupts
- the scheduling policies of the OS

Terminology
- Synchronization: using atomic (indivisible) operations to ensure cooperation between processes/threads
- Mutual exclusion: ensures that only one thread performs a particular activity at a time; all other threads are excluded from that activity
- Critical section (region): code that only one thread can execute at a time (e.g., code that modifies shared data)
- Lock: a mechanism that prevents another thread from doing something; lock before entering a critical section, unlock when leaving it

Mutual Exclusion
How can mutual exclusion be implemented?

Mutual Exclusion
A lock provides two operations: Lock (acquire) and Unlock (release).

Requirements for Mutual Exclusion
- Mutual exclusion must be enforced
- A process that halts must do so without interfering with other processes
- No deadlock or starvation
- A process must not be denied access to a critical section when no other process is using it
- No assumptions are made about relative process speeds or the number of processes
- A process remains inside its critical section for a finite time only

Implementation of Mutual Exclusion
Consider only two processes, P0 and P1. The general structure of process Pi (with the other process denoted Pj) is:

do {
    entry section
    critical section
    exit section
    remainder section
} while (1);

The processes may share common variables to synchronize their actions.

Initial Attempt
Shared variable: int turn; initially turn = 0. When turn == i, Pi may enter its critical section.

Process Pi:
do {
    while (turn != i)
        ;                   /* busy wait */
    /* critical section */
    turn = j;               /* j == 1 - i */
    /* remainder section */
} while (1);

This satisfies mutual exclusion but not progress, because it requires strict alternation of the processes: if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so even though P0 is in its remainder section. Two problems follow:
1. The processes must take alternate turns executing their critical sections.
2. If a process fails, inside or outside its critical section, the other process is blocked forever.
A second attempt, using flags without a turn variable, is worse still: both processes can enter their critical sections at the same time.
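The strict-alternation scheme above can be demonstrated with two Python threads. This is a minimal sketch (variable and function names are mine): both threads safely update a shared counter, but each can only proceed when the other hands over the turn.

```python
import threading
import time

turn = 0      # whose turn it is to enter the critical section
counter = 0   # shared data protected by strict alternation
N = 1000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        while turn != i:       # busy-wait until it is our turn
            time.sleep(0)      # yield so the other thread can run
        counter += 1           # critical section
        turn = j               # hand the turn to the other process

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2000
```

Note how the forced alternation also illustrates the progress problem: neither thread can enter its critical section twice in a row, even when the other has no interest in entering.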

Dekker's Algorithm (1965)
Shared variables:
    boolean flag[2] = { false, false };
    int turn = 1;

Process P1:
do {
    flag[1] = true;
    while (flag[2]) {
        if (turn == 2) {
            flag[1] = false;
            while (turn == 2)
                ;           /* do nothing */
            flag[1] = true;
        }
    }
    /* critical section */
    flag[1] = false;
    turn = 2;
    /* remainder section */
} while (true);

Process P2:
do {
    flag[2] = true;
    while (flag[1]) {
        if (turn == 1) {
            flag[2] = false;
            while (turn == 1)
                ;           /* do nothing */
            flag[2] = true;
        }
    }
    /* critical section */
    flag[2] = false;
    turn = 1;
    /* remainder section */
} while (true);

Peterson's Algorithm (1981)
Shared variables: boolean t1_in_crit = false, t2_in_crit = false; int turn;

t1() {
    while (true) {
        t1_in_crit = true;
        turn = 2;
        while (t2_in_crit == true && turn != 1)
            ;           /* do nothing */
        /* critical section of code */
        t1_in_crit = false;
        /* other non-critical code */
    }
}

t2() {
    while (true) {
        t2_in_crit = true;
        turn = 1;
        while (t1_in_crit == true && turn != 2)
            ;           /* do nothing */
        /* critical section of code */
        t2_in_crit = false;
        /* other non-critical code */
    }
}
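Peterson's algorithm can be sketched with two Python threads (names are mine). One caveat: on real hardware the algorithm also requires memory barriers to prevent reordering of the flag and turn writes; in CPython the global interpreter lock happens to make plain assignments behave sequentially consistently, so the sketch works as written there.

```python
import threading
import time

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is when both want to enter
counter = 0             # shared data protected by the algorithm
N = 500

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True
        turn = j                         # defer to the other thread
        while flag[j] and turn == j:     # wait while it is interested
            time.sleep(0)                # yield the processor
        counter += 1                     # critical section
        flag[i] = False                  # exit protocol

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1000
```

Unlike strict alternation, a thread whose partner is uninterested (flag[j] == False) sails straight into its critical section, so progress is preserved.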

Semaphores – Dijkstra (1965)
A semaphore is a variable that holds an integer value and on which only three operations are defined:
- It may be initialized to a nonnegative integer value.
- The wait() operation decrements the value.
- The signal() operation increments the value.
There are two types of semaphore: counting and binary. A counting semaphore can be used for both mutual exclusion and condition synchronization; a binary semaphore is designed specifically for mutual exclusion.

Semaphores – Dijkstra (1965), continued
- A synchronization tool that does not require busy waiting (spinning).
- Easier to generalize to more complex problems.
- When one process modifies the semaphore value, no other process can simultaneously modify that same value.
- Semaphore S is initialized to a nonnegative integer value.
- The wait operation decrements the semaphore value (P, from the Dutch proberen, "to test"); if the value becomes negative, the process blocks.
- The signal operation increments the semaphore value (V, from verhogen, "to increment"); if the value is not positive, a process blocked on a wait is unblocked.
- A queue holds the processes waiting on the semaphore; they are removed in FIFO order.
- A strong semaphore specifies the order in which processes are removed from the queue and guarantees freedom from starvation; a weak semaphore specifies no order and may be subject to starvation.
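As a minimal sketch (names are mine), Python's standard-library semaphore, initialized to 1, enforces mutual exclusion on a shared counter exactly as described above; acquire plays the role of wait and release the role of signal.

```python
import threading

s = threading.Semaphore(1)   # binary use: initialized to 1
counter = 0                  # shared data

def worker():
    global counter
    for _ in range(10_000):
        s.acquire()          # wait(s): enter critical section
        counter += 1
        s.release()          # signal(s): leave critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000
```

Because only one thread holds the semaphore at a time, all 40,000 increments survive; without it, updates could be lost to interleaving.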

Semaphore and ME

Deadlock & Starvation
Deadlock: two or more processes wait indefinitely for an event that can be caused only by one of the waiting processes (here, resource acquisition and release). Let S and Q be two semaphores initialized to 1:

    P0              P1
    wait(S);        wait(Q);
    wait(Q);        wait(S);
      ...             ...
    signal(S);      signal(Q);
    signal(Q);      signal(S);

Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended (e.g., with a LIFO queue).
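One standard remedy, sketched below in Python (names are mine), is to impose a single global acquisition order: both processes take S before Q, so the circular wait in the table above cannot form.

```python
import threading

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
done = []

def p(name):
    # Every process acquires S first, then Q. With one global
    # ordering, a circular wait (and hence deadlock) cannot form.
    S.acquire()
    Q.acquire()
    done.append(name)      # use both protected resources
    Q.release()
    S.release()

t0 = threading.Thread(target=p, args=("P0",))
t1 = threading.Thread(target=p, args=("P1",))
t0.start(); t1.start()
t0.join(); t1.join()
print(sorted(done))  # ['P0', 'P1']
```

With the original reversed orders, each process could hold one semaphore while waiting forever for the other; here, whichever process gets S first is guaranteed to also get Q and finish.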

Producer/Consumer Problem
General situation:
- One or more producers generate data and place it in a buffer.
- A single consumer takes items out of the buffer one at a time.
- Only one producer or consumer may access the buffer at any one time.
The problem: ensure that the producer cannot add data to a full buffer and that the consumer cannot remove data from an empty buffer.
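The bounded-buffer version of the problem is classically solved with three semaphores, as in this Python sketch (names and the buffer capacity are mine): a mutex for exclusive buffer access, plus two counting semaphores tracking empty and filled slots.

```python
import threading
from collections import deque

CAPACITY = 5
buffer = deque()
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer
empty = threading.Semaphore(CAPACITY)  # counts empty slots
full = threading.Semaphore(0)          # counts filled slots

def producer(n):
    for item in range(n):
        empty.acquire()                # wait for a free slot (blocks if full)
        mutex.acquire()
        buffer.append(item)            # critical section
        mutex.release()
        full.release()                 # signal one more filled slot

consumed = []

def consumer(n):
    for _ in range(n):
        full.acquire()                 # wait for an item (blocks if empty)
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()                # signal one more free slot

N = 100
p = threading.Thread(target=producer, args=(N,))
c = threading.Thread(target=consumer, args=(N,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(N)))  # True
```

The empty semaphore stops the producer at a full buffer, and the full semaphore stops the consumer at an empty one, satisfying both halves of the problem statement.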

Reader writer problem (Readers have priority)
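The readers-have-priority solution can be sketched with two semaphores in Python (names are mine): the first reader in locks out writers, the last reader out readmits them, and readers arriving while others are reading proceed immediately.

```python
import threading

read_count = 0                       # number of readers currently reading
rc_mutex = threading.Semaphore(1)    # protects read_count
wsem = threading.Semaphore(1)        # held by writers, or by the reader group

data = 0
seen = []

def reader():
    global read_count
    rc_mutex.acquire()
    read_count += 1
    if read_count == 1:
        wsem.acquire()               # first reader locks out writers
    rc_mutex.release()
    seen.append(data)                # read the shared data (shared access)
    rc_mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wsem.release()               # last reader readmits writers
    rc_mutex.release()

def writer():
    global data
    wsem.acquire()                   # exclusive access
    data += 1
    wsem.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(data)  # 5
```

The priority is the scheme's weakness as well: a steady stream of readers keeps read_count above zero and can starve writers indefinitely.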

Dining Philosophers
Five philosophers live together and spend most of their lives thinking and eating (primarily spaghetti). They all eat at a large table, which is set with five plates and five forks. To eat, a philosopher goes to his or her assigned place and uses the two forks on either side of the plate. When a philosopher isn't eating, he or she is thinking.

Dining Philosophers
Problem: processes compete for exclusive access to a limited number of resources.
- Philosophers alternately think and eat.
- Eating requires two forks, picked up one at a time.
- Shared data: semaphore fork[5]; initially all values are 1.
How can deadlock be prevented?
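If every philosopher picks up the left fork first, all five can grab one fork and deadlock. One common fix, sketched here in Python (names are mine), breaks the symmetry: the last philosopher picks up forks in the opposite order, so a circular wait cannot form.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Break the symmetry: the last philosopher acquires the right
    # fork first, so all five can never each hold exactly one fork.
    first, second = (left, right) if i < N - 1 else (right, left)
    for _ in range(10):
        forks[first].acquire()
        forks[second].acquire()
        meals[i] += 1                # eat
        forks[second].release()
        forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [10, 10, 10, 10, 10]
```

Other standard fixes include allowing at most four philosophers at the table at once, or acquiring both forks inside a single critical section.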

Monitors
- A programming-language construct that provides functionality equivalent to that of semaphores and is easier to control.
- Implemented in a number of programming languages, including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java.
- Has also been implemented as a program library.
- A software module consisting of one or more procedures, an initialization sequence, and local data.

Monitors

Monitors
- Local data variables are accessible only by the monitor's procedures, not by any external procedure.
- A process enters the monitor by invoking one of its procedures.
- Only one process may be executing in the monitor at a time.
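Python has no monitor construct, but the discipline can be sketched as a class whose every public method runs under one lock, with a condition variable for waiting (the class and its bound are my own example, not from the text):

```python
import threading

class BoundedCounter:
    """Monitor-style module: one lock guards the local data, and every
    public method acquires it on entry, so only one thread is ever
    'inside the monitor' at a time."""

    def __init__(self, limit):
        self._lock = threading.Condition()  # monitor lock + condition variable
        self._value = 0                     # local data: hidden from callers
        self._limit = limit

    def increment(self):
        with self._lock:                    # enter the monitor
            while self._value >= self._limit:
                self._lock.wait()           # like cwait: release lock, sleep
            self._value += 1

    def decrement(self):
        with self._lock:
            self._value -= 1
            self._lock.notify()             # like csignal: wake one waiter

    def value(self):
        with self._lock:
            return self._value

c = BoundedCounter(3)
for _ in range(3):
    c.increment()
c.decrement()
print(c.value())  # 2
```

The while loop around wait() is essential: a woken thread must re-check the condition, since another thread may have entered the monitor first.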

Figures: Problem Solution Using a Monitor

Message Passing
When processes interact with one another, two fundamental requirements must be satisfied:
- synchronization, to enforce mutual exclusion
- communication, to exchange information
Message passing is one approach to providing both of these functions. It works with distributed systems as well as with shared-memory multiprocessor and uniprocessor systems.

Message Passing
The actual function is normally provided in the form of a pair of primitives:

    send(destination, message)
    receive(source, message)

A process sends information in the form of a message to another process designated by destination. A process receives information by executing the receive primitive, indicating the source and the message.
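Message passing alone can enforce mutual exclusion: a mailbox is primed with a single token message, and a process must receive the token before its critical section and send it back afterward. A Python sketch using queue.Queue as the mailbox (names are mine):

```python
import threading
import queue

mailbox = queue.Queue()
mailbox.put("token")          # one message acts as the permission token

counter = 0                   # shared data

def worker():
    global counter
    for _ in range(1000):
        msg = mailbox.get()   # blocking receive: wait for the token
        counter += 1          # critical section (only the holder runs it)
        mailbox.put(msg)      # send the token back to the mailbox

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 3000
```

Because the mailbox holds exactly one token, at most one process can be between its receive and its send at any moment, which is precisely the mutual exclusion condition.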

Blocking Send, Blocking Receive
Both the sender and the receiver are blocked until the message is delivered; this is sometimes referred to as a rendezvous, and it allows for tight synchronization between processes. Both the send and the receive primitives can be blocking or nonblocking; three combinations are common, although any particular system will usually implement only one or two of them.

Nonblocking Send
- Nonblocking send, blocking receive: the sender continues on, but the receiver is blocked until the requested message arrives. This is probably the most useful combination: it lets a process send one or more messages to a variety of destinations as quickly as possible, while a process that must receive a message before it can do useful work blocks until one arrives. An example is a server process that exists to provide a service or resource to other processes.
- Nonblocking send, nonblocking receive: neither party is required to wait.

The nonblocking send is the more natural choice for many concurrent programming tasks. For example, if it is used to request an output operation such as printing, it allows the requesting process to issue the request as a message and then carry on. One potential danger is that an error could lead to a situation in which a process repeatedly generates messages; because there is no blocking to discipline the process, these messages could consume system resources, including processor time and buffer space, to the detriment of other processes and the OS. The nonblocking send also places the burden on the programmer to determine that a message has been received: processes must employ reply messages to acknowledge receipt.

For the receive primitive, the blocking version appears more natural for many concurrent programming tasks, since a process that requests a message generally needs the expected information before proceeding. However, if a message is lost, which can happen in a distributed system, or if a process fails before it sends an anticipated message, the receiving process could be blocked indefinitely. This problem can be solved by a nonblocking receive, whose own danger is that a message sent after the matching receive has already executed will be lost. Other possible approaches are to allow a process to test whether a message is waiting before issuing a receive, and to allow a process to specify more than one source in a receive primitive; the latter is useful when a process is waiting for messages from more than one source and can proceed if any of them arrives.

Message Format

Sending Messages: Addressing

Indirect Addressing
- Messages are sent to a shared data structure consisting of queues that can temporarily hold them.
- These queues are referred to as mailboxes.
- One process sends a message to the mailbox, and the other process picks the message up from the mailbox.
- This allows for greater flexibility in the use of messages.

Message passing & ME

Summary
The operating system themes are multiprogramming, multiprocessing, and distributed processing. Fundamental to all of these is concurrency, which raises issues of conflict resolution and cooperation.
- Mutual exclusion: a condition in which, within a set of concurrent processes, only one is able to access a given resource or perform a given function at any time. One approach involves the use of special-purpose machine instructions.
- Semaphores: used for signaling among processes and can readily be used to enforce a mutual exclusion discipline.
- Messages: useful for the enforcement of a mutual exclusion discipline.