Process Synchronization Continued 7.2 Critical-Section Problem 7.3 Synchronization Hardware 7.4 Semaphores

Critical section  When a process executes code that manipulates shared data (or resource), we say that the process is in a critical section (CS) for that resource repeat entry section critical section exit section remainder section forever

Three Key Requirements for a Valid Solution to the Critical-Section Problem

- Mutual Exclusion: at any time, at most one process can be executing critical section (CS) code.
- Progress: if no process is in its CS and one or more processes wish to enter their CS, it must be possible for those processes to negotiate which will proceed next into its CS. No deadlock: a process in its remainder section cannot participate in this decision.
- Bounded Waiting: after a process P has made a request to enter its CS, there is a limit on the number of times that the other processes are allowed to enter their CS before P's request is granted. This requires a deterministic algorithm; otherwise P could suffer from starvation.

Faulty Algorithm 1 - Turn Taking

    turn := 0;

    Process P0:                    Process P1:
    repeat                         repeat
        while (turn != 0) {};          while (turn != 1) {};
        CS                             CS
        turn := 1;                     turn := 0;
        RS                             RS
    forever                        forever

- OK for mutual exclusion, but the processes MUST strictly alternate turns.

Faulty Algorithm 2 - Ready Flag

    flag[0] := false;              flag[1] := false;

    Process P0:                    Process P1:
    repeat                         repeat
        flag[0] := true;               flag[1] := true;
        while (flag[1]) {};            while (flag[0]) {};
        CS                             CS
        flag[0] := false;              flag[1] := false;
        RS                             RS
    forever                        forever

- Mutual exclusion is OK, but progress is not: if flag[0] := true and flag[1] := true are interleaved, neither process can enter its CS.

Peterson's Algorithm

    flag[0] := false; flag[1] := false; turn := 0;

    Process P0:
    repeat
        flag[0] := true;                  // 0 wants in
        turn := 1;                        // 0 gives a chance to 1
        while (flag[1] and turn = 1) {};
        CS
        flag[0] := false;                 // 0 is done
        RS
    forever

    Process P1:
    repeat
        flag[1] := true;                  // 1 wants in
        turn := 0;                        // 1 gives a chance to 0
        while (flag[0] and turn = 0) {};
        CS
        flag[1] := false;                 // 1 is done
        RS
    forever

- Peterson's algorithm has been proved correct.
- turn can only be 0 or 1, even if both flags are set to true.
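Below is a minimal runnable sketch of Peterson's algorithm in C (illustrative, not from the slides). It assumes two pthreads and uses C11 sequentially consistent atomics for flag and turn, since on modern compilers and CPUs the algorithm only works if these accesses are not reordered; the names lock, unlock, worker and the loop counts are made up for the example.

    /* Peterson's algorithm for two threads -- illustrative sketch. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <pthread.h>

    static atomic_bool flag[2];          /* flag[i] = true: thread i wants in */
    static atomic_int  turn;             /* whose turn it is to yield */
    static long counter = 0;             /* shared data protected by the lock */

    static void lock(int i) {
        int other = 1 - i;
        atomic_store(&flag[i], true);    /* i wants in */
        atomic_store(&turn, other);      /* i gives a chance to the other */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                            /* busy wait */
    }

    static void unlock(int i) {
        atomic_store(&flag[i], false);   /* i is done */
    }

    static void *worker(void *arg) {
        int id = (int)(long)arg;
        for (int k = 0; k < 1000000; k++) {
            lock(id);
            counter++;                   /* critical section */
            unlock(id);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, worker, (void *)0L);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }

Compiled with something like cc -std=c11 -pthread, the final count should always be 2000000 if mutual exclusion holds.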

N-Process Solution: Bakery Algorithm

- "Take a number for better service..."
- Before entering the CS, each Pi takes a number.
- The holder of the smallest number enters the CS next... but more than one process can get the same number.
- If Pi and Pj receive the same number, the lowest-numbered process is served first.
- A process resets its number to 0 in the exit section.

Bakery Algorithm  Shared data: choosing: array[0..n-1] of boolean;  initialized to false number: array[0..n-1] of integer;  initialized to 0

Bakery Algorithm

    Process Pi:
    repeat
        choosing[i] := true;
        number[i] := max(number[0], ..., number[n-1]) + 1;
        choosing[i] := false;
        for j := 0 to n-1 do {
            while (choosing[j]) {};
            while (number[j] != 0 and (number[j], j) < (number[i], i)) {};
        }
        CS
        number[i] := 0;
        RS
    forever
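For comparison, here is a C sketch of the same bakery logic (an illustration, not the slides' code). It assumes a fixed thread count N and sequentially consistent C11 atomics so the accesses to choosing and number keep their program order; the names bakery_lock and bakery_unlock are made up for the example.

    /* Lamport's bakery algorithm for N threads -- illustrative sketch. */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define N 4                              /* number of threads (assumed) */

    static atomic_bool choosing[N];
    static atomic_int  number[N];

    static void bakery_lock(int i) {
        atomic_store(&choosing[i], true);
        int max = 0;
        for (int j = 0; j < N; j++) {        /* take a ticket one larger than */
            int n = atomic_load(&number[j]); /* any ticket seen so far        */
            if (n > max) max = n;
        }
        atomic_store(&number[i], max + 1);
        atomic_store(&choosing[i], false);

        for (int j = 0; j < N; j++) {
            while (atomic_load(&choosing[j]))
                ;                            /* wait until j has its ticket */
            /* wait while j holds a smaller ticket; ties broken by thread id */
            while (atomic_load(&number[j]) != 0 &&
                   (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                    (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)))
                ;
        }
    }

    static void bakery_unlock(int i) {
        atomic_store(&number[i], 0);         /* exit section */
    }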

Important Observation about Process Interleaving

- Even a simple high-level language assignment statement can be interleaved.

    One HLL statement:       Two machine instructions:
    A := B;                  load  R1, B
                             store R1, A

- This is why it is possible for two processes to "take" the same number.
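The following small C program (illustrative, not part of the slides) makes this interleaving visible: two threads increment a shared counter with a plain counter++, which compiles to a load/add/store sequence, so some increments are usually lost.

    /* Lost-update demonstration -- sketch. counter++ is not atomic. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                 /* shared, unprotected */

    static void *inc(void *arg) {
        for (int k = 0; k < 1000000; k++)
            counter++;                       /* load R1, counter; add; store */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, inc, NULL);
        pthread_create(&b, NULL, inc, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld, expected 2000000\n", counter);  /* usually less */
        return 0;
    }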

Bakery Algorithm: Proof

- Mutual Exclusion: if Pi is in its CS and Pk has already chosen its number, then (number[i], i) < (number[k], k).
  - If both had their numbers before the decision, this must be true, or Pi would not have been chosen.
  - If Pi entered its CS before Pk got its number, Pk got a bigger number.
  - So Pk cannot enter its CS until Pi exits.
- Progress, Bounded Waiting: processes enter the CS in FCFS order.

Drawbacks of Software Solutions

- Complicated to program.
- Busy waiting (wasted CPU cycles).
- It would be more efficient to block processes that are waiting (just as if they had requested I/O). This suggests implementing the permission/waiting function in the operating system.
- But first, let's look at some hardware approaches (7.3 Synchronization Hardware).

Hardware Solution 1: Disable Interrupts

    Process Pi:
    repeat
        disable interrupts
        critical section
        enable interrupts
        remainder section
    forever

- On a uniprocessor, mutual exclusion is preserved: while in the CS, nothing else can run because preemption is impossible.
- On a multiprocessor, mutual exclusion is not achieved: interrupts are "per-CPU".
- Generally not a practical solution for user programs, but it could be used inside an OS.

Hardware Solution 2: Special Machine Instructions

- Normally, the memory system restricts access to any particular memory word to one CPU at a time.
- A useful extension: machine instructions that perform two actions atomically on the same memory location (e.g., testing and writing).
- The execution of such an instruction is mutually exclusive on that location (even with multiple CPUs).
- These instructions can be used to provide mutual exclusion, but more complex algorithms are needed to satisfy the requirements of progress and bounded waiting.

The Test-and-Set Instruction

- Test-and-Set is non-interruptible (atomic): one instruction reads and then writes the same memory location.
- Test-and-Set expressed in C:

    int testset(int *i) {
        int rv;
        rv = *i;
        *i = 1;
        return rv;
    }

- An algorithm that uses testset for mutual exclusion:
  - The shared variable lock is initialized to 0.
  - Only the first Pi that sets lock enters the CS.

    Process Pi:
    repeat
        while (testset(&lock)) {};
        CS
        lock := 0;
        RS
    forever
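Modern C exposes exactly this kind of instruction through the C11 atomic_flag type, whose test-and-set operation is guaranteed atomic. The sketch below (illustrative; the names acquire and release are assumptions) builds the same busy-waiting lock on top of it.

    /* Spinlock built on an atomic test-and-set -- illustrative sketch. */
    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear == unlocked */

    static void acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                                      /* spin until the old value was 0 */
    }

    static void release(void) {
        atomic_flag_clear(&lock);                  /* lock := 0 */
    }

A thread brackets its critical section with acquire() and release(); as on the slide, whichever waiting thread wins the race on the flag enters next.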

Test-and-Set Instruction

- Mutual exclusion is assured: if Pi enters the CS, the other Pj are busy waiting.
- Satisfies the progress requirement.
- When Pi exits the CS, the selection of the next Pj to enter the CS is arbitrary: no bounded waiting (it's a race!), so starvation is possible. See Fig. 7.10 for a (complicated) solution.
- Some processors (e.g., the Pentium) provide an atomic Swap(a, b) instruction that swaps the contents of a and b. It has the same drawbacks as Test-and-Set.

Using Swap for Mutual Exclusion

- The shared variable lock is initialized to 0.
- Each Pi has a local variable key.
- The only Pi that can enter the CS is the one that finds lock = 0.
- That Pi excludes all other Pj by setting lock to 1 (same effect as Test-and-Set).

    Process Pi:
    repeat
        key := 1;
        repeat
            swap(lock, key);
        until key = 0;
        CS
        lock := 0;
        RS
    forever
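C11's atomic_exchange provides an atomic swap (it writes a new value and returns the old one), so the same scheme can be sketched as follows (illustrative code, not from the slides; acquire and release are assumed names).

    /* Mutual exclusion via an atomic swap -- illustrative sketch. */
    #include <stdatomic.h>

    static atomic_int lock = 0;                    /* 0 == free, 1 == held */

    static void acquire(void) {
        int key = 1;
        do {
            key = atomic_exchange(&lock, key);     /* swap(lock, key) */
        } while (key != 0);                        /* repeat until we saw lock == 0 */
    }

    static void release(void) {
        atomic_store(&lock, 0);
    }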

Operating System or Programming Language Support for Concurrency

- Solutions based on machine instructions such as Test-and-Set involve tricky coding.
- We can build better solutions by providing synchronization mechanisms in the operating system or programming language (7.4 Semaphores).
- This leaves the really tricky code to systems programmers.

Semaphores  A Semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations: wait(S)  sometimes called P() Dutch proberen: “to test” signal(S)  sometimes called V() Dutch verhogen: “to increment”

Busy-Waiting Semaphores

- The simplest way to implement semaphores.
- Useful when critical sections last for a short time, or when we have lots of CPUs.
- S is initialized to a positive value (to allow someone in at the beginning).

    wait(S):    while S <= 0 do ;
                S--;

    signal(S):  S++;
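A busy-waiting counting semaphore can be sketched in C as below (illustrative; the type and function names are assumptions, chosen to avoid clashing with the POSIX sem_t API). The "test S > 0 and decrement" step is made atomic with a compare-and-swap, matching the atomicity requirement discussed on the next slide.

    /* Busy-waiting counting semaphore -- illustrative sketch. */
    #include <stdatomic.h>

    typedef struct { atomic_int value; } semaphore;

    static void sem_init_busy(semaphore *s, int initial) {
        atomic_init(&s->value, initial);       /* positive value lets some in */
    }

    static void wait_sem(semaphore *s) {       /* P() / proberen */
        for (;;) {
            int v = atomic_load(&s->value);
            if (v > 0 &&
                atomic_compare_exchange_weak(&s->value, &v, v - 1))
                return;                        /* saw v > 0 and atomically took it */
            /* otherwise spin and retry */
        }
    }

    static void signal_sem(semaphore *s) {     /* V() / verhogen */
        atomic_fetch_add(&s->value, 1);
    }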

Atomicity in Semaphores

- The test-and-decrement sequence in wait must be atomic, but not the loop around it.
- signal is atomic.
- No two processes can be allowed to execute these atomic sections simultaneously.
- This can be implemented by other mechanisms (in the OS): Test-and-Set, or disabling interrupts.

    [Flowchart: wait(S) loops while S <= 0 is true; the test of S and the decrement S-- together form the single atomic step.]

Using Semaphores to Solve Critical-Section Problems

- Works for n processes.
- Initialize the semaphore mutex to 1; then only one process at a time is allowed into the CS (mutual exclusion).
- To allow k processes into the CS at a time, simply initialize mutex to k.

    Process Pi:
    repeat
        wait(mutex);
        CS
        signal(mutex);
        RS
    forever
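A POSIX-semaphore sketch of this pattern is shown below (illustrative; worker and shared_data are made-up names). sem_init's third argument is the initial value: 1 for a mutex, or k to admit k threads at a time.

    /* Semaphore initialized to 1 used as a mutex -- illustrative sketch. */
    #include <semaphore.h>
    #include <pthread.h>

    static sem_t mutex;
    static int shared_data;

    static void *worker(void *arg) {
        sem_wait(&mutex);        /* wait(mutex): entry section */
        shared_data++;           /* critical section */
        sem_post(&mutex);        /* signal(mutex): exit section */
        return NULL;             /* remainder section */
    }

    int main(void) {
        sem_init(&mutex, 0, 1);  /* initial value 1: one thread in the CS */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&mutex);
        return 0;
    }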

Semaphores in Action

    Initialize mutex to 1.

    Process Pi:                Process Pj:
    repeat                     repeat
        wait(mutex);               wait(mutex);
        CS                         CS
        signal(mutex);             signal(mutex);
        RS                         RS
    forever                    forever

Synchronizing Processes Using Semaphores

- Two processes: P1 and P2.
- Statement S1 in P1 needs to be performed before statement S2 in P2.
- We want a way to make P2 wait until P1 tells it that it is OK to proceed.
- Define a semaphore synch, initialized to 0.
- Put this in P2:   wait(synch); S2;
- And this in P1:   S1; signal(synch);
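A POSIX-semaphore sketch of this ordering pattern (illustrative; the thread functions p1 and p2 and the printed statements are made-up stand-ins for S1 and S2):

    /* Enforcing "S1 before S2" with a semaphore initialized to 0 -- sketch. */
    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t synch;

    static void *p1(void *arg) {
        printf("S1\n");          /* statement S1 */
        sem_post(&synch);        /* signal(synch) */
        return NULL;
    }

    static void *p2(void *arg) {
        sem_wait(&synch);        /* wait(synch): blocks until P1 signals */
        printf("S2\n");          /* statement S2 */
        return NULL;
    }

    int main(void) {
        sem_init(&synch, 0, 0);  /* initialized to 0 */
        pthread_t t1, t2;
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&synch);
        return 0;
    }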

Busy-Waiting Semaphores: Observations

- When S > 0: the number of processes that can execute wait(S) without being blocked is S.
- When S = 0: one or more processes are waiting on S.
- The semaphore is never negative.
- When S becomes > 0 again, the first process that tests S enters its CS: a random selection (a race), which fails the bounded-waiting condition.