1 CS 333 Introduction to Operating Systems Class 5 – Semaphores and Classical Synchronization Problems Jonathan Walpole Computer Science Portland State University

2 Semaphores  An abstract data type that can be used for condition synchronization and mutual exclusion  Condition synchronization  wait until invariant holds before proceeding  signal when invariant holds so others may proceed  Mutual exclusion  only one at a time in a critical section

3 Semaphores  An abstract data type  containing an integer variable (S)  Two operations: Wait (S) and Signal (S)  Alternative names for the two operations  Wait(S) = Down(S) = P(S)  Signal(S) = Up(S) = V(S)

4 Classical Definition of Wait and Signal

Wait(S) {
    while S <= 0 do noop;   /* busy wait! */
    S = S - 1;              /* S >= 0 */
}

Signal(S) {
    S = S + 1;
}
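
As slide 19 notes later, Wait and Signal only work if the test and the decrement happen atomically; the pseudocode above is not atomic as written. Purely as an illustration (my own sketch, not from the slides), here is a busy-waiting counting semaphore in C11 whose take-one-unit step is a single atomic compare-and-swap:

#include <stdatomic.h>

typedef atomic_int spin_sem_t;

void spin_wait(spin_sem_t *S) {
    int v = atomic_load(S);
    for (;;) {
        /* busy wait until the value is positive, then try to take one unit */
        if (v > 0 && atomic_compare_exchange_weak(S, &v, v - 1))
            return;                      /* decremented: S stays >= 0 */
        v = atomic_load(S);              /* re-read and spin again */
    }
}

void spin_signal(spin_sem_t *S) {
    atomic_fetch_add(S, 1);
}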

5 Problems with the classical definition  Waiting threads hold the CPU  Wastes CPU time on single-CPU systems  Requires preemption to avoid deadlock

6 Blocking implementation of semaphores

Semaphore S has a value, S.val, and a thread list, S.list.

Wait(S)
    S.val = S.val - 1
    If S.val < 0 {              /* a negative S.val is the # of waiting threads */
        add calling thread to S.list;
        block;                  /* sleep */
    }

Signal(S)
    S.val = S.val + 1
    If S.val <= 0 {
        remove a thread T from S.list;
        wakeup(T);
    }

7 Using semaphores  Semaphores can be used for mutual exclusion  Semaphore value initialized to 1  Wait on entry to critical section  Signal on exit from critical section
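
As a concrete illustration (my own sketch; the worker function, counter, and iteration count are invented for the example), two threads increment a shared counter under a POSIX semaphore initialized to 1, so every increment happens inside the critical section:

#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                         /* semaphore used as a lock */
long shared_counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);            /* wait on entry to the critical section */
        shared_counter++;            /* critical section */
        sem_post(&mutex);            /* signal on exit from the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);          /* initialized to 1 == unlocked */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* shared_counter is reliably 200000 because the increments were mutually exclusive */
    return 0;
}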

8 Using Semaphores for Mutex

semaphore mutex = 1   -- unlocked

Thread A:                      Thread B:
repeat                         repeat
  wait(mutex);                   wait(mutex);
  critical section               critical section
  signal(mutex);                 signal(mutex);
  remainder section              remainder section
until FALSE                    until FALSE

9 Using Semaphores for Mutex

semaphore mutex = 0   -- locked

Thread A:                      Thread B:
repeat                         repeat
  wait(mutex);                   wait(mutex);
  critical section               critical section
  signal(mutex);                 signal(mutex);
  remainder section              remainder section
until FALSE                    until FALSE

10 Using Semaphores for Mutex

semaphore mutex = 0   -- locked

Thread A:                      Thread B:
repeat                         repeat
  wait(mutex);                   wait(mutex);
  critical section               critical section
  signal(mutex);                 signal(mutex);
  remainder section              remainder section
until FALSE                    until FALSE

11 Using Semaphores for Mutex

semaphore mutex = 0   -- locked

Thread A:                      Thread B:
repeat                         repeat
  wait(mutex);                   wait(mutex);
  critical section               critical section
  signal(mutex);                 signal(mutex);
  remainder section              remainder section
until FALSE                    until FALSE

12 Using Semaphores for Mutex

semaphore mutex = 0   -- locked

Thread A:                      Thread B:
repeat                         repeat
  wait(mutex);                   wait(mutex);
  critical section               critical section
  signal(mutex);                 signal(mutex);
  remainder section              remainder section
until FALSE                    until FALSE

13 Using Semaphores for Mutex

semaphore mutex = 1   -- unlocked

This thread can now be released!

Thread A:                      Thread B:
repeat                         repeat
  wait(mutex);                   wait(mutex);
  critical section               critical section
  signal(mutex);                 signal(mutex);
  remainder section              remainder section
until FALSE                    until FALSE

14 Using Semaphores for Mutex

semaphore mutex = 0   -- locked

Thread A:                      Thread B:
repeat                         repeat
  wait(mutex);                   wait(mutex);
  critical section               critical section
  signal(mutex);                 signal(mutex);
  remainder section              remainder section
until FALSE                    until FALSE

15 Using semaphores  Semaphores can also be used to count accesses to a resource  Semaphore value is initialized to the number of successive waits that should succeed without blocking
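
For instance (a sketch of mine, with an arbitrary limit of 4), a semaphore initialized to 4 admits four concurrent holders of a resource and blocks the fifth until one of them signals:

#include <semaphore.h>

sem_t pool;                  /* assume sem_init(&pool, 0, 4) ran at startup */

void use_resource(void) {
    sem_wait(&pool);         /* the 5th concurrent caller blocks here       */
    /* ... use one of the 4 interchangeable resource instances ... */
    sem_post(&pool);         /* release: one blocked caller may now proceed */
}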

16 Exercise: Implement producer/consumer

Global variables:
    semaphore full_buffs = ?;
    semaphore empty_buffs = ?;
    char buf[n];
    int InP, OutP;

thread producer {
    while(1) {
        // Produce char c...
        buf[InP] = c
        InP = InP + 1 mod n
    }
}

thread consumer {
    while(1) {
        c = buf[OutP]
        OutP = OutP + 1 mod n
        // Consume char...
    }
}

17 Exercise: Implement producer/consumer

Global variables:
    semaphore full_buffs = 0;
    semaphore empty_buffs = n;
    char buf[n];
    int InP, OutP;

thread producer {
    while(1) {
        // Produce char c...
        buf[InP] = c
        InP = InP + 1 mod n
    }
}

thread consumer {
    while(1) {
        c = buf[OutP]
        OutP = OutP + 1 mod n
        // Consume char...
    }
}

18 Counting semaphores in producer/consumer

Global variables:
    semaphore full_buffs = 0;
    semaphore empty_buffs = n;
    char buf[n];
    int InP, OutP;

thread producer {
    while(1) {
        // Produce char c...
        wait(empty_buffs)
        buf[InP] = c
        InP = InP + 1 mod n
        signal(full_buffs)
    }
}

thread consumer {
    while(1) {
        wait(full_buffs)
        c = buf[OutP]
        OutP = OutP + 1 mod n
        signal(empty_buffs)
        // Consume char...
    }
}
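
A runnable rendering of this scheme with POSIX semaphores and pthreads (my sketch: one producer, one consumer, n = 8 slots, and the letters a-z as the produced data; with multiple producers or consumers you would also need a lock around InP and OutP):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                              /* number of buffer slots (assumption) */

char buf[N];
int InP = 0, OutP = 0;
sem_t empty_buffs;                       /* counts free slots, starts at N  */
sem_t full_buffs;                        /* counts filled slots, starts at 0 */

void *producer(void *arg) {
    for (char c = 'a'; c <= 'z'; c++) {
        sem_wait(&empty_buffs);          /* wait for a free slot */
        buf[InP] = c;
        InP = (InP + 1) % N;
        sem_post(&full_buffs);           /* announce a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 26; i++) {
        sem_wait(&full_buffs);           /* wait for a filled slot */
        char c = buf[OutP];
        OutP = (OutP + 1) % N;
        sem_post(&empty_buffs);          /* announce a free slot */
        putchar(c);
    }
    putchar('\n');
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&full_buffs, 0, 0);
    sem_init(&empty_buffs, 0, N);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}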

19 Implementing semaphores  Wait () and Signal () are assumed to be atomic How can we ensure that they are atomic?

20 Implementing semaphores  Wait() and Signal() are assumed to be atomic How can we ensure that they are atomic?  Implement Wait() and Signal() as system calls?  how can the kernel ensure Wait() and Signal() are completed atomically?  avoid scheduling another thread when they are in progress?  … but how exactly would you do that?  … and what about semaphores for use in the kernel?

21 Semaphores with interrupt disabling

struct semaphore {
    int val;
    list L;
}

Wait(semaphore sem)
    DISABLE_INTS
    sem.val--
    if (sem.val < 0) {
        add thread to sem.L
        block(thread)
    }
    ENABLE_INTS

Signal(semaphore sem)
    DISABLE_INTS
    sem.val++
    if (sem.val <= 0) {
        th = remove next thread from sem.L
        wakeup(th)
    }
    ENABLE_INTS

22 Semaphores with interrupt disabling

struct semaphore {
    int val;
    list L;
}

Wait(semaphore sem)
    DISABLE_INTS
    sem.val--
    if (sem.val < 0) {
        add thread to sem.L
        block(thread)
    }
    ENABLE_INTS

Signal(semaphore sem)
    DISABLE_INTS
    sem.val++
    if (sem.val <= 0) {
        th = remove next thread from sem.L
        wakeup(th)
    }
    ENABLE_INTS

23 But what are block() and wakeup()?  If block stops a thread from executing, how, where, and when does it return?  which thread enables interrupts following Wait()?  the thread that called block() shouldn’t return until another thread has called wakeup() !  … but how does that other thread get to run?  … where exactly does the thread switch occur?  Scheduler routines such as block() contain calls to switch() which is called in one thread but returns in a different one!!

24 Thread switch  If thread switch is called with interrupts disabled  where are they enabled?  … and in which thread?

25 Semaphores using atomic instructions  Implementing semaphores with interrupt disabling only works on uniprocessors  What should we do on a multiprocessor?  As we saw earlier, hardware provides special atomic instructions for synchronization  test-and-set lock (TSL)  compare-and-swap (CAS)  etc.  Semaphores can be built using atomic instructions: 1. build mutex locks from atomic instructions 2. build semaphores from mutex locks

26 Building spinning mutex locks using TSL

mutex_lock:
    TSL REGISTER,MUTEX    | copy mutex to register and set mutex to 1
    CMP REGISTER,#0       | was mutex zero?
    JZE ok                | if it was zero, mutex is unlocked, so return
    JMP mutex_lock        | try again
ok: RET                   | return to caller; enter critical section

mutex_unlock:
    MOVE MUTEX,#0         | store a 0 in mutex
    RET                   | return to caller
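
The same spinning lock can be written portably with C11 atomics, where atomic_flag_test_and_set plays the role of TSL (a sketch, not part of the original deck):

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

void spin_lock(spinlock_t *m) {
    /* test-and-set returns the previous value; loop until it was clear */
    while (atomic_flag_test_and_set(&m->locked))
        ;                                  /* busy wait */
}

void spin_unlock(spinlock_t *m) {
    atomic_flag_clear(&m->locked);         /* store "unlocked" */
}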

27 To block or not to block?  Spin-locks do busy waiting  wastes CPU cycles on uni-processors  Why?  Blocking locks put the thread to sleep  may waste CPU cycles on multi-processors  Why?  … and we need a spin lock to implement blocking on a multiprocessor anyway!

28 Building semaphores using mutex locks

Problem: Implement a counting semaphore
    Up()
    Down()
... using just Mutex locks

29 How about two “blocking” mutex locks?

var cnt: int = 0            -- Signal count
var m1: Mutex = unlocked    -- Protects access to “cnt”
    m2: Mutex = locked      -- Locked when waiting

Down():
    Lock(m1)
    cnt = cnt - 1
    if cnt < 0
        Unlock(m1)
        Lock(m2)
    else
        Unlock(m1)
    endIf

Up():
    Lock(m1)
    cnt = cnt + 1
    if cnt <= 0
        Unlock(m2)
    endIf
    Unlock(m1)

30 How about two “blocking” mutex locks?

var cnt: int = 0            -- Signal count
var m1: Mutex = unlocked    -- Protects access to “cnt”
    m2: Mutex = locked      -- Locked when waiting

Down():
    Lock(m1)
    cnt = cnt - 1
    if cnt < 0
        Unlock(m1)
        Lock(m2)
    else
        Unlock(m1)
    endIf

Up():
    Lock(m1)
    cnt = cnt + 1
    if cnt <= 0
        Unlock(m2)
    endIf
    Unlock(m1)

Contains a Race Condition!

31 Oops! How about this then?

var cnt: int = 0            -- Signal count
var m1: Mutex = unlocked    -- Protects access to “cnt”
    m2: Mutex = locked      -- Locked when waiting

Down():
    Lock(m1)
    cnt = cnt - 1
    if cnt < 0
        Lock(m2)
        Unlock(m1)
    else
        Unlock(m1)
    endIf

Up():
    Lock(m1)
    cnt = cnt + 1
    if cnt <= 0
        Unlock(m2)
    endIf
    Unlock(m1)

32 Oops! How about this then?

var cnt: int = 0            -- Signal count
var m1: Mutex = unlocked    -- Protects access to “cnt”
    m2: Mutex = locked      -- Locked when waiting

Down():
    Lock(m1)
    cnt = cnt - 1
    if cnt < 0
        Lock(m2)
        Unlock(m1)
    else
        Unlock(m1)
    endIf

Up():
    Lock(m1)
    cnt = cnt + 1
    if cnt <= 0
        Unlock(m2)
    endIf
    Unlock(m1)

Contains a Deadlock!

33 Ok! Let's have another try!

var cnt: int = 0            -- Signal count
var m1: Mutex = unlocked    -- Protects access to “cnt”
    m2: Mutex = locked      -- Locked when waiting

Down():
    Lock(m2)
    Lock(m1)
    cnt = cnt - 1
    if cnt > 0
        Unlock(m2)
    endIf
    Unlock(m1)

Up():
    Lock(m1)
    cnt = cnt + 1
    if cnt = 1
        Unlock(m2)
    endIf
    Unlock(m1)

... is this solution valid?

34 What about this solution?

Mutex m1, m2;     // binary semaphores
int C = N;        // N is # locks
int W = 0;        // W is # wakeups

Down():
    Lock(m1);
    C = C - 1;
    if (C < 0)
        Unlock(m1);
        Lock(m2);
        Lock(m1);
        W = W - 1;
        if (W > 0)
            Unlock(m2);
        endif;
    else
        Unlock(m1);
    endif;

Up():
    Lock(m1);
    C = C + 1;
    if (C <= 0)
        W = W + 1;
        Unlock(m2);
    endif;
    Unlock(m1);
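
One practical wrinkle with all of these constructions is that a real mutex, such as a POSIX pthread_mutex_t, must be unlocked by the thread that locked it, which rules out using it directly as the "locked when waiting" lock m2. A common alternative (a different technique than the slide's two-binary-lock construction, sketched here with invented names) builds a counting semaphore from a mutex plus a condition variable:

#include <pthread.h>

typedef struct {
    int count;
    pthread_mutex_t m;
    pthread_cond_t  c;
} csem_t;

void csem_init(csem_t *s, int value) {
    s->count = value;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void csem_down(csem_t *s) {
    pthread_mutex_lock(&s->m);
    while (s->count == 0)                 /* re-check after every wakeup */
        pthread_cond_wait(&s->c, &s->m);  /* atomically releases m while sleeping */
    s->count--;
    pthread_mutex_unlock(&s->m);
}

void csem_up(csem_t *s) {
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->c);           /* wake one sleeper, if any */
    pthread_mutex_unlock(&s->m);
}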

35 Classical Synchronization problems  Producer Consumer (bounded buffer)  Dining philosophers  Sleeping barber  Readers and writers

36 Producer consumer problem

 Also known as the bounded buffer problem
 Producer and consumer are separate threads

[Figure: a pool of 8 buffers, with the producer inserting at InP and the consumer removing at OutP]

37 Is this a valid solution?

Global variables:
    char buf[n]
    int InP = 0     // place to add
    int OutP = 0    // place to get
    int count

thread producer {
    while(1) {
        // Produce char c
        while (count == n) { no_op }
        buf[InP] = c
        InP = InP + 1 mod n
        count++
    }
}

thread consumer {
    while(1) {
        while (count == 0) { no_op }
        c = buf[OutP]
        OutP = OutP + 1 mod n
        count--
        // Consume char
    }
}

38 Does this solution work?

Global variables:
    semaphore full_buffs = 0;
    semaphore empty_buffs = n;
    char buf[n];
    int InP, OutP;

thread producer {
    while(1) {
        // Produce char c...
        down(empty_buffs)
        buf[InP] = c
        InP = InP + 1 mod n
        up(full_buffs)
    }
}

thread consumer {
    while(1) {
        down(full_buffs)
        c = buf[OutP]
        OutP = OutP + 1 mod n
        up(empty_buffs)
        // Consume char...
    }
}

39 Producer consumer problem

 What is the shared state in the last solution?
 Does it apply mutual exclusion? If so, how?
 Producer and consumer are separate threads

[Figure: the same pool of 8 buffers, with InP and OutP pointers]

40 Dining philosophers problem

 Five philosophers sit at a table
 One fork between each philosopher
 Why do they need to synchronize?
 How should they do it?

Each philosopher is modeled with a thread:

while(TRUE) {
    Think();
    Grab first fork;
    Grab second fork;
    Eat();
    Put down first fork;
    Put down second fork;
}

41 Is this a valid solution?

#define N 5

Philosopher() {
    while(TRUE) {
        Think();
        take_fork(i);
        take_fork((i+1) % N);
        Eat();
        put_fork(i);
        put_fork((i+1) % N);
    }
}

42 Working towards a solution …

#define N 5

Philosopher() {
    while(TRUE) {
        Think();
        take_fork(i);
        take_fork((i+1) % N);
        Eat();
        put_fork(i);
        put_fork((i+1) % N);
    }
}

(Combine the two take_fork calls into take_forks(i), and the two put_fork calls into put_forks(i) — see the next slide.)

43 Working towards a solution …

#define N 5

Philosopher() {
    while(TRUE) {
        Think();
        take_forks(i);
        Eat();
        put_forks(i);
    }
}

44 Picking up forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

take_forks(int i) {
    wait(mutex);
    state[i] = HUNGRY;
    test(i);
    signal(mutex);
    wait(sem[i]);
}

// only called with mutex set!
test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT] != EATING &&
        state[RIGHT] != EATING) {
        state[i] = EATING;
        signal(sem[i]);
    }
}

45 Putting down forks

int state[N]
semaphore mutex = 1
semaphore sem[i]

put_forks(int i) {
    wait(mutex);
    state[i] = THINKING;
    test(LEFT);
    test(RIGHT);
    signal(mutex);
}

// only called with mutex set!
test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT] != EATING &&
        state[RIGHT] != EATING) {
        state[i] = EATING;
        signal(sem[i]);
    }
}
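
Putting the pieces together, a C sketch with POSIX semaphores (my own rendering: the LEFT/RIGHT macros, the THINKING/HUNGRY/EATING constants, and the initial values — mutex at 1, every sem[i] at 0 — are filled in from the standard treatment rather than spelled out on the slides):

#include <pthread.h>
#include <semaphore.h>

#define N        5
#define LEFT(i)  (((i) + N - 1) % N)    /* index of the left neighbour  */
#define RIGHT(i) (((i) + 1) % N)        /* index of the right neighbour */

enum { THINKING, HUNGRY, EATING };

int   state[N];
sem_t mutex;                 /* protects state[]; initialized to 1         */
sem_t sem[N];                /* one per philosopher; each initialized to 0 */

/* only called with mutex held */
static void test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        sem_post(&sem[i]);               /* philosopher i may proceed */
    }
}

void take_forks(int i) {
    sem_wait(&mutex);
    state[i] = HUNGRY;
    test(i);                             /* grab both forks if they are free */
    sem_post(&mutex);
    sem_wait(&sem[i]);                   /* block here if they were not */
}

void put_forks(int i) {
    sem_wait(&mutex);
    state[i] = THINKING;
    test(LEFT(i));                       /* left neighbour may be able to eat now  */
    test(RIGHT(i));                      /* right neighbour may be able to eat now */
    sem_post(&mutex);
}

void init_table(void) {
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < N; i++)
        sem_init(&sem[i], 0, 0);
}

void *philosopher(void *arg) {           /* one thread per philosopher */
    int i = *(int *)arg;
    for (;;) {
        /* think(); */
        take_forks(i);
        /* eat();   */
        put_forks(i);
    }
}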

46 Dining philosophers  Is the previous solution correct?  What does it mean for it to be correct?  Is there an easier way?

47 The sleeping barber problem

48 The sleeping barber problem  Barber:  While there are people waiting for a hair cut, put one in the barber chair, and cut their hair  When done, move to the next customer  Else go to sleep, until someone comes in  Customer:  If the barber is asleep, wake him up for a haircut  If someone is getting a haircut, wait for the barber to become free by sitting in a chair  If all chairs are full, leave the barbershop

49 Designing a solution  How will we model the barber and customers?  What state variables do we need? .. and which ones are shared?  …. and how will we protect them?  How will the barber sleep?  How will the barber wake up?  How will customers wait?  What problems do we need to look out for?

50 Is this a good solution?

const CHAIRS = 5
var customers: Semaphore
    barbers: Semaphore
    lock: Mutex
    numWaiting: int = 0

Barber Thread:
    while true
        Wait(customers)
        Lock(lock)
        numWaiting = numWaiting - 1
        Signal(barbers)
        Unlock(lock)
        CutHair()
    endWhile

Customer Thread:
    Lock(lock)
    if numWaiting < CHAIRS
        numWaiting = numWaiting + 1
        Signal(customers)
        Unlock(lock)
        Wait(barbers)
        GetHaircut()
    else                        -- give up & go home
        Unlock(lock)
    endIf
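
A direct C rendering of the pseudocode above (my sketch; both semaphores are assumed to start at 0, and the haircut routines are left as comments):

#include <pthread.h>
#include <semaphore.h>

#define CHAIRS 5

sem_t customers;             /* # of waiting customers; assume sem_init(&customers, 0, 0) */
sem_t barbers;               /* barber availability; assume sem_init(&barbers, 0, 0)      */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int numWaiting = 0;          /* customers sitting in the waiting chairs */

void *barber(void *arg) {
    for (;;) {
        sem_wait(&customers);            /* sleep until a customer arrives    */
        pthread_mutex_lock(&lock);
        numWaiting = numWaiting - 1;
        sem_post(&barbers);              /* announce the barber is available  */
        pthread_mutex_unlock(&lock);
        /* CutHair(); */
    }
    return NULL;
}

void *customer(void *arg) {
    pthread_mutex_lock(&lock);
    if (numWaiting < CHAIRS) {
        numWaiting = numWaiting + 1;
        sem_post(&customers);            /* wake the barber if he is asleep   */
        pthread_mutex_unlock(&lock);
        sem_wait(&barbers);              /* wait until the barber is free     */
        /* GetHaircut(); */
    } else {
        pthread_mutex_unlock(&lock);     /* shop is full: give up and go home */
    }
    return NULL;
}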

51 The readers and writers problem  Multiple readers and writers want to access a database (each one is a thread)  Multiple readers can proceed concurrently  Writers must synchronize with readers and other writers  only one writer at a time!  when someone is writing, there must be no readers! Goals:  Maximize concurrency.  Prevent starvation.

52 Designing a solution  How will we model the readers and writers?  What state variables do we need? .. and which ones are shared?  …. and how will we protect them?  How will the writers wait?  How will the writers wake up?  How will readers wait?  How will the readers wake up?  What problems do we need to look out for?

53 Is this a valid solution to readers & writers?

var mut: Mutex = unlocked
    db:  Semaphore = 1
    rc:  int = 0

Reader Thread:
    while true
        Lock(mut)
        rc = rc + 1
        if rc == 1
            Wait(db)
        endIf
        Unlock(mut)
        ... Read shared data ...
        Lock(mut)
        rc = rc - 1
        if rc == 0
            Signal(db)
        endIf
        Unlock(mut)
        ... Remainder Section ...
    endWhile

Writer Thread:
    while true
        ... Remainder Section ...
        Wait(db)
        ... Write shared data ...
        Signal(db)
    endWhile
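
A direct C rendering of the same reader and writer logic (my sketch; db starts at 1, rc counts active readers, and the read/write sections are left as comments):

#include <pthread.h>
#include <semaphore.h>

sem_t db;                    /* controls access to the database; assume sem_init(&db, 0, 1) */
pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;
int rc = 0;                  /* number of readers currently reading */

void *reader(void *arg) {
    for (;;) {
        pthread_mutex_lock(&mut);
        rc = rc + 1;
        if (rc == 1) sem_wait(&db);      /* first reader in locks out writers */
        pthread_mutex_unlock(&mut);

        /* ... read shared data ... */

        pthread_mutex_lock(&mut);
        rc = rc - 1;
        if (rc == 0) sem_post(&db);      /* last reader out lets writers in */
        pthread_mutex_unlock(&mut);

        /* ... remainder section ... */
    }
    return NULL;
}

void *writer(void *arg) {
    for (;;) {
        /* ... remainder section ... */
        sem_wait(&db);                   /* one writer at a time, no readers */
        /* ... write shared data ... */
        sem_post(&db);
    }
    return NULL;
}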

54 Readers and writers solution  Does the previous solution have any problems?  is it “fair”?  can any threads be starved? If so, how could this be fixed?  … and how much confidence would you have in your solution?

55 Quiz  What is a race condition?  How can we protect against race conditions?  Can locks be implemented simply by reading and writing to a binary variable in memory?  How can a kernel make synchronization-related system calls atomic on a uniprocessor?  Why wouldn’t this work on a multiprocessor?  Why is it better to block rather than spin on a uniprocessor?  Why is it sometimes better to spin rather than block on a multiprocessor?

56 Quiz  When faced with a concurrent programming problem, what strategy would you follow in designing a solution?  What does all of this have to do with Operating Systems?