
1 Previous Lecture Review
- Concurrently executing threads often share data structures.
- If multiple threads are allowed to access shared data structures unhindered, race conditions may occur.
- To protect a shared data structure from race conditions, the threads' access to it should be mutually exclusive (MX).
- MX may be implemented in software:
  - for two threads - Peterson's algorithm
  - for multiple threads - the bakery algorithm

2 Lecture 10: Semaphores
- definition of a semaphore
- using semaphores for MX
- semaphore solutions for common concurrency problems:
  - producer/consumer
  - readers/writers
  - dining philosophers
- implementation of semaphores:
  - using spinlocks
  - using test-and-set instructions
  - semaphores without busy waiting
- evaluation of semaphores

3 Semaphores — OS Support for MX
- Semaphores were invented by Dijkstra in 1965 and can be thought of as a generalized locking mechanism.
- A semaphore supports two atomic operations, P/wait and V/signal. Atomicity means that no two P or V operations on the same semaphore can overlap.
  - The semaphore is initialized to 1.
  - Before entering the critical section, a thread calls "P(semaphore)", or sometimes "wait(semaphore)".
  - After leaving the critical section, a thread calls "V(semaphore)", or sometimes "signal(semaphore)".

4 Semaphores — Details
- P and V manipulate an integer variable whose value is initially 1.
- Before entering the critical section, a thread calls "P(s)" or "wait(s)":

    wait(s):
      s = s - 1
      if (s < 0) block the thread that called wait(s) on a queue associated with semaphore s
      otherwise let the thread that called wait(s) continue into the critical section

- After leaving the critical section, a thread calls "V(s)" or "signal(s)":

    signal(s):
      s = s + 1
      if (s <= 0) wake up one of the threads that called wait(s), and run it so that it can continue into the critical section

- Bounded wait condition (not in the original specification): if signal is executed repeatedly, each individual blocked thread is eventually woken up.

5 Using Semaphores for MX

  t1() {
    while (true) {
      wait(s);
      /* CS */
      signal(s);
      /* non-CS */
    }
  }

  t2() {
    while (true) {
      wait(s);
      /* CS */
      signal(s);
      /* non-CS */
    }
  }

- The semaphore s is used to protect the critical section CS.
- Before entering CS a thread executes wait(s); by definition of wait it:
  - decrements s
  - checks whether s is less than 0; if it is, the thread is blocked; if not, the thread proceeds into CS, excluding the other thread from reaching it
- After executing CS the thread does signal(s); by definition of signal it:
  - increments s
  - checks whether s <= 0; if it is, the other thread is woken up
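
As a concrete illustration of this pattern (an addition, not part of the original slides), here is a minimal runnable sketch using POSIX semaphores (sem_t). The worker/counter names and the iteration count are illustrative assumptions; compile with gcc -pthread.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  static sem_t s;            /* binary semaphore protecting the critical section */
  static long counter = 0;   /* shared data updated inside the critical section  */

  static void *worker(void *arg)
  {
      for (int i = 0; i < 100000; i++) {
          sem_wait(&s);      /* P(s): enter the critical section */
          counter++;         /* CS: a lost-update race without s */
          sem_post(&s);      /* V(s): leave the critical section */
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      sem_init(&s, 0, 1);    /* initial value 1 gives mutual exclusion */
      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter = %ld (expected 200000)\n", counter);
      return 0;
  }

Removing the sem_wait/sem_post pair typically makes the final count fall short of 200000, which is exactly the race condition reviewed on slide 1.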

6 Semaphore Values
- Semaphores (again):

    wait(s):
      s = s - 1
      if (s < 0) block the thread that called wait(s)
      otherwise continue into CS

    signal(s):
      s = s + 1
      if (s <= 0) wake up & run one of the waiting threads

- Semaphore values:
  - Positive semaphore = number of (additional) threads that can be allowed into the critical section
  - Negative semaphore = number of threads blocked (note — there's also one in CS)
  - Binary semaphore has an initial value of 1
  - Counting semaphore has an initial value greater than 1
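
To illustrate the counting-semaphore case (an addition to the slide), the sketch below initializes a POSIX semaphore to 3 so that at most three of five threads hold a "slot" at the same time; the slots/user names and the sleep are made up for the example.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>
  #include <unistd.h>

  static sem_t slots;                 /* counting semaphore, initial value 3 */

  static void *user(void *arg)
  {
      long id = (long)arg;
      sem_wait(&slots);               /* value > 0: take a slot; otherwise block */
      printf("thread %ld acquired a slot\n", id);
      sleep(1);                       /* pretend to use the resource */
      printf("thread %ld released its slot\n", id);
      sem_post(&slots);               /* free the slot, possibly waking a waiter */
      return NULL;
  }

  int main(void)
  {
      pthread_t t[5];
      sem_init(&slots, 0, 3);
      for (long i = 0; i < 5; i++)
          pthread_create(&t[i], NULL, user, (void *)i);
      for (int i = 0; i < 5; i++)
          pthread_join(t[i], NULL);
      return 0;
  }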

7 "Too much milk" with semaphores

  Thread A              Thread B
  --------              --------
  P(fridge);            P(fridge);
  if (noMilk) {         if (noMilk) {
    buy milk;             buy milk;
    noMilk = false;       noMilk = false;
  }                     }
  V(fridge);            V(fridge);

- "fridge" is a semaphore initialized to 1; noMilk is a shared variable.

  Execution:
  After            s    queue   A            B
  (initially)      1
  A: P(fridge);    0            in CS
  B: P(fridge);   -1    B       in CS        waiting
  A: V(fridge);    0            finishes     ready, in CS
  B: V(fridge);    1                         finishes

8 Producer/consumer problem
- A bounded buffer buff holds items added by the producer and removed by the consumer. The producer blocks when the buffer is full; the consumer blocks when it is empty.
- This variant: single producer, single consumer.
  - p - item generated by the producer
  - c - item utilized by the consumer
  - empty - if open, the producer may proceed
  - full - if open, the consumer may proceed

  int p, c, buff[B], front=0, rear=0;
  semaphore empty(B), full(0);

  producer() {
    while (true) {
      /* produce p */
      wait(empty);
      buff[rear] = p;
      rear = (rear+1) % B;
      signal(full);
    }
  }

  consumer() {
    while (true) {
      wait(full);
      c = buff[front];
      front = (front+1) % B;
      signal(empty);
      /* consume c */
    }
  }
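
A hedged, runnable rendering of the same single-producer/single-consumer scheme with POSIX semaphores follows; the capacity B = 8, the item count of 32, and main() are assumptions made for the example, not part of the lecture code. No mutex is needed because only the producer touches rear and only the consumer touches front.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  #define B 8                          /* buffer capacity */

  static int buff[B], front = 0, rear = 0;
  static sem_t empty_slots;            /* counts free slots, starts at B   */
  static sem_t full_slots;             /* counts filled slots, starts at 0 */

  static void *producer(void *arg)
  {
      for (int p = 0; p < 32; p++) {   /* produce items 0..31 */
          sem_wait(&empty_slots);      /* block if the buffer is full */
          buff[rear] = p;
          rear = (rear + 1) % B;
          sem_post(&full_slots);       /* one more item available */
      }
      return NULL;
  }

  static void *consumer(void *arg)
  {
      for (int i = 0; i < 32; i++) {
          sem_wait(&full_slots);       /* block if the buffer is empty */
          int c = buff[front];
          front = (front + 1) % B;
          sem_post(&empty_slots);      /* one more free slot */
          printf("consumed %d\n", c);  /* consume c */
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t prod, cons;
      sem_init(&empty_slots, 0, B);
      sem_init(&full_slots, 0, 0);
      pthread_create(&prod, NULL, producer, NULL);
      pthread_create(&cons, NULL, consumer, NULL);
      pthread_join(prod, NULL);
      pthread_join(cons, NULL);
      return 0;
  }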

9 Readers/writers problem
- Readers and writers perform operations concurrently on a certain item.
- Writers cannot access the item concurrently with anyone else; readers can overlap with each other.
  - readcount - number of readers wishing to access / accessing the item
  - mutex - protects manipulation of readcount
  - wrt - a writer can get to the item if open
- Two versions of this problem:
  - readers preference - if a reader wants to get to the item, writers wait
  - writers preference - if a writer wants to get to the item, readers wait
- Which version is this code?

  int readcount = 0;
  semaphore wrt(1), mutex(1);

  writer() {
    wait(wrt);
    /* perform write */
    signal(wrt);
  }

  reader() {
    wait(mutex);
    if (readcount == 0) wait(wrt);
    readcount++;
    signal(mutex);
    /* perform read */
    wait(mutex);
    readcount--;
    if (readcount == 0) signal(wrt);
    signal(mutex);
  }
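
The same reader/writer logic can be tried out directly; the sketch below mirrors the slide's code with POSIX semaphores, and the thread counts, the shared item variable, and main() are illustrative additions.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  static int readcount = 0;
  static int item = 0;                  /* the shared item */
  static sem_t wrt, mutex;

  static void *writer(void *arg)
  {
      sem_wait(&wrt);                   /* a writer excludes everyone */
      item++;                           /* perform write */
      sem_post(&wrt);
      return NULL;
  }

  static void *reader(void *arg)
  {
      sem_wait(&mutex);                 /* protect readcount */
      if (readcount == 0)
          sem_wait(&wrt);               /* first reader locks out writers */
      readcount++;
      sem_post(&mutex);

      printf("read item = %d\n", item); /* perform read; readers may overlap */

      sem_wait(&mutex);
      readcount--;
      if (readcount == 0)
          sem_post(&wrt);               /* last reader lets writers in */
      sem_post(&mutex);
      return NULL;
  }

  int main(void)
  {
      pthread_t r[3], w;
      sem_init(&wrt, 0, 1);
      sem_init(&mutex, 0, 1);
      for (int i = 0; i < 3; i++)
          pthread_create(&r[i], NULL, reader, NULL);
      pthread_create(&w, NULL, writer, NULL);
      for (int i = 0; i < 3; i++)
          pthread_join(r[i], NULL);
      pthread_join(w, NULL);
      return 0;
  }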

10 Dining philosophers problem
- The problem was first defined and solved by Dijkstra in 1972: five philosophers sit at a table and alternate between thinking and eating from a bowl of spaghetti in the middle of the table. There are five forks, and a philosopher needs 2 forks to eat. Picking up and laying down a fork is an atomic operation. Philosophers can talk (share variables) only with their neighbors.
- Objective: design an algorithm to ensure that any "hungry" philosopher eventually eats.
- One solution: protect each fork with a semaphore.
- What's wrong with this solution?
  - there is a possibility of deadlock
  - fix: make odd philosophers pick even forks first (sketched below)
  - can we use the bakery algorithm?

  semaphore fork[5](1);

  philosopher(int i) {
    while (true) {
      wait(fork[i]);
      wait(fork[(i+1) % 5]);
      /* eat */
      signal(fork[i]);
      signal(fork[(i+1) % 5]);
      /* think */
    }
  }
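
The fix mentioned above ("make odd philosophers pick even forks first") can be sketched as follows; this is one possible reading of that hint, in which odd-numbered philosophers acquire their forks in the opposite order, so the circular wait can never close. The fork_sem array, the round count, and the printouts are illustrative additions.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  #define N 5

  static sem_t fork_sem[N];            /* one binary semaphore per fork */

  static void *philosopher(void *arg)
  {
      long i = (long)arg;
      int first  = (i % 2 == 0) ? i : (i + 1) % N;   /* even: own fork first     */
      int second = (i % 2 == 0) ? (i + 1) % N : i;   /* odd: even-numbered first */

      for (int round = 0; round < 3; round++) {
          sem_wait(&fork_sem[first]);
          sem_wait(&fork_sem[second]);
          printf("philosopher %ld eats\n", i);       /* eat */
          sem_post(&fork_sem[first]);
          sem_post(&fork_sem[second]);
          /* think */
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t[N];
      for (int i = 0; i < N; i++)
          sem_init(&fork_sem[i], 0, 1);
      for (long i = 0; i < N; i++)
          pthread_create(&t[i], NULL, philosopher, (void *)i);
      for (int i = 0; i < N; i++)
          pthread_join(t[i], NULL);
      return 0;
  }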

11 Barrier Synchronization Problem
- barrier - a designated point in a concurrent program
- two-thread barrier synchronization - a thread can leave the barrier only when the other thread arrives at it
- what's the code for the second process?

  semaphore b1(0), b2(0);

  process1() {
    /* code before barrier */
    signal(b1);
    wait(b2);
    /* code after barrier */
  }

  process2() {
    /* code before barrier */
    signal(b2);
    wait(b1);
    /* code after barrier */
  }
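
For completeness, here is a runnable version of this two-thread barrier with POSIX semaphores; it mirrors the slide's process1/process2, with the thread setup and printouts added only for illustration.

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  static sem_t b1, b2;                 /* both initialized to 0 */

  static void *process1(void *arg)
  {
      printf("process 1: before barrier\n");
      sem_post(&b1);                   /* announce arrival */
      sem_wait(&b2);                   /* wait for the other thread */
      printf("process 1: after barrier\n");
      return NULL;
  }

  static void *process2(void *arg)
  {
      printf("process 2: before barrier\n");
      sem_post(&b2);
      sem_wait(&b1);
      printf("process 2: after barrier\n");
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      sem_init(&b1, 0, 0);
      sem_init(&b2, 0, 0);
      pthread_create(&t1, NULL, process1, NULL);
      pthread_create(&t2, NULL, process2, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      return 0;
  }

Whatever the interleaving, neither "after barrier" line can appear before both "before barrier" lines have been printed.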

12 Two versions of Semaphores
- Semaphores from last time (simplified):

    wait(s):
      s = s - 1
      if (s < 0) block the thread that called wait(s)
      otherwise continue into CS

    signal(s):
      s = s + 1
      if (s <= 0) wake up one of the waiting threads

- "Classical" version of semaphores:

    wait(s):
      if (s <= 0) block the thread that called wait(s)
      s = s - 1
      continue into CS

    signal(s):
      if (a thread is waiting) wake up one of the waiting threads
      s = s + 1

- Do both work? What is the difference?

13 Implementing semaphores: busy waiting (spinlocks)
- Idea: inside wait, continuously check the semaphore variable (spin) until unblocked.

  wait(semaphore s) {
    while (s->value <= 0)
      ; /* do nothing */
    s->value--;
  }

  signal(semaphore s) {
    s->value++;
  }

- Problems:
  - wait and signal operations are not atomic
  - wasteful
    - how will this code behave on uniprocessor/multiprocessor machines?
  - does not support the bounded wait condition

14 Implementing Semaphores: Spinlocks, Updated

  wait(semaphore s) {
    /* disable interrupts */
    while (s->value <= 0)
      ; /* do nothing */
    s->value--;
    /* enable interrupts */
  }

  signal(semaphore s) {
    /* disable interrupts */
    s->value++;
    /* enable interrupts */
  }

- Problems:
  - can interfere with timer interrupts; how will this affect uniprocessor/multiprocessor execution?
  - still no bounded wait

15 Read-modify-write (RMW) Instructions
- RMW instructions atomically read a value from memory, modify it, and write the new value back to memory:
  - Test&set — on most CPUs
  - Exchange — Intel x86 — swaps values between a register and memory
  - Compare&swap — Motorola 68xxx — read a value; if it matches the value in register r1, exchange register r1 and the value
  - Compare,compare&swap - SPARC
- RMW is not provided by "pure" RISC processors!

  /* test&set, expressed in C (executed atomically by the hardware) */
  bool testnset(bool *i) {
    if (*i == FALSE) {
      *i = TRUE;
      return(FALSE);
    } else
      return(TRUE);
  }
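
In portable C, a test&set-style RMW operation is available as C11's atomic_flag_test_and_set(); the sketch below (an addition to the slide) uses it to build a simple spinlock, with the worker threads and the counter added only to exercise it.

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear means the lock is free */
  static long counter = 0;

  static void spin_lock(void)
  {
      /* atomically set the flag and get its previous value:
         false means we just acquired the lock, true means keep spinning */
      while (atomic_flag_test_and_set(&lock))
          ;                                     /* busy wait */
  }

  static void spin_unlock(void)
  {
      atomic_flag_clear(&lock);                 /* release the lock */
  }

  static void *worker(void *arg)
  {
      for (int i = 0; i < 100000; i++) {
          spin_lock();
          counter++;                            /* critical section */
          spin_unlock();
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter = %ld (expected 200000)\n", counter);
      return 0;
  }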

16 Semaphores using hardware support (this is a partial implementation)

  bool lock = false;

  wait(semaphore s) {
    while (testnset(&lock))
      ; /* do nothing */
    while (s->value <= 0)
      ; /* do nothing */
    s->value--;
    lock = false;
  }

  signal(semaphore s) {
    while (testnset(&lock))
      ; /* do nothing */
    s->value++;
    lock = false;
  }

- If the lock is free (lock == false), test&set atomically:
  - reads false, sets the value to true, and returns false
  - the loop test fails, meaning the lock is now busy
- If the lock is busy, test&set atomically:
  - reads true and returns true
  - the loop test is true, so the loop continues until someone releases the lock
- Why is this implementation incomplete? The semaphore condition is checked inside the locked region.
- Adv: ensures atomicity of the operation
- Dis:
  - does not support bounded wait
  - there is a problem with excessive use of test&set (more in the next lecture)

17 Semaphores with Bounded Wait and (Almost) No Busy Waiting (Sleeplocks)

  wait(s) {
    lock(s->l);
    s->value--;
    if (s->value < 0) {
      enqueue(s->q, ct);
      unlock(s->l);
      block(ct);
    } else
      unlock(s->l);
  }

  signal(s) {
    thread *t;
    lock(s->l);
    s->value++;
    if (s->value <= 0) {
      t = dequeue(s->q);
      wakeup(t);
    }
    unlock(s->l);
  }

- Variables:
  - ct - pointer to the current thread
  - s - pointer to the semaphore
  - value - semaphore value
  - q - queue of blocked threads waiting for the semaphore
  - l - spinlock
  - block() - blocks the current thread
  - wakeup() - wakes up a thread
  - lock() - locks the spinlock
  - unlock() - unlocks the spinlock
- Adv: atomic; no busy waiting; supports bounded wait
- Dis: requires a context switch (more on that in the next lecture)
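
For comparison (not the kernel implementation shown above), a user-level blocking semaphore can be built from a pthread mutex and condition variable. It follows the "classical" semantics of slide 12 (the value never goes negative), and the names my_sem_t, my_sem_wait, and my_sem_signal are made up for this sketch.

  #include <pthread.h>
  #include <stdio.h>

  typedef struct {
      int value;                       /* semaphore value            */
      pthread_mutex_t lock;            /* protects value             */
      pthread_cond_t  nonzero;         /* blocked threads sleep here */
  } my_sem_t;

  static void my_sem_init(my_sem_t *s, int value)
  {
      s->value = value;
      pthread_mutex_init(&s->lock, NULL);
      pthread_cond_init(&s->nonzero, NULL);
  }

  static void my_sem_wait(my_sem_t *s)
  {
      pthread_mutex_lock(&s->lock);
      while (s->value <= 0)                        /* nothing available: sleep */
          pthread_cond_wait(&s->nonzero, &s->lock);
      s->value--;
      pthread_mutex_unlock(&s->lock);
  }

  static void my_sem_signal(my_sem_t *s)
  {
      pthread_mutex_lock(&s->lock);
      s->value++;
      pthread_cond_signal(&s->nonzero);            /* wake one waiter, if any */
      pthread_mutex_unlock(&s->lock);
  }

  /* minimal usage: the semaphore acts as a mutex around counter */
  static my_sem_t mtx;
  static long counter = 0;

  static void *worker(void *arg)
  {
      for (int i = 0; i < 100000; i++) {
          my_sem_wait(&mtx);
          counter++;
          my_sem_signal(&mtx);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      my_sem_init(&mtx, 1);
      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter = %ld\n", counter);
      return 0;
  }

Unlike the spinlock versions, a thread that finds the semaphore unavailable goes to sleep in pthread_cond_wait rather than burning CPU, at the price of the context switch noted above.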

18 Convoys
- Frequent contention for (or coarse-grained locking of) semaphores leads to convoys!
- A convoy is a situation where a thread always has to wait on a semaphore.
- Example: file system access is protected by one "big" semaphore.
- Breaking a convoy-causing semaphore down into two sequential semaphores seldom helps - pipeline effect.

19 Avoiding convoys
- Convoys are eliminated by reprogramming the code so that access to the data protected by the convoy-causing semaphore can be done in parallel - stack up the pipes!
- Example: processes doing I/O on different pipes proceed in parallel.

20 Semaphores - evaluation
- Semaphores provide the first high-level synchronization abstraction that can be implemented efficiently in the OS.
  - this avoids ad hoc kernel synchronization techniques such as a non-preemptive kernel
  - they can be implemented on multiprocessors
- Problems:
  - programming with semaphores is error prone - the code is often cryptic
  - for signal and wait to be atomic on a multiprocessor architecture, low-level locking primitives (like a test&set instruction) need to be available
  - there is no means of finding out whether a thread will block on a semaphore