Concurrency
Operating Systems, Fall 2002

Concurrency: pros and cons
- Concurrency is good for users
  - One of the reasons for multiprogramming
  - Working on the same problem, simultaneous execution of programs, background execution
- Concurrency is a "pain in the neck" for the system
  - Access to shared data structures
  - Deadlock due to resource contention
  - Enabling process interaction

Mutual Exclusion
- The OS is an instance of concurrent programming
  - Multiple activities may take place at the same time
- Concurrent execution of operations involving multiple steps is problematic
  - Example: updating a linked list
- Concurrent access to a shared data structure must be mutually exclusive

insert_after(current,new):

    new->next=current.next;
    current.next=new;

(figure: the list before and after linking new in after current)

remove_next(current):

    tmp=current.next;
    current.next=current.next.next;
    free(tmp);

(figure: the list before and after unlinking the node that follows current)

(figure: the states of the list, with current and new, when insert_after and remove_next interleave)
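To make the race concrete, here is a small C sketch (not from the slides; the node type and the mknode helper are illustrative) of the two operations, with a comment in main() tracing one interleaving that leaves a dangling pointer:

    /* Sketch of the two list operations; the comment in main() traces one
       interleaving that corrupts the list when run by two processes. */
    #include <stdio.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    static struct node *mknode(int v, struct node *next) {
        struct node *n = malloc(sizeof *n);
        n->value = v;
        n->next = next;
        return n;
    }

    /* link new_node right after current */
    void insert_after(struct node *current, struct node *new_node) {
        new_node->next = current->next;   /* step I1 */
        current->next  = new_node;        /* step I2 */
    }

    /* unlink and free the node right after current */
    void remove_next(struct node *current) {
        struct node *tmp = current->next; /* step R1 */
        current->next = tmp->next;        /* step R2 */
        free(tmp);                        /* step R3 */
    }

    int main(void) {
        struct node *head = mknode(0, mknode(1, mknode(2, NULL)));

        /* If two processes interleaved the steps as
               I1          (new_node->next = head->next)
               R1, R2, R3  (the old head->next is unlinked and freed)
               I2          (head->next = new_node)
           then new_node->next would point to freed memory.  Executed
           atomically, in either order, the list stays consistent. */
        insert_after(head, mknode(42, NULL));
        remove_next(head);

        for (struct node *p = head; p != NULL; p = p->next)
            printf("%d ", p->value);
        printf("\n");
        return 0;
    }

Run sequentially the program prints 0 1 2; the damage appears only when the steps of the two operations interleave.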

Atomic operations
- A generic solution is to execute an operation atomically
  - All the steps are perceived as executed at a single point in time
- That is, either

    insert_after(current,new)
    remove_next(current)

  or

    remove_next(current)
    insert_after(current,new)

The Critical Section Model
- Code within a critical section must be executed exclusively by a single process

    do {
        entry section
        critical section
        exit section
        remainder section
    } while(1);

Linked list example

    Process 1 (insert):
    do {
        entry section
        new->next=current.next;
        current.next=new;
        exit section
        remainder section
    } while(1);

    Process 2 (remove):
    do {
        entry section
        tmp=current.next;
        current.next=current.next.next;
        free(tmp);
        exit section
        remainder section
    } while(1);

The Critical Section Problem
- n processes P0, ..., Pn-1
- Processes communicate through shared atomic read/write variables
  - x is a shared variable, l is a local variable
  - Read (l := x): takes the current value of x
  - Write (x := v): assigns the provided value v to x

Requirements
- Mutual Exclusion: If process Pi is executing its CS, then no other process is in its CS
- Progress: If Pi is in its entry section and no process is in the CS, then some process eventually enters the CS
- Fairness: If no process remains in the CS forever, then each process requesting entry to the CS will eventually be let into the CS

Solving the CS problem (n=2)

Peterson's algorithm for n=2
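For reference, a minimal sketch of Peterson's two-process algorithm in C; the flag and turn names follow the textbook convention, and C11 atomics (with their default sequentially consistent ordering) are used because plain loads and stores may be reordered on modern hardware:

    /* Peterson's algorithm for two threads; each thread increments a shared
       counter inside its critical section. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    atomic_bool flag[2];   /* flag[i] = true: thread i wants to enter its CS */
    atomic_int  turn;      /* whose turn it is to wait                       */
    long counter;          /* shared data protected by the critical section  */

    void *worker(void *arg) {
        int i = (int)(long)arg, j = 1 - i;
        for (int k = 0; k < 1000000; k++) {
            /* entry section */
            atomic_store(&flag[i], true);
            atomic_store(&turn, j);
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                             /* busy wait */
            counter++;                        /* critical section */
            atomic_store(&flag[i], false);    /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, worker, (void *)0L);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }

Without the entry and exit sections the two increments would race and the final count would typically fall short of 2,000,000.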

Bakery algorithm of Lamport
- Critical section algorithm for any n > 1
- Each time a process requests entry to the CS, assign it a ticket which is
  - unique and monotonically increasing
- Let processes into the CS in the order of their ticket numbers

Choosing a ticket
- Taking the maximum of all current tickets plus one does not guarantee uniqueness!
  - Use process ids to break ties: order requests by the pair (ticket, process id)
- A process needs to know that somebody may have chosen a smaller number

Bakery algorithm for n processes
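A minimal C sketch of the bakery algorithm, assuming N = 4 threads; the choosing and number arrays and the lexicographic comparison on (ticket, id) follow the standard presentation:

    /* Lamport's bakery algorithm for N threads (C11 atomics keep the loads
       and stores from being reordered). */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define N 4

    atomic_bool choosing[N];  /* choosing[i]: thread i is picking a ticket */
    atomic_int  number[N];    /* number[i]: ticket of thread i (0 = none)  */
    long counter;             /* shared data protected by the algorithm    */

    /* lexicographic order on (ticket, id) breaks ties between equal tickets */
    static bool before(int ti, int i, int tj, int j) {
        return ti < tj || (ti == tj && i < j);
    }

    void *worker(void *arg) {
        int i = (int)(long)arg;
        for (int k = 0; k < 100000; k++) {
            /* entry: take a ticket larger than every ticket currently seen */
            atomic_store(&choosing[i], true);
            int max = 0;
            for (int j = 0; j < N; j++) {
                int t = atomic_load(&number[j]);
                if (t > max) max = t;
            }
            atomic_store(&number[i], max + 1);
            atomic_store(&choosing[i], false);

            for (int j = 0; j < N; j++) {
                while (atomic_load(&choosing[j]))
                    ;       /* wait until j has finished choosing */
                while (atomic_load(&number[j]) != 0 &&
                       before(atomic_load(&number[j]), j,
                              atomic_load(&number[i]), i))
                    ;       /* wait while j holds an earlier (ticket, id) */
            }

            counter++;                    /* critical section */
            atomic_store(&number[i], 0);  /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        for (long i = 0; i < N; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expected %d)\n", counter, N * 100000);
        return 0;
    }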

Correctness
- Lemma: if process Pi is in its CS and another process Pk has already chosen its ticket, then (number[i], i) < (number[k], k)
- Mutual exclusion is immediate from this lemma
- It is easy to show that Progress and Fairness hold as well (Exercise 47 in the notes :-)

Hardware primitives
- Elementary building blocks capable of performing certain steps atomically
- Should be universal, so that a variety of synchronization problems can be solved with them
- Several such primitives have been identified:
  - Test-and-set
  - Fetch-and-add
  - Compare-and-swap

Test-and-Set

    test-and-set(lock) {
        temp=lock;
        lock=1;
        return temp;
    }

    Shared int lock, initially 0

    do {
        while(test-and-set(lock))
            ;                      /* spin until the lock becomes free */
        critical section;
        lock=0;
        remainder section;
    } while(1);
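On current hardware the same pattern is usually written with an atomic exchange instruction; a minimal C sketch (the acquire/release helper names are illustrative):

    /* A test-and-set spinlock built on C11 atomic_exchange, which reads the
       old value and writes 1 in a single atomic step. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_int lock;   /* 0 = free, 1 = held */
    long counter;

    static void acquire(void) {
        while (atomic_exchange(&lock, 1))  /* test-and-set: returns old value  */
            ;                              /* spin while somebody else holds it */
    }

    static void release(void) {
        atomic_store(&lock, 0);
    }

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            acquire();
            counter++;                     /* critical section */
            release();
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }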

Higher level abstractions
- Atomic primitives bring convenience
- Using hardware primitives directly makes programs non-portable
- Higher level software abstractions include:
  - Semaphores
  - Monitors

Semaphores
- Invented by Edsger Dijkstra in 1968
- The interface consists of two primitives: P() and V()

Notes on the language
- Dutch: P: Proberen (to try), V: Verhogen (to increase)
- Hebrew: P: פחות (less), V: ועוד (and more)
- English: P() = wait(), V() = signal()

Semaphores: initial value
- The initial value of a semaphore indicates how many identical instances of the critical resource exist
- A semaphore initialized to 1 is called a mutex (mutual exclusion)

    P(mutex);
        critical section
    V(mutex);

Programming with semaphores
- Semaphores are a powerful programming abstraction
- Define a semaphore for each critical resource
  - E.g., one for each linked list
  - Granularity?
- Concurrent processes access the appropriate semaphores when synchronization is needed
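As an illustration of "one semaphore per linked list", a sketch using POSIX unnamed semaphores; the struct list wrapper is an assumption, not something from the slides:

    /* One semaphore per list, initialized to 1 and used as a mutex around
       the insert/remove operations from the earlier slides. */
    #include <semaphore.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    struct list {
        struct node *head;
        sem_t lock;                      /* the list's semaphore */
    };

    void list_init(struct list *l) {
        l->head = NULL;
        sem_init(&l->lock, 0, 1);        /* initial value 1 => mutual exclusion */
    }

    void insert_after(struct list *l, struct node *current, struct node *new_node) {
        sem_wait(&l->lock);              /* P(mutex) */
        new_node->next = current->next;
        current->next = new_node;
        sem_post(&l->lock);              /* V(mutex) */
    }

    void remove_next(struct list *l, struct node *current) {
        sem_wait(&l->lock);              /* P(mutex) */
        struct node *tmp = current->next;
        current->next = tmp->next;
        free(tmp);
        sem_post(&l->lock);              /* V(mutex) */
    }

Two lists get two independent semaphores, so operations on different lists never wait for each other; that is the granularity question from the slide.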

Implementing semaphores
- All the CS solutions so far imply busy waiting:
  - burning CPU cycles while being blocked
- The semaphore definition does not necessarily imply busy waiting!
  - Semaphores can be implemented efficiently by the system
  - P() explicitly tells the system: "Hey, I cannot proceed, you can preempt me"

Implementing semaphores
- Hence, in fact, a semaphore is a record (structure):

    type semaphore = record
        count: integer;
        queue: list of process
    end;
    var S: semaphore;

- When a process must wait for a semaphore S, it is blocked and put on the semaphore's queue
- The signal operation removes (according to a fair policy like FIFO) one process from the queue and puts it on the list of ready processes

Semaphore operations (atomic)

    P(S):
        S.count--;
        if (S.count < 0) {
            place this process in S.queue;
            block this process;
        }

    V(S):
        S.count++;
        if (S.count <= 0) {
            remove a process P from S.queue;
            place this process P on the ready list;
        }

- S.count must be initialized to a nonnegative value (depending on the application)
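A user-level sketch of these operations built on a POSIX mutex and condition variable (an assumed translation, not the slides' in-kernel implementation); here count never goes negative, and the condition variable's wait queue plays the role of S.queue:

    /* Blocking counting semaphore: P() sleeps instead of busy waiting. */
    #include <pthread.h>

    typedef struct {
        int count;
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;   /* signaled when a waiter may proceed */
    } semaphore;

    void semaphore_init(semaphore *s, int initial) {
        s->count = initial;        /* must be nonnegative */
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->nonzero, NULL);
    }

    void P(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        while (s->count <= 0)                      /* nothing available: block */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->count--;
        pthread_mutex_unlock(&s->lock);
    }

    void V(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        s->count++;
        pthread_cond_signal(&s->nonzero);          /* wake one blocked waiter */
        pthread_mutex_unlock(&s->lock);
    }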

The producer/consumer problem
- A producer process produces information that is consumed by a consumer process
- We need a buffer to hold items that are produced and eventually consumed
- A common paradigm for cooperating processes

P/C: unbounded buffer
- We first assume an unbounded buffer consisting of a linear array of elements
- in points to the next item to be produced
- out points to the next item to be consumed

P/C: unbounded buffer (solution)
- We need a semaphore S to enforce mutual exclusion on the buffer: only one process at a time can access the buffer
- We need another semaphore N to synchronize producer and consumer on the number N (= in - out) of items in the buffer
  - an item can be consumed only after it has been created

Solution of P/C: unbounded buffer

    Initialization: S.count:=1; N.count:=0; in:=out:=0;

    append(v): b[in]:=v; in++;
    take():    w:=b[out]; out++; return w;

    Producer:
    repeat
        produce v;
        P(S);
        append(v);
        V(S);
        V(N);
    forever

    Consumer:
    repeat
        P(N);
        P(S);
        w:=take();
        V(S);
        consume(w);
    forever

P/C: finite circular buffer of size k
- Can consume only when the number N of (consumable) items is at least 1 (now N != in - out)
- Can produce only when the number E of empty spaces is at least 1

P/C: finite circular buffer of size k
- As before:
  - we need a semaphore S to have mutual exclusion on buffer access
  - we need a semaphore N to synchronize producer and consumer on the number of consumable items
- In addition:
  - we need a semaphore E to synchronize producer and consumer on the number of empty spaces

Solution of P/C: finite circular buffer of size k

    Initialization: S.count:=1; N.count:=0; E.count:=k; in:=0; out:=0;

    append(v): b[in]:=v; in:=(in+1) mod k;
    take():    w:=b[out]; out:=(out+1) mod k; return w;

    Producer:
    repeat
        produce v;
        P(E);
        P(S);
        append(v);
        V(S);
        V(N);
    forever

    Consumer:
    repeat
        P(N);
        P(S);
        w:=take();
        V(S);
        V(E);
        consume(w);
    forever

(the P(S)/V(S) pairs bracket the critical sections)
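The same structure written with POSIX semaphores and threads as a runnable sketch; K, the int items, and the iteration counts are illustrative choices:

    /* Bounded-buffer producer/consumer with semaphores S (mutex), N (items)
       and E (empty slots), mirroring the pseudocode above. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define K 8

    int b[K], in, out;          /* circular buffer and its indices */
    sem_t S;                    /* mutual exclusion on the buffer  */
    sem_t N;                    /* number of items in the buffer   */
    sem_t E;                    /* number of empty slots           */

    void *producer(void *arg) {
        (void)arg;
        for (int v = 0; v < 100; v++) {
            sem_wait(&E);                         /* P(E): wait for an empty slot */
            sem_wait(&S);                         /* P(S) */
            b[in] = v; in = (in + 1) % K;         /* append(v) */
            sem_post(&S);                         /* V(S) */
            sem_post(&N);                         /* V(N): one more item */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 100; i++) {
            sem_wait(&N);                         /* P(N): wait for an item */
            sem_wait(&S);                         /* P(S) */
            int w = b[out]; out = (out + 1) % K;  /* take() */
            sem_post(&S);                         /* V(S) */
            sem_post(&E);                         /* V(E): one more empty slot */
            printf("consumed %d\n", w);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&S, 0, 1);
        sem_init(&N, 0, 0);
        sem_init(&E, 0, K);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }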

Monitors

    monitor monitor-name {
        shared variable declarations
        procedure P1(...) { ... }
        ...
        procedure Pn(...) { ... }
    }

- Only a single process at a time can be active within the monitor => other processes calling Pi() are queued
- Condition variables allow finer grained synchronization
  - x.wait() suspends execution until another process calls x.signal()

Monitor
- Waiting processes are either in the entrance queue or in a condition queue
- A process puts itself into condition queue cn by issuing cwait(cn)
- csignal(cn) brings into the monitor one process from the cn condition queue
- Hence csignal(cn) blocks the calling process and puts it in the urgent queue (unless csignal is the last operation of the monitor procedure)

Producer/Consumer problem
- Two types of processes: producers and consumers
- Synchronization is now confined within the monitor
- append(.) and take(.) are procedures within the monitor: they are the only means by which P/C can access the buffer
- If these procedures are correct, synchronization will be correct for all participating processes

    ProducerI:
    repeat
        produce v;
        Append(v);
    forever

    ConsumerI:
    repeat
        Take(v);
        consume v;
    forever

Monitor for the bounded P/C problem
- The monitor needs to hold the buffer:
  - buffer: array[0..k-1] of items;
- needs two condition variables:
  - notfull: csignal(notfull) indicates that the buffer is not full
  - notempty: csignal(notempty) indicates that the buffer is not empty
- needs buffer pointers and counts:
  - nextin: points to the next item to be appended
  - nextout: points to the next item to be taken
  - count: holds the number of items in the buffer

Monitor for the bounded P/C problem

    Monitor boundedbuffer:
        buffer: array[0..k-1] of items;
        nextin:=0, nextout:=0, count:=0: integer;
        notfull, notempty: condition;

        Append(v):
            if (count=k) cwait(notfull);
            buffer[nextin]:= v;
            nextin:= (nextin+1) mod k;
            count++;
            csignal(notempty);

        Take(v):
            if (count=0) cwait(notempty);
            v:= buffer[nextout];
            nextout:= (nextout+1) mod k;
            count--;
            csignal(notfull);
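C has no monitors, but a pthread mutex plus two condition variables express the same Append/Take logic; in this sketch while replaces the slide's if because pthread condition waits are not Hoare-style signal hand-offs and may wake spuriously:

    /* The bounded-buffer monitor translated to pthreads: the mutex plays the
       role of the monitor lock, notfull/notempty are the condition variables. */
    #include <pthread.h>

    #define K 8

    static int buffer[K], nextin, nextout, count;
    static pthread_mutex_t monitor  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  notfull  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;

    void Append(int v) {
        pthread_mutex_lock(&monitor);           /* enter the monitor */
        while (count == K)
            pthread_cond_wait(&notfull, &monitor);
        buffer[nextin] = v;
        nextin = (nextin + 1) % K;
        count++;
        pthread_cond_signal(&notempty);         /* csignal(notempty) */
        pthread_mutex_unlock(&monitor);         /* leave the monitor */
    }

    int Take(void) {
        pthread_mutex_lock(&monitor);
        while (count == 0)
            pthread_cond_wait(&notempty, &monitor);
        int v = buffer[nextout];
        nextout = (nextout + 1) % K;
        count--;
        pthread_cond_signal(&notfull);          /* csignal(notfull) */
        pthread_mutex_unlock(&monitor);
        return v;
    }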