CGS 3763 Operating Systems Concepts Spring 2013


CGS 3763 Operating Systems Concepts Spring 2013 Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 11:30 AM - 12:30 PM

Lecture 28 – Monday, March 25, 2013
Last time:
- Deadlock detection
- Wait-for graphs
- Semaphores
Today:
- Monitors
- Atomicity
- Hardware support for atomicity
- Coordination with a bounded buffer
Next time:
- Storage models
Reading assignments: Chapters 6 and 7 of the textbook

Monitors
Semaphores can be used incorrectly:
- multiple threads may be allowed to enter the critical section guarded by the semaphore;
- misuse may cause deadlocks;
- threads may access the shared data directly, without checking the semaphore.
Solution: encapsulate the shared data together with the access methods that operate on it.
Monitor: an abstract data type that allows access to shared data only through specific methods that guarantee mutual exclusion.
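A minimal sketch of the monitor idea in C, assuming POSIX threads: the shared data and a mutex are bundled in one structure, and the data is reached only through functions that hold the mutex. The names (counter_monitor, cm_increment, cm_read) are illustrative, not from the slides.

#include <pthread.h>

/* Monitor-style encapsulation: the shared value is touched only
 * through functions that acquire the mutex first. */
typedef struct {
    pthread_mutex_t lock;
    long value;
} counter_monitor;

void cm_init(counter_monitor *m) {
    pthread_mutex_init(&m->lock, NULL);
    m->value = 0;
}

void cm_increment(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);      /* mutual exclusion enforced here */
    m->value++;
    pthread_mutex_unlock(&m->lock);
}

long cm_read(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);
    long v = m->value;
    pthread_mutex_unlock(&m->lock);
    return v;
}

Callers never see the lock or the raw value, so they cannot bypass the mutual-exclusion discipline, which is exactly the failure mode of a misused semaphore.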


Atomic operations
Concurrency control requires atomic operations.
Atomic operation: an operation consisting of multiple steps that must be executed without interruption; either all of the steps are executed or none of them.
All-or-nothing atomicity: a sequence of steps is an all-or-nothing action if, from the point of view of its invoker, the sequence either completes, or aborts in such a way that it appears the sequence had never been undertaken in the first place; that is, it backs out.
Before-or-after atomicity: actions whose effect, from the point of view of their invokers, is the same as if the actions occurred either completely before or completely after one another.
Atomicity requires hardware support in the form of special instructions.
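To see why ordinary code is not atomic, note that an increment of a shared variable compiles into a load, an add, and a store, and two threads can interleave those steps. A sketch, assuming POSIX threads; the final count usually ends up well below 2*N because interleaved increments overwrite each other.

#include <pthread.h>
#include <stdio.h>

#define N 1000000
long counter = 0;                   /* shared and unprotected */

void *worker(void *arg) {
    (void)arg;
    for (long i = 0; i < N; i++)
        counter++;                  /* three steps: load, add 1, store */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Lost updates: a store can overwrite an increment made by the
     * other thread between our load and our store. */
    printf("counter = %ld, expected %ld\n", counter, 2L * N);
    return 0;
}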


Hardware support for atomicity
It is not possible to implement atomic operations without some hardware support. Processors include in their instruction sets instructions for implementing atomic operations, such as:
- Compare and Swap
- Test and Set
- Read and Set Memory

Compare-and-swap instruction
Compare-and-swap (CMPSWP): an atomic instruction used in multithreading to achieve synchronization. It compares the contents of a memory location with a given value and, only if they are the same, modifies the contents of that memory location to a given new value; this is done as a single atomic operation.
We can use CMPSWP to implement a semaphore as follows:
- read the value in the memory location;
- add one to the value;
- use compare-and-swap to write the incremented value back;
- retry if the value seen by the compare-and-swap did not match the value we originally read.
Since the compare-and-swap occurs (or appears to occur) instantaneously, if another thread updates the location while we are in progress, the compare-and-swap is guaranteed to fail.
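A sketch of that retry loop in C11, where atomic_compare_exchange_weak stands in for the CMPSWP instruction; the name sem_up and the use of a plain counter (ignoring the blocking side of a semaphore) are simplifying assumptions.

#include <stdatomic.h>

/* Atomically increment the semaphore counter with a compare-and-swap
 * retry loop, following the steps listed above. */
void sem_up(atomic_int *count) {
    int old = atomic_load(count);            /* read the current value */
    for (;;) {
        int desired = old + 1;               /* add one to the value   */
        /* If *count still equals old, store desired and return.
         * Otherwise another thread updated it first: old is refreshed
         * with the current value and we retry. */
        if (atomic_compare_exchange_weak(count, &old, desired))
            return;
    }
}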

Test and Set
Test-and-set: an instruction that writes to a memory location and returns its old value as a single atomic (i.e., non-interruptible) operation. If multiple threads access the same memory location and one of them is currently performing a test-and-set, no other thread may begin another test-and-set until the first one is done.
A lock can be implemented using the test-and-set instruction:
function Lock(boolean *lock) { while (test_and_set(lock) == 1) ; /* spin */ }
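The same spin lock in standard C11, where atomic_flag_test_and_set is the library form of the instruction; the names acquire and release are illustrative.

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear means unlocked */

void acquire(atomic_flag *l) {
    /* test-and-set returns the old value: keep spinning while it was
     * already set, i.e. while some other thread holds the lock. */
    while (atomic_flag_test_and_set(l))
        ;                                   /* busy-wait */
}

void release(atomic_flag *l) {
    atomic_flag_clear(l);                   /* back to unlocked */
}

/* Usage:
 *     acquire(&lock_flag);
 *     ... critical section ...
 *     release(&lock_flag);
 */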

Read and Set Memory (RSM) instruction

What if the locking is not atomic?
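If the locking is not atomic, i.e. the lock is built from an ordinary test followed by an ordinary store, there is a window between the two steps in which another thread can also see the lock as free. A deliberately broken, hypothetical acquire that illustrates the problem:

int lock = 0;                     /* 0 = free, 1 = held; not atomic */

void broken_acquire(void) {
    while (lock == 1)             /* step 1: test                   */
        ;                         /* wait until the lock looks free */
    lock = 1;                     /* step 2: set, but too late      */
    /* Threads A and B can both pass the test while lock == 0, then
     * both perform the store and both "hold" the lock at once.     */
}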

Thread coordination with a bounded buffer
Producer-consumer problem: two threads cooperate; the producer writes into a buffer and the consumer reads from the buffer.
Basic assumptions:
- we have only two threads;
- the threads proceed concurrently, at independent speeds/rates;
- the buffer is bounded: only N buffer cells;
- messages are of fixed size and occupy exactly one buffer cell.
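A sketch of the single-producer/single-consumer bounded buffer in C. It relies on the implicit assumptions listed on the slide after next (only the producer updates in, only the consumer updates out, spinning is acceptable, the counters never overflow, and stores become visible in program order); the names send and receive are illustrative.

#define N 16                              /* number of buffer cells */

typedef struct {
    long in;                              /* messages written; updated only by the producer */
    long out;                             /* messages read; updated only by the consumer    */
    int  cell[N];                         /* fixed-size messages, one per cell              */
} bounded_buffer;

void send(bounded_buffer *b, int msg) {   /* producer */
    while (b->in - b->out == N)           /* buffer full: spin */
        ;
    b->cell[b->in % N] = msg;
    b->in = b->in + 1;                    /* publish the message */
}

int receive(bounded_buffer *b) {          /* consumer */
    while (b->in == b->out)               /* buffer empty: spin */
        ;
    int msg = b->cell[b->out % N];
    b->out = b->out + 1;                  /* free the cell */
    return msg;
}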


Implicit assumptions for the correctness of the implementation
- One sending and one receiving thread; only one thread updates each shared variable.
- Sender and receiver threads run on different processors, so spin locks (busy waiting) are acceptable.
- in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
- The shared memory used for the buffer provides read/write coherence.
- The memory provides before-or-after atomicity for the shared variables in and out.
- The result of executing a statement becomes visible to all threads in program order; no compiler optimizations are applied.

Race condition affecting the pointers: both threads A and B increment the pointer "in" (the pointer to the cell where the data is written), so one of the two increments can be lost.


Storage models
- Cell storage
- Journal storage

Desirable properties of cell storage

Asynchronous events and signals
Signals, or software interrupts, were originally introduced in Unix to notify a process of the occurrence of a particular event in the system.
Signals are analogous to hardware I/O interrupts:
- when a signal arrives, control abruptly switches to the signal handler;
- when the handler finishes and returns, control goes back to where it came from.
After receiving a signal, the receiver reacts to it in a well-defined manner; that is, a process can tell the system (OS) what it wants to happen when a signal arrives:
- Ignore it.
- Catch it. In this case the process must specify (register) a signal-handling procedure. The procedure resides in user space; the kernel calls it while handling the signal, and control returns to the kernel when it is done.
- Kill the process (the default for most signals).
Examples: a child's exit sends a signal to the parent; a control key pressed at the keyboard sends a signal to the process.
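A minimal sketch of catching a signal with the POSIX sigaction interface: the process registers a user-space handler for SIGINT (the keyboard control signal) instead of accepting the default action. The handler and flag names are illustrative.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

/* The handler runs in user space; the kernel transfers control here
 * when the signal is delivered, and execution later resumes where
 * the process was interrupted. */
static void on_sigint(int signo) {
    (void)signo;
    got_sigint = 1;
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sigint;         /* catch it: register the handler */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    while (!got_sigint)
        pause();                       /* sleep until a signal arrives */
    printf("caught SIGINT\n");
    return 0;
}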

Solutions to thread coordination problems must satisfy a set of conditions:
- Safety: the required condition will never be violated.
- Liveness: the system should eventually make progress, irrespective of contention.
- Freedom from starvation: no process should be denied progress forever; that is, every process should make progress in a finite time.
- Bounded wait: every process is assured of no more than a fixed number of overtakes by other processes in the system before it makes progress.
- Fairness: dependent on the scheduling algorithm.
  - FIFO: no process will ever overtake another process.
  - LRU: the process which received service least recently gets service next.
For example, for the mutual exclusion problem the solution should guarantee that:
- Safety: the mutual exclusion property is never violated.
- Liveness: a thread will access the shared resource in a finite time.
- Freedom from starvation: a thread will access the shared resource in a finite time.
- Bounded wait: a thread will access the shared resource after at most a fixed number of accesses by other threads.

Thread coordination problems
- Dining philosophers
- Critical section

A solution to the critical-section problem
Applies only to two threads Ti and Tj, with i, j ∈ {0, 1}, which share:
- an integer turn: if turn == i, then it is Ti's turn to enter the critical section;
- boolean flag[2]: if flag[i] == TRUE, then Ti is ready to enter the critical section.
To enter the critical section, thread Ti:
- sets flag[i] = TRUE;
- sets turn = j.
If both threads want to enter, turn ends up with a value of either i or j, and the corresponding thread enters the critical section. Ti enters the critical section only if either flag[j] == FALSE or turn == i.
The solution is correct:
- mutual exclusion is guaranteed;
- liveness is ensured;
- the bounded-waiting requirement is met.
However, this solution may not work on modern computer architectures, where load and store instructions are not guaranteed to execute atomically and in program order.
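The protocol described above is Peterson's algorithm; a sketch in C for threads 0 and 1. The volatile qualifiers are only a hint; as the last bullet notes, real hardware needs atomic operations or memory fences for this reasoning to hold.

#include <stdbool.h>

/* Peterson's solution for two threads, with j = 1 - i.
 * Assumes loads and stores are atomic and executed in program order. */
volatile bool flag[2] = { false, false };  /* flag[i]: Ti is ready to enter */
volatile int  turn    = 0;                 /* whose turn it is to enter     */

void enter_critical_section(int i) {
    int j = 1 - i;
    flag[i] = true;                        /* I am ready to enter            */
    turn = j;                              /* but let the other thread first */
    while (flag[j] && turn == j)           /* wait while Tj is ready and it  */
        ;                                  /* is Tj's turn                   */
}

void leave_critical_section(int i) {
    flag[i] = false;                       /* no longer interested */
}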


Signals: state and implementation
A signal goes through the following states:
- Signal sent: a process can send a signal to one of the processes in its group (parent, sibling, children, and further descendants).
- Signal delivered: the signal bit is set.
- Signal pending: delivered but not yet received (no action has been taken yet).
- Signal lost: either ignored or overwritten.
Implementation: each process has a kernel-space structure (created by default) called the signal descriptor, with one bit per signal. Setting a bit delivers the signal; resetting the bit indicates that the signal has been received. A signal can be blocked/ignored; this requires an additional bit for each signal. Most signals are system-controlled signals.

Locks; Before-or-After actions
Lock: a shared variable which acts as a flag to coordinate access to shared data. It is manipulated with two primitives:
- ACQUIRE
- RELEASE
Locks support the implementation of before-or-after actions: only one thread can acquire the lock, the others have to wait. All threads must obey the convention regarding the locks.
The two operations ACQUIRE and RELEASE must be atomic. Hardware support for the implementation of locks:
- RSM - Read and Set Memory
- CMPSWP - Compare and Swap
RSM(mem):
- if mem = LOCKED, RSM returns r = LOCKED and leaves mem = LOCKED;
- if mem = UNLOCKED, RSM returns r = UNLOCKED and sets mem = LOCKED.
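A sketch of ACQUIRE and RELEASE built on an RSM-like primitive, assuming C11 atomics; atomic_exchange plays the role of RSM, returning the old value while unconditionally storing LOCKED.

#include <stdatomic.h>

#define UNLOCKED 0
#define LOCKED   1

typedef atomic_int lock_t;

void ACQUIRE(lock_t *mem) {
    /* RSM(mem): atomically return the old value and set mem = LOCKED.
     * Retry until the old value was UNLOCKED, i.e. we obtained the lock. */
    while (atomic_exchange(mem, LOCKED) == LOCKED)
        ;                                   /* spin: another thread holds it */
}

void RELEASE(lock_t *mem) {
    atomic_store(mem, UNLOCKED);            /* let a waiting thread proceed */
}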