CS444/CS544 Operating Systems Synchronization 2/16/2006 Prof. Searleman

Outline
  Synchronization
NOTE:
  Return & discuss Quiz #1
  Lab#1 grades in AFS directories
  Read: Chapter 6
  HW#4 posted, due Friday, 2/17/06
  Exam#1: Wed. 3/01/06, 7 pm, SC162

Concurrency is a good thing
So far we have mostly been talking about constructs to enable concurrency:
  Multiple processes, inter-process communication
  Multiple threads in a process
Concurrency is critical to using the hardware devices to full capacity:
  Always something that needs to be running on the CPU, using each device, etc.
We don't want to restrict concurrency unless we absolutely have to.

Restricting Concurrency
When might we *have* to restrict concurrency?
  Some resource is so heavily utilized that no one gets any benefit from their small piece
    e.g. too many processes wanting to use the CPU (while (1) fork), "thrashing"
    Solution: Access control (Starvation?)
  Two processes/threads we would like to execute concurrently are going to access the same data
    e.g. one writing the data while the other is reading, or two writing over top of each other at the same time
    Solution: Synchronization (Deadlock?)
Synchronization primitives enable SAFE concurrency.

Correctness
Two concurrent processes/threads must be able to execute correctly with *any* interleaving of their instructions.
  Scheduling is not under the control of the application writer.
  Note: instructions != lines of code in a high-level programming language.
If two processes/threads operate on completely independent data, then there is no problem.
If the shared data/resources are read-only, there is also no problem.
If they share data that is modified, then the application programmer may need to introduce synchronization primitives to safely coordinate access to the shared data/resources.
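
A small sketch of why instructions != lines of code: the spelled-out steps below are illustrative (the one-line version compiles to an equivalent load/compute/store sequence), and the scheduler may run another thread between any two of them.

  /* Sketch only: a single C statement is not a single machine instruction. */
  int balance = 500;            /* shared */

  void withdraw_step(int amount) {
      int tmp = balance;        /* "load": copy the shared value to a local          */
      tmp = tmp - amount;       /* compute privately                                 */
      balance = tmp;            /* "store": only now does the update become visible  */
  }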

Illustrate the problem
Suppose we have multiple processes/threads sharing a database of bank account balances. Consider the deposit and withdraw functions:

  int withdraw(int account, int amount) {
    balance = readBalance(account);
    balance = balance - amount;
    updateBalance(account, balance);
    return balance;
  }

  int deposit(int account, int amount) {
    balance = readBalance(account);
    balance = balance + amount;
    updateBalance(account, balance);
    return balance;
  }

What happens if multiple threads execute these functions for the same account at the "same" time?
Notice this is not read-only access.

Example
Balance starts at $500 and then two processes withdraw $100 at the same time.
  Two people at different ATMs; the update runs on the same back-end computer at the bank.
What could go wrong? Different interleavings => different final balances!!!
Each thread executes the same code:

  int withdraw(int acct, int amount) {
    balance = readBalance(acct);
    balance = balance - amount;
    updateBalance(acct, balance);
    return balance;
  }

$500 - $100 - $100 = $400 !
If the second thread does readBalance before the first does updateBalance, one withdrawal is lost. Two example interleavings, both starting at $500 and ending (wrongly) at $400:

  Thread 1: balance = readBalance(account);    // reads $500
  Thread 1: balance = balance - amount;        // $400
  Thread 2: balance = readBalance(account);    // also reads $500
  Thread 2: balance = balance - amount;        // $400
  Thread 1: updateBalance(account, balance);   // writes $400
  Thread 2: updateBalance(account, balance);   // writes $400 again

  Thread 1: balance = readBalance(account);    // reads $500
  Thread 2: balance = readBalance(account);    // also reads $500
  Thread 1: balance = balance - amount;        // $400
  Thread 1: updateBalance(account, balance);   // writes $400
  Thread 2: balance = balance - amount;        // $400
  Thread 2: updateBalance(account, balance);   // writes $400 again

Before you get too happy, deposits can be lost just as easily!
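
For the curious, a minimal runnable sketch of this lost update using POSIX threads; the usleep, thread count, and names are mine, and readBalance/updateBalance are stood in for by a plain global (build with cc -pthread).

  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static int balance = 500;              /* shared account balance */

  static void *withdraw100(void *arg) {
      (void)arg;
      int local = balance;               /* readBalance            */
      usleep(1000);                      /* widen the race window  */
      local = local - 100;
      balance = local;                   /* updateBalance          */
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      pthread_create(&a, NULL, withdraw100, NULL);
      pthread_create(&b, NULL, withdraw100, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      /* the correct answer is 300; the race usually leaves 400 */
      printf("final balance = %d\n", balance);
      return 0;
  }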

Race condition
When the correct output depends on the scheduling or relative timing of operations, we call that a race condition.
  The output is non-deterministic.
To prevent this we need mechanisms for controlling access to shared resources.
  Enforce determinism.

Synchronization Required
Synchronization is required for all shared data structures:
  Shared databases (like the database of account balances)
  Global variables
  Dynamically allocated structures (off the heap): queues, lists, trees, etc.
  OS data structures: the run queue, the process table, ...
What are not shared data structures?
  Variables that are local to a procedure (on the stack)
  Other bad things happen if you try to share a pointer to a variable that is local to a procedure (see the sketch below)
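
The "other bad thing" with stack variables, sketched with hypothetical code (not from the slides): the variable disappears when the procedure returns, so any shared pointer to it dangles.

  /* Sharing a pointer to a procedure-local variable is broken even without
     concurrency: the variable lives in this call's stack frame and is gone
     once the procedure returns; using the pointer afterwards is undefined. */
  int *broken_share(void) {
      int local = 42;       /* allocated on the stack, for this call only */
      return &local;        /* dangling as soon as we return              */
  }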

Critical Section Problem
Model processes/threads as alternating between code that accesses shared data (the critical section) and code that does not (the remainder section):

  do {
    ENTRY SECTION
      critical section
    EXIT SECTION
      remainder section
  }

The ENTRY SECTION requests access to the shared data; the EXIT SECTION notifies of completion of the critical section.

Criteria for a Good Solution to the Critical Section Problem
Mutual Exclusion
  Only one process is allowed to be in its critical section at once.
  All other processes are forced to wait on entry.
  When one process leaves, another may enter.
Progress
  A process that is not in its critical section must not be able to stop another process from entering indefinitely.
  The decision of who will enter next can't be delayed indefinitely.
  Can't just give one process access; can't deny access to everyone.
Bounded Waiting
  After a process has made a request to enter its critical section, there should be a bound on the number of times other processes can enter their critical sections before that request is granted.

Algorithms
Silberschatz gives a nice progression of algorithms, each violating one of these criteria, before finally getting it right:
  Two-process solutions
  Generalized to multiple-process solutions
  Then expand on mutual exclusion
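
The classic two-process algorithm that progression builds toward is Peterson's solution; a sketch follows (assuming sequentially consistent memory, which real hardware only provides with the appropriate barriers or atomics).

  /* Peterson's algorithm for two threads, i = 0 or 1. */
  volatile int flag[2] = {0, 0};      /* flag[i] = 1: thread i wants to enter  */
  volatile int turn = 0;              /* which thread yields if both want in   */

  void enter_region(int i) {
      int j = 1 - i;                  /* the other thread                      */
      flag[i] = 1;                    /* announce interest                     */
      turn = j;                       /* give the other thread priority        */
      while (flag[j] && turn == j)    /* spin while the other thread wants in  */
          ;                           /*   and it is its turn                  */
  }

  void leave_region(int i) {
      flag[i] = 0;                    /* no longer interested                  */
  }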

Synchronization Primitives
Synchronization primitives are used to implement a solution to the critical section problem.
The OS uses hardware primitives:
  Disabling interrupts
  Hardware test-and-set
The OS exports primitives to user applications; user level can build more complex primitives from simpler OS primitives:
  Locks
  Semaphores
  Monitors
  Messages
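
As a sketch of how the hardware test-and-set primitive yields a lock, here is a spinlock built on C11's atomic_flag (my choice of API; the slides only name the instruction).

  #include <stdatomic.h>

  typedef struct { atomic_flag held; } spinlock_t;      /* the flag is the lock */
  #define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

  void spin_lock(spinlock_t *l) {
      /* test_and_set atomically sets the flag and returns its previous value;
         if the old value was "set", someone else holds the lock: keep spinning. */
      while (atomic_flag_test_and_set(&l->held))
          ;                                              /* busy-wait */
  }

  void spin_unlock(spinlock_t *l) {
      atomic_flag_clear(&l->held);     /* release: the next test_and_set sees "clear" */
  }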

Locks
A lock is an object with two simple operations: lock and unlock.
Threads use pairs of lock/unlock:
  lock before entering a critical section
  unlock upon exiting a critical section
If another thread is in its critical section (holding the lock), then lock will not return until the lock can be acquired.
Between lock and unlock, a thread "holds" the lock.
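
In POSIX threads the lock object is pthread_mutex_t; a minimal sketch of the pairing just described (the counter is only a stand-in for "shared data").

  #include <pthread.h>

  static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
  static int shared_count = 0;       /* some shared data */

  void bump(void) {
      pthread_mutex_lock(&m);        /* lock: does not return until we hold m       */
      shared_count++;                /* critical section                            */
      pthread_mutex_unlock(&m);      /* unlock: a waiting thread may now acquire m  */
  }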

Withdraw revisited

  int withdraw(int account, int amount) {
    lock(whichLock(account));          // ENTER CRITICAL SECTION
    balance = readBalance(account);    // critical section
    balance = balance - amount;        // critical section
    updateBalance(account, balance);   // critical section
    unlock(whichLock(account));        // EXIT CRITICAL SECTION
    return balance;
  }

What would happen if the programmer:
  forgot lock? No exclusive access.
  forgot unlock? Deadlock: the lock is never released, so everyone else waits forever.
  put it in the wrong place?
  called lock or unlock in both places?
Consider the locking granularity: one lock, or one lock per account? (One possible whichLock is sketched below.)
Is it OK for the return to be outside the critical section?
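
One possible shape for whichLock if we chose finer-grained locking; the bucket count and modulo hashing are purely my invention for illustration.

  #include <pthread.h>

  #define NLOCKS 64                                 /* hypothetical bucket count */
  static pthread_mutex_t acct_locks[NLOCKS];

  /* Call once at startup, before any withdraw/deposit runs. */
  void locks_init(void) {
      for (int i = 0; i < NLOCKS; i++)
          pthread_mutex_init(&acct_locks[i], NULL);
  }

  /* Accounts that hash to the same bucket share a lock. Finer granularity means
     more concurrency across accounts, but an operation touching two accounts
     (e.g. a transfer) must then take two locks, which raises deadlock questions. */
  pthread_mutex_t *whichLock(int account) {
      return &acct_locks[(unsigned)account % NLOCKS];
  }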

$500 - $100 - $100 = $300
With locks, the second thread's lock() blocks until the first thread unlocks, so the withdrawals are serialized and the balance ends at the correct $300:

  Thread 1: lock(whichLock(account));
  Thread 1: balance = readBalance(account);     // reads $500
  Thread 2: lock(whichLock(account));           // BLOCKS until Thread 1 unlocks
  Thread 1: balance = balance - amount;         // $400
  Thread 1: updateBalance(account, balance);    // writes $400
  Thread 1: unlock(whichLock(account));         // Thread 2's lock() now returns
  Thread 2: balance = readBalance(account);     // reads $400
  Thread 2: balance = balance - amount;         // $300
  Thread 2: updateBalance(account, balance);    // writes $300
  Thread 2: unlock(whichLock(account));

Implementing Locks
OK, so now we see that all is well *if* we have these objects called locks.
Next time: implementation.

Next time
  Other synchronization primitives
  Using synchronization primitives to solve some classic synchronization problems