1 CSE451 Basic Synchronization Autumn 2002 Gary Kimura Lecture #7 & 8 October 14 & 16, 2002.

2 Today
With all these threads running around sharing the same address space (i.e., memory), how do we keep them from mangling each other?
– With specific synchronization primitives
The next two lectures will cover “generic” synchronization routines found in many different operating systems
The third lecture will then cover the mix of synchronization routines found in Windows NT

3 Your job for week #3
Finish and turn in project #1
Readings in Silberschatz
– Chapter 7
Homework #3
– Out today, Monday October 14, 2002
– Due next Monday, October 21, 2002

4 Basic outline for covering synchronization
Introduce the issue
– Motivation and examples
The problem as seen through academic eyes
– Critical sections and locks
Typical solutions to the problem
– Disable interrupts
– Spinlocks
– Semaphores
– Monitors
Windows NT solutions

5 Synchronization (Motivation)
Threads cooperate in multithreaded programs
– to share resources and access shared data structures
  e.g., threads accessing a memory cache in a web server
– also, to coordinate their execution
  e.g., a disk reader thread hands off a block to a network writer
For correctness, we have to control this cooperation
– must assume threads interleave executions arbitrarily and at different rates
  scheduling is not under the application writer’s control
– we control cooperation using synchronization
  enables us to restrict the interleaving of executions
Note: this also applies to processes, not just threads
– and it also applies across machines in a distributed system

6 Do We Really Need to Synchronize?
Yes and no
You can build a system and run it for a long, long time before hitting a synchronization bug, and maybe your application or user doesn’t care (a.k.a. the PC mentality)
But for truly robust systems you need to synchronize your data structures to ensure their consistency

7 A simple example
Assume a global doubly linked queue with two fields, flink and blink (forward and backward links)
Here is some sample code to add to the queue:

    LIST_ENTRY Queue;
    NewEntry = new(…);
    NewEntry->flink = Queue.flink;
    NewEntry->blink = &Queue;
    NewEntry->flink->blink = NewEntry;
    NewEntry->blink->flink = NewEntry;

Let two threads execute the above code at the same time
Where’s the problem? The problem goes all the way down to the machine instructions

8 A very simple example
Even simple push and pop stack operations need synchronization:

    Push(s, I) { s->stack[++(s->index)] = I; }
    Pop(s)     { return (s->stack[(s->index)--]); }

Even ignoring stack limit tests, these routines need synchronization in a multi-threaded environment
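As a concrete sketch of the fix, here is a stack whose push and pop are wrapped in a lock, using Python's threading module. The class and worker names are illustrative, not from the lecture; the lecture's own lock primitives are introduced a few slides later.

```python
import threading

class SafeStack:
    """A stack whose push and pop are each one critical section,
    protected by a lock (hypothetical helper, not from the slides)."""
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def push(self, item):
        with self._lock:            # acquire; released automatically on exit
            self._items.append(item)

    def pop(self):
        with self._lock:
            return self._items.pop() if self._items else None

stack = SafeStack()

def worker():
    for i in range(1000):
        stack.push(i)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
size = len(stack._items)
print(size)  # 4000: no pushes were lost
```

Without the lock, the unsynchronized index update in the slide's Push/Pop can interleave so that two threads write the same slot and an item is lost.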

9 A classic example (the ATM)
Suppose each cash machine transaction is controlled by a separate process, and the withdraw code is:

    curr_balance = get_balance(acct_ID);
    withdraw_amt = read_amount_from_ATM();
    if withdraw_amt > curr_balance then error;
    curr_balance = curr_balance - withdraw_amt;
    put_balance(acct_ID, curr_balance);
    deliver_bucks(withdraw_amt);

Now, suppose that you and your s.o. share an account. You each go to separate cash machines and withdraw $100 from your balance of $1000.

10 The ATM example continued
you: curr_balance = get_balance(acct_ID)
you: withdraw_amt = read_amount()
you: curr_balance = curr_balance - withdraw_amt
    (context switch)
so:  curr_balance = get_balance(acct_ID)
so:  withdraw_amt = read_amount()
so:  curr_balance = curr_balance - withdraw_amt
so:  put_balance(acct_ID, curr_balance)
so:  deliver_bucks(withdraw_amt)
    (context switch)
you: put_balance(acct_ID, curr_balance)
you: deliver_bucks(withdraw_amt)
What happens and why?

11 The crux of the matter
The problem is that two concurrent threads (or processes) access a shared resource (the account) without any synchronization
– this creates a race condition
  the output is non-deterministic; it depends on timing
We need mechanisms for controlling access to shared resources in the face of concurrency
– so we can reason about the operation of programs
  essentially, re-introducing determinism
Synchronization is necessary for any shared data structure
– buffers, queues, lists, hash tables, …
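One way to see the fix for the ATM race: put the whole read-test-write sequence inside a single critical section. This is a minimal sketch in Python; the variable and function names mirror the slide's pseudocode but are otherwise invented for the example.

```python
import threading

balance = 1000
balance_lock = threading.Lock()

def withdraw(amount):
    """The entire get/test/put sequence runs under one lock,
    so no other thread can interleave between the steps."""
    global balance
    with balance_lock:
        cur = balance                # get_balance(acct_ID)
        if amount > cur:
            return False             # error: insufficient funds
        balance = cur - amount       # put_balance(acct_ID, ...)
    return True                      # deliver_bucks happens outside the lock

t1 = threading.Thread(target=withdraw, args=(100,))
t2 = threading.Thread(target=withdraw, args=(100,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # 800: both $100 withdrawals were applied exactly once
```

In the unsynchronized version on the previous slide, both withdrawals can read the same $1000 balance and the final balance ends up $900, losing one update.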

12 When are Resources Shared?
Local variables are not shared
– they refer to data on the stack, and each thread has its own stack
– never pass/share/store a pointer to a local variable on another thread’s stack
Global variables are shared
– stored in the static data segment, accessible by any thread
Dynamic objects are shared
– stored in the heap, shared if you can name them
  in C, you can conjure up the pointer, e.g. void *x = (void *) 0xDEADBEEF
  in Java, strong typing prevents this; you must pass references explicitly

13 Mutual Exclusion (start academic slant)
We want to use mutual exclusion to synchronize access to shared resources
Code that uses mutual exclusion to synchronize its execution is called a critical section
– only one thread at a time can execute in the critical section
– all other threads are forced to wait on entry
– when a thread leaves a critical section, another can enter

14 Critical Section Requirements
Critical sections have the following requirements
– mutual exclusion
  at most one thread is in the critical section
– progress
  if thread T is outside the critical section, then T cannot prevent thread S from entering the critical section
– bounded waiting (no starvation)
  if thread T is waiting on the critical section, then T will eventually enter the critical section
  (assumes threads eventually leave critical sections)
– performance
  the overhead of entering and exiting the critical section is small with respect to the work being done within it

15 Mechanisms for Building Critical Sections
Locks
– very primitive, minimal semantics; used to build others
Semaphores
– basic, easy to get the hang of, hard to program with
Monitors
– high level, require language support, implicit operations
– easy to program with; Java “synchronized()” as an example
Messages
– simple model of communication and synchronization based on (atomic) transfer of data across a channel
– direct application to distributed systems

16 Locks
A lock is an object (in memory) that provides the following two operations:
– acquire(): a thread calls this before entering a critical section
– release(): a thread calls this after leaving a critical section
Threads pair up calls to acquire() and release()
– between acquire() and release(), the thread holds the lock
– acquire() does not return until the caller holds the lock
– at most one thread can hold a lock at a time (usually)
So: what can happen if the calls aren’t paired? What about recursive programs that might try to acquire a lock more than once?
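The recursive-acquire question has a standard answer: a reentrant (recursive) lock, which counts acquisitions by the owning thread and requires a matching number of releases. A short sketch using Python's threading.RLock (the function names here are invented for illustration):

```python
import threading

# A reentrant lock lets the thread that already holds it acquire it
# again; a plain lock would deadlock on the second acquire.
rlock = threading.RLock()

def inner():
    with rlock:          # same thread re-enters without blocking
        return "ok"

def outer():
    with rlock:          # acquisition count goes 1 -> 2, then back down
        return inner()

result = outer()
print(result)  # ok
```

With an ordinary non-reentrant lock, outer() would block forever inside inner(), waiting on a lock it itself holds.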

17 Possible solutions to the synchronization problem
Avoid the problem by having only one thread do everything
– Is this really practical?
So some typical solutions are
– Disable interrupts
– Spinlocks
– Semaphores
– Monitors
Kernel mode versus user mode synchronization
– We need synchronization in both modes
– In the kernel most any trick is available for us to use
– In user mode our choices are a bit more limited (why?)
– So some synchronization methods are kernel mode only and some can be used in both modes

18 Disabling Interrupts
Can two threads disable interrupts simultaneously?
What’s wrong with interrupts?
– only available to the kernel (why? how can user level use this?)
– lousy for long critical sections
– insufficient on a multiprocessor
  back to atomic instructions
Typically only used to implement higher-level synchronization primitives

    struct lock { };

    void acquire(lock) {
        cli();    // disable interrupts
    }

    void release(lock) {
        sti();    // re-enable interrupts
    }

19 Spinlocks
How do we implement locks? Here’s one attempt:

    struct lock { int held = 0; };

    void acquire(lock) {
        while (lock->held)
            ;               // busy-wait for the lock to be released
        lock->held = 1;
    }

    void release(lock) {
        lock->held = 0;
    }

The caller “busy-waits”, or spins, for the lock to be released; hence the name spinlock
Why doesn’t this work? Where is the race condition?

20 Implementing locks (continued)
The problem is that the implementation of locks has critical sections, too!
– the acquire/release must be atomic
  atomic == executes as though it could not be interrupted
  code that executes “all or nothing”
Need help from the hardware
– atomic instructions
  test-and-set, compare-and-swap, …
– disable/re-enable interrupts
  to prevent context switches

21 Spinlocks redux: Test-and-Set
The CPU provides the following as one atomic instruction:

    bool test_and_set(bool *flag) {
        bool old = *flag;
        *flag = True;
        return old;
    }

So, to fix our broken spinlocks, do:

    struct lock { int held = 0; };

    void acquire(lock) {
        while (test_and_set(&lock->held))
            ;    // spin until we observe the flag was clear
    }

    void release(lock) {
        lock->held = 0;
    }
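The same acquire loop can be sketched in Python. Python exposes no raw test-and-set instruction, so this sketch uses Lock.acquire(blocking=False) as a stand-in: it atomically tests the flag and sets it, returning whether the caller won, which is exactly the shape of the slide's test_and_set loop (a simulation, not real hardware TAS).

```python
import threading

class SpinLock:
    """Spinlock sketch: acquire() spins until the atomic
    test-and-set stand-in reports the flag was previously clear."""
    def __init__(self):
        self._held = threading.Lock()    # plays the role of lock->held

    def acquire(self):
        while not self._held.acquire(blocking=False):
            pass                         # busy-wait (spin)

    def release(self):
        self._held.release()

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(5000):
        lock.acquire()
        counter += 1                     # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000: every increment survived
```

The spinning is as wasteful here as the next slide warns: a waiting thread burns CPU until the holder releases.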

22 Problems with spinlocks
Horribly wasteful!
– if a thread is spinning on a lock, the thread holding the lock cannot make progress
How did the lock holder yield the CPU in the first place?
– it called yield() or sleep()
– or took an involuntary context switch
We only want spinlocks as primitives to build higher-level synchronization constructs

23 Semaphores
semaphore = a synchronization primitive
– higher level than locks
– invented by Dijkstra in 1968, as part of the THE operating system
A semaphore is:
– a variable that is manipulated atomically through two operations, signal and wait
– wait(semaphore): decrement, block until semaphore is open
  also called P(), after the Dutch word for test; also called down()
– signal(semaphore): increment, allow another thread to enter
  also called V(), after the Dutch word for increment; also called up()

24 Semaphore Implementations
Associated with each semaphore is a count indicating the state of the semaphore:
– > 0 means the semaphore is free or available
– <= 0 means the semaphore is taken or in use
– < 0 means there are threads waiting for the semaphore (its absolute value is the number of waiters)
Also associated with each semaphore is a queue of waiting threads
If you execute wait and the semaphore is free, you continue; if not, you block on the waiting queue
A signal unblocks a thread if one is waiting

25 Semaphore Operations

    typedef struct _SEMAPHORE {
        int Value;
        List of waiting threads WaitList;
    } SEMAPHORE, *PSEMAPHORE;

    VOID Wait( PSEMAPHORE s ) {
        s->Value = s->Value - 1;
        if (s->Value < 0) {
            add this thread to s->WaitList;
            block current thread;
        }
    }

    VOID Signal( PSEMAPHORE s ) {
        s->Value = s->Value + 1;
        if (s->Value <= 0) {
            remove a thread T from s->WaitList;
            wakeup T;
        }
    }

Signal and Wait must be atomic
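The pseudocode above can be made runnable by building a semaphore from a lock plus a condition variable. Note one deliberate difference, flagged here as an assumption: this is the common variant whose count never goes negative, whereas the slide's version lets a negative count record the number of waiters.

```python
import threading

class Semaphore:
    """Counting semaphore built from a condition variable, which
    bundles the lock and the wait list (Value never goes negative
    in this variant, unlike the slide's)."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):              # P() / down(): take a unit, block if none
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):            # V() / up(): return a unit, wake one waiter
        with self._cond:
            self._value += 1
            self._cond.notify()

sem = Semaphore(1)               # initialized to 1: a binary semaphore
shared = []

def worker(name):
    sem.wait()
    shared.append(name)          # critical section
    sem.signal()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
result = sorted(shared)
print(result)  # [0, 1, 2, 3, 4]
```

The condition variable supplies the atomicity the slide demands: its internal lock makes each wait/signal execute all-or-nothing.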

26 Semaphore example: Reader/Writer Problem
Basic problem:
– An object is shared among several threads, some of which only read it and some of which write it
– We can allow multiple readers at a time, but only one writer at a time
– How do we control access to the object to permit this protocol?

27 A Simplistic Reader/Writer Semaphore Solution

    SEMAPHORE wrt;    // controls entry for a writer or the first reader
    SEMAPHORE semap;  // controls access to readcount
    int readcount;    // number of active readers

    write process:
        wait(wrt);       // any writers or readers in there?
        <perform write>
        signal(wrt);     // allow others in

    read process:
        wait(semap);                  // ensure exclusion on readcount
        readcount = readcount + 1;    // one more reader
        if (readcount == 1) { wait(wrt); }    // we're the first: lock out writers
        signal(semap);
        <perform read>
        wait(semap);                  // ensure exclusion on readcount
        readcount = readcount - 1;    // one fewer reader
        if (readcount == 0) { signal(wrt); }  // no more readers: let writers in
        signal(semap);
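The same solution transliterates directly into Python using threading.Semaphore. The reader/writer bodies just append to a shared log; that log and the thread names are invented for the example.

```python
import threading

wrt = threading.Semaphore(1)     # a writer, or the first reader, holds this
semap = threading.Semaphore(1)   # protects readcount
readcount = 0
log = []                         # records each completed access

def reader(name):
    global readcount
    semap.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    semap.release()
    log.append(("read", name))   # read the shared object
    semap.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    semap.release()

def writer(name):
    wrt.acquire()
    log.append(("write", name))  # exclusive access
    wrt.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("w",)))
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))  # 4: three reads and one write all completed
```

Which accesses interleave with which is up to the scheduler, exactly as the next slide's notes point out; only the exclusion guarantees are fixed.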

28 Reader/Writer Solution Notes
Note that:
1. The first reader blocks if there is a writer; any other readers who try to enter will then block on semap.
2. Once a writer exits, all blocked readers will fall through.
3. The last reader to exit signals a waiting writer.
4. When a writer exits, if there is both a reader and a writer waiting, which goes next depends on the scheduler.

29 Semaphore Types
In general, there are two types of semaphores, based on the initial value:
– A binary semaphore guarantees mutually exclusive access to a resource (only one entry at a time). A binary semaphore is initialized to 1. This is also called a mutex semaphore, but not everything you hear called a mutex is implemented as a semaphore.
– A counted semaphore represents a resource with many units available (as indicated by the count to which it is initialized). A counted semaphore lets a thread pass as long as more instances are available.

30 Another semaphore example: Bounded Buffer Problem
The problem: there is a buffer shared by producer processes, which insert into it, and consumer processes, which remove from it
The processes are concurrent, so we must control their access to the (shared) variables that describe the state of the buffer

31 Simple Bounded Buffer Semaphore Solution

    SEMAPHORE mutex = 1;  // mutual exclusion on shared data
    SEMAPHORE empty = n;  // count of empty buffers
    SEMAPHORE full = 0;   // count of full buffers

    producer:
        wait(empty);      // one fewer empty buffer, block if none available
        wait(mutex);      // get access to pointers
        <add item to buffer>
        signal(mutex);    // done with pointers
        signal(full);     // note one more full buffer

    consumer:
        wait(full);       // wait until there's a full buffer
        wait(mutex);      // get access to pointers
        <remove item from buffer>
        signal(mutex);    // done with pointers
        signal(empty);    // note there's one more empty buffer
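Here is that solution made concrete with threading.Semaphore and a deque as the buffer. The capacity N, item counts, and function names are chosen for the example.

```python
import threading
from collections import deque

N = 4                            # buffer capacity (illustrative)
buffer = deque()
mutex = threading.Semaphore(1)   # binary semaphore guarding the buffer
empty = threading.Semaphore(N)   # counts empty slots, starts at N
full = threading.Semaphore(0)    # counts full slots, starts at 0

def producer(count):
    for item in range(count):
        empty.acquire()          # wait(empty): block if no free slot
        mutex.acquire()
        buffer.append(item)      # add item to buffer
        mutex.release()
        full.release()           # signal(full): one more full slot

consumed = []

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full): block until an item exists
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()          # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: all items, in FIFO order
```

Note the ordering discipline: the producer waits on empty before mutex. Reversing those two waits can deadlock, with the producer holding the mutex while blocked on a full buffer.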

32 Things to Remember About Semaphores
A very common synchronization primitive
Two main elements: a count and a list of waiters
Two types: counted and binary semaphores
Other synchronization operations can be built on top of semaphores

33 Monitors
A programming language construct that supports controlled access to shared data
– synchronization code is added by the compiler and enforced at runtime
– why does this help?
A monitor is a software module that encapsulates:
– shared data structures
– procedures that operate on the shared data
– synchronization between concurrent processes that invoke those procedures
A monitor protects its data from unstructured access
– guarantees the data can only be accessed through the procedures, hence in legitimate ways

34 A monitor
[diagram: shared data and the operations (procedures) on it live inside the monitor; a waiting queue of processes trying to enter; at most one process is in the monitor at a time]

35 Monitor facilities
Mutual exclusion
– only one process can be executing inside at any time
  thus, synchronization is implicitly associated with the monitor
– if a second process tries to enter a monitor procedure, it blocks until the first has left the monitor
  more restrictive than semaphores! but easier to use, most of the time
Once inside, a process may discover it can’t continue, and may wish to sleep
– or allow some other waiting process to continue
– condition variables are provided within the monitor
  processes can wait on them or signal others to continue
  a condition variable can only be accessed from inside the monitor

36 Condition Variables
A place to wait; sometimes called a rendezvous point
Three operations on condition variables:
– wait(c)
  release the monitor lock, so somebody else can get in
  wait for somebody else to signal the condition
  thus, condition variables have wait queues
– signal(c)
  wake up at most one waiting process/thread
  if there are no waiting processes, the signal is lost
  this is different from semaphores: no history!
– broadcast(c)
  wake up all waiting processes/threads

37 Bounded Buffer using Monitors

    Monitor bounded_buffer {
        buffer resources[N];
        condition not_full, not_empty;

        procedure add_entry(resource x) {
            while (array "resources" is full)
                wait(not_full);
            add "x" to array "resources";
            signal(not_empty);
        }

        procedure get_entry(resource *x) {
            while (array "resources" is empty)
                wait(not_empty);
            *x = get resource from array "resources";
            signal(not_full);
        }
    }
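A runnable monitor-style version: one lock plays the monitor lock, and two threading.Condition objects share it as the condition variables. Python's conditions have Mesa semantics (see the next two slides), which is why the waits sit inside while loops. The class shape is a sketch, not the slide's exact Monitor syntax.

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: the lock gives mutual exclusion,
    and both condition variables are built on that same lock."""
    def __init__(self, n):
        self._n = n
        self._items = deque()
        self._lock = threading.Lock()             # the monitor lock
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def add_entry(self, x):
        with self._lock:
            while len(self._items) == self._n:    # Mesa: recheck after waking
                self._not_full.wait()             # wait releases the lock
            self._items.append(x)
            self._not_empty.notify()              # signal(not_empty)

    def get_entry(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()
            x = self._items.popleft()
            self._not_full.notify()               # signal(not_full)
            return x

buf = BoundedBuffer(2)
out = []
consumer = threading.Thread(
    target=lambda: [out.append(buf.get_entry()) for _ in range(5)])
consumer.start()
for i in range(5):
    buf.add_entry(i)       # blocks whenever the buffer already holds 2 items
consumer.join()
print(out)  # [0, 1, 2, 3, 4]
```

Compare this with the semaphore version of slide 31: the monitor needs no separate empty/full counts, because each procedure can simply examine the shared state while holding the lock.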

38 Two Kinds of Monitors
Hoare monitors: signal(c) means
– run the waiter immediately; the signaller blocks immediately
  the condition is guaranteed to hold when the waiter runs
  but the signaller must restore the monitor invariants before signalling!
Mesa monitors: signal(c) means
– the waiter is made ready, but the signaller continues
  the waiter runs when the signaller leaves the monitor (or waits)
  the condition is not necessarily true when the waiter runs again
– the signaller need not restore the invariant until it leaves the monitor
– being woken up is only a hint that something has changed
  must recheck the condition

39 Examples
Hoare monitors:
    if (notReady)
        wait(c);
Mesa monitors:
    while (notReady)
        wait(c);
Mesa monitors are easier to use
– more efficient, fewer context switches
– directly support broadcast
Hoare monitors leave less to chance
– when you wake up, the condition is guaranteed to be what you expect

40 Next Time
A look at some of the synchronization routines used in Windows NT