

Synchronization

NOTE to instructors: it is helpful to walk through an example such as readers/writers locks for illustrating the use of condition variables. I haven't included it in these slides, as I usually take a class to do that example on the board, showing what happens as multiple threads stop at various points during the execution while other threads run.

Synchronization Motivation

    Thread 1:                    Thread 2:
        p = someFn();                while (!isInitialized)
        isInitialized = true;            ;
                                     q = aFn(p);
                                     if (q != aFn(someFn()))
                                         panic();

Will Thread 2 always compute the right q? Sadly, no. The compiler is free to reorder Thread 1's two writes, so isInitialized can become true before p is assigned. And how can you tell if your compiler might be doing this to you? Even if you wrote the program and set up the Makefile with the right compiler flags, what is to keep someone else from coming along, seeing the Makefile, and saying: hm, I wonder why they haven't turned on optimization? I need it to go fast, so let's try that. It works, so they move on. Only months later, when the program gets out in the real world, does it start crashing! And it is not just the compiler: the hardware reorders memory operations too. For memory to do what you want, behave sequentially, the hardware would have to give up much of its parallelism.
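A safe version of this initialization handshake can be sketched in Python: threading.Event supplies the ordering guarantee that the bare isInitialized flag lacks. This is an illustration, not the course's code; the value 42 stands in for someFn()'s result.

```python
import threading

p = None
initialized = threading.Event()  # replaces the bare isInitialized flag

def thread1():
    global p
    p = 42              # stands in for p = someFn()
    initialized.set()   # publish: the write to p is visible to waiters

def thread2(results):
    initialized.wait()  # blocks until set(); no busy-waiting on a bare flag
    results.append(p)   # guaranteed to see the write to p

results = []
t1 = threading.Thread(target=thread1)
t2 = threading.Thread(target=thread2, args=(results,))
t2.start(); t1.start()
t1.join(); t2.join()
print(results)  # [42]
```

The point of Event over a plain boolean is that set()/wait() act as a synchronization point, so the runtime may not reorder the write to p past the publication.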

Too Much Milk Example

    Time     Person A                       Person B
    12:30    Look in fridge. Out of milk.
    12:35    Leave for store.
    12:40    Arrive at store.               Look in fridge. Out of milk.
    12:45    Buy milk.                      Leave for store.
    12:50    Arrive home, put milk away.    Arrive at store.
    12:55                                   Buy milk.
    1:00                                    Arrive home, put milk away. Oh no!

Definitions

- Race condition: output of a concurrent program depends on the order of operations between threads
- Mutual exclusion: only one thread does a particular thing at a time
- Critical section: piece of code that only one thread can execute at once
- Lock: prevents someone from doing something
  - Lock before entering a critical section, before accessing shared data
  - Unlock when leaving, after done accessing shared data
  - Wait if locked (all synchronization involves waiting!)

Too Much Milk, Try #1

Correctness properties:
- Someone buys if needed (liveness)
- At most one person buys (safety)

Try #1: leave a note

    if (!note)
        if (!milk) {
            leave note
            buy milk
            remove note
        }

What can go wrong? Thread A and Thread B can both check for a note before either has left one, so both buy milk.

Too Much Milk, Try #2

    Thread A:                 Thread B:
        leave note A              leave note B
        if (!note B) {            if (!note A) {
            if (!milk)                if (!milk)
                buy milk                  buy milk
        }                         }
        remove note A             remove note B

What can go wrong? Starvation: if A and B each leave their note before checking for the other's, each sees the other's note and neither buys milk; no one ever buys.

Too Much Milk, Try #3

    Thread A:                 Thread B:
        leave note A              leave note B
        while (note B)  // X      if (!note A) {  // Y
            do nothing;               if (!milk)
        if (!milk)                        buy milk
            buy milk;             }
        remove note A             remove note B

At Y: if there is no note A, it is safe for B to buy (A hasn't started yet). If there is a note A, A is either buying or waiting for B to quit, so it is OK for B to quit.

At X: if there is no note B, it is safe for A to buy. If there is a note B, A doesn't know, so A hangs around. Either B buys (done), or B doesn't buy and A will.

So we can guarantee at X and Y that either: it is safe for me to buy, or the other will buy, so it is OK to quit.

Lessons

- The solution is complicated; "obvious" code often has bugs
- Modern compilers/architectures reorder instructions, making reasoning even more difficult
- Generalizing to many threads/processors: Peterson's algorithm is even more complex

Locks

- lock_acquire: wait until the lock is free, then take it
- lock_release: release the lock, waking up anyone waiting for it

Properties:
- At most one lock holder at a time (safety)
- If no one is holding the lock, acquire gets it (progress)
- If all lock holders finish and there are no higher-priority waiters, a waiter eventually gets the lock (progress)

Too Much Milk, #4

Locks allow concurrent code to be much simpler:

    lock_acquire()
    if (!milk)
        buy milk
    lock_release()

How do we implement locks? (Later.) With hardware support for read/modify/write instructions.
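The same structure can be run in Python as a hedged sketch: buying milk is modeled as incrementing a counter, and the with-statement plays the role of lock_acquire/lock_release. Even with many shoppers, at most one buys.

```python
import threading

fridge_lock = threading.Lock()
milk = 0

def buy_if_needed():
    global milk
    with fridge_lock:    # lock_acquire()
        if milk == 0:    # if (!milk)
            milk += 1    #     buy milk
    # lock_release() happens on leaving the with-block

threads = [threading.Thread(target=buy_if_needed) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(milk)  # 1
```

Because the check and the purchase happen inside one critical section, no interleaving of the ten threads can produce two cartons of milk.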

Lock Example: Malloc/Free

    char *malloc(n) {
        lock_acquire(lock);
        p = allocate memory;
        lock_release(lock);
        return p;
    }

    void free(char *p) {
        lock_acquire(lock);
        put p back on free list;
        lock_release(lock);
    }

Rules for Using Locks

- Lock is initially free
- Always acquire the lock before accessing a shared data structure (at the beginning of the procedure!)
- Always release the lock after finishing with the shared data (at the end of the procedure!)
- DO NOT "throw" the lock for someone else to release
- Never access shared data without the lock (danger!)

Will this code work?

    if (p == NULL) {
        lock_acquire(lock);
        p = newP();
        lock_release(lock);
    }
    use p->field1;

    newP() {
        p = malloc(sizeof(p));
        p->field1 = …
        p->field2 = …
        return p;
    }

No! The hardware/compiler can write p before initialization of its fields is complete, so another thread can see p != NULL and read uninitialized fields.

Lock example: Bounded Buffer

    Initially: front = last = 0; lock = FREE; size is buffer capacity

    tryget() {
        item = NULL;
        lock.acquire();
        if (front < last) {
            item = buf[front % size];
            front++;
        }
        lock.release();
        return item;
    }

    tryput(item) {
        lock.acquire();
        if ((last - front) < size) {
            buf[last % size] = item;
            last++;
        }
        lock.release();
    }

For simplicity, assume no wraparound on the integers front and last; I'll assume you can fix that if you want. Lock at the beginning of each procedure; unlock at the end; no access outside the locks. Note that once we release the lock, we don't know whether the buffer is still empty; we only know the state of the buffer while holding the lock!
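Here is a runnable Python sketch of the same non-blocking tryget/tryput design. The class and method names mirror the slide; the boolean return from tryput is an illustrative addition so callers can tell whether the put succeeded.

```python
import threading

class TryBuffer:
    """Non-blocking bounded buffer: tryget/tryput return immediately."""
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.front = 0   # next slot to read
        self.last = 0    # next slot to write
        self.lock = threading.Lock()

    def tryget(self):
        item = None
        with self.lock:
            if self.front < self.last:             # buffer non-empty
                item = self.buf[self.front % self.size]
                self.front += 1
        return item                                # None if buffer was empty

    def tryput(self, item):
        with self.lock:
            if (self.last - self.front) < self.size:   # buffer not full
                self.buf[self.last % self.size] = item
                self.last += 1
                return True
        return False                               # buffer was full

b = TryBuffer(2)
print(b.tryget())      # None: buffer starts empty
b.tryput("a"); b.tryput("b")
print(b.tryput("c"))   # False: buffer is full
print(b.tryget())      # a (FIFO order)
```

As on the slide, front and last grow without wraparound; a production version would reduce them modulo size periodically.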

Condition Variables

Called only when holding a lock:
- Wait: atomically release the lock and relinquish the processor until signalled
- Signal: wake up a waiter, if any
- Broadcast: wake up all waiters, if any

Example: Bounded Buffer

    Initially: front = last = 0; size is buffer capacity; empty/full are condition variables

    get() {
        lock.acquire();
        while (front == last)
            empty.wait(lock);
        item = buf[front % size];
        front++;
        full.signal(lock);
        lock.release();
        return item;
    }

    put(item) {
        lock.acquire();
        while ((last - front) == size)
            full.wait(lock);
        buf[last % size] = item;
        last++;
        empty.signal(lock);
        lock.release();
    }

For simplicity, assume no wraparound on the integers front and last; I'll assume you can fix that if you want. Locks at the beginning of each procedure; unlock at the end; no access outside the procedure. Walk through an example.
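The same monitor runs directly in Python using threading.Condition. One sketch, under the assumption (matching the slide) that the two condition variables share the buffer's lock:

```python
import threading

class BoundedBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.front = 0
        self.last = 0
        self.lock = threading.Lock()
        self.empty = threading.Condition(self.lock)  # wait here when empty
        self.full = threading.Condition(self.lock)   # wait here when full

    def get(self):
        with self.lock:
            while self.front == self.last:   # ALWAYS wait in a loop
                self.empty.wait()
            item = self.buf[self.front % self.size]
            self.front += 1
            self.full.notify()               # a slot just freed up
            return item

    def put(self, item):
        with self.lock:
            while (self.last - self.front) == self.size:
                self.full.wait()
            self.buf[self.last % self.size] = item
            self.last += 1
            self.empty.notify()              # an item is now available

b = BoundedBuffer(2)
results = []
consumer = threading.Thread(
    target=lambda: [results.append(b.get()) for _ in range(5)])
consumer.start()
for i in range(5):
    b.put(i)        # blocks whenever the buffer holds 2 items
consumer.join()
print(results)  # [0, 1, 2, 3, 4]
```

Python's Condition.wait releases and reacquires the shared lock atomically, which is exactly the Wait primitive the slide defines.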

Pre/Post Conditions

What is the state of the bounded buffer at lock acquire?
- front <= last
- front + buffer size >= last

These conditions are also true on return from wait, and at lock release. This allows a proof of correctness.

Condition Variables

- ALWAYS hold the lock when calling wait, signal, broadcast
  - A condition variable is synchronization FOR shared state
  - ALWAYS hold the lock when accessing shared state
- A condition variable is memoryless
  - If you signal when no one is waiting: no op
  - If a wait happens before the signal: the waiter wakes up
- Wait atomically releases the lock
  - What if wait, then release? What if release, then wait?

Condition Variables, cont'd

- When a thread is woken up from wait, it may not run immediately
  - Signal/broadcast put the thread on the ready list
  - When the lock is released, anyone might acquire it
- Wait MUST be in a loop:

      while (needToWait())
          condition.Wait(lock);

- This simplifies the implementation of condition variables and locks, and of code that uses condition variables and locks
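The wait-in-a-loop rule can be demonstrated concretely. In this hedged Python sketch, notify_all wakes every consumer, but only one finds an item; the predicate re-check sends the other back to wait instead of popping from an empty list (which an "if" would do):

```python
import threading

lock = threading.Lock()
cond = threading.Condition(lock)
items = []
taken = []

def consumer():
    with lock:
        while not items:      # "if" here would be a bug: another woken
            cond.wait()       # thread may have consumed the item already
        taken.append(items.pop())

threads = [threading.Thread(target=consumer) for _ in range(2)]
for t in threads: t.start()

with lock:
    items.append("x")
    cond.notify_all()         # may wake BOTH consumers; only one finds an item

with lock:
    items.append("y")
    cond.notify_all()         # the re-checking consumer now succeeds

for t in threads: t.join()
print(sorted(taken))  # ['x', 'y']
```

Whatever the interleaving, each consumer ends up with exactly one item, because every wakeup re-tests the predicate before proceeding.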

Java Manual

"When waiting upon a Condition, a 'spurious wakeup' is permitted to occur, in general, as a concession to the underlying platform semantics. This has little practical impact on most application programs as a Condition should always be waited upon in a loop, testing the state predicate that is being waited for."

Structured Synchronization

- Identify objects or data structures that can be accessed by multiple threads concurrently (in the Pintos kernel: everything!)
- Add a lock to each object/module
  - Grab the lock at the start of every method/procedure
  - Release the lock at the end
- If you need to wait:

      while (needToWait())
          condition.Wait(lock);

  Do not assume that when you wake up, the signaller just ran
- If you do something that might wake someone up: signal or broadcast
- Always leave shared state variables in a consistent state when the lock is released, or when waiting

Mesa vs. Hoare semantics

Mesa (used in the textbook; also attributed to Brinch Hansen):
- Signal puts the waiter on the ready list
- The signaller keeps the lock and the processor

Hoare:
- Signal gives the processor and lock to the waiter
- When the waiter finishes, the processor/lock are given back to the signaller
- Nested signals are possible!

FIFO Bounded Buffer (Hoare semantics)

    Initially: front = last = 0; size is buffer capacity; empty/full are condition variables

    get() {
        lock.acquire();
        if (front == last)
            empty.wait(lock);
        item = buf[front % size];
        front++;
        full.signal(lock);
        lock.release();
        return item;
    }

    put(item) {
        lock.acquire();
        if ((last - front) == size)
            full.wait(lock);
        buf[last % size] = item;
        last++;
        empty.signal(lock);  // CAREFUL: someone else ran
        lock.release();
    }

For simplicity, assume no wraparound on the integers front and last. Locks at the beginning of each procedure; unlock at the end; no access outside the procedure. Walk through an example.

FIFO Bounded Buffer (Mesa semantics)

- Create a condition variable for every waiter
- Queue the condition variables (in FIFO order)
- Signal picks the front of the queue to wake up
- CAREFUL if there are spurious wakeups!
- Easily extends to the case where the queue is LIFO, priority, priority donation, …
- With Hoare semantics, this is not as easy

FIFO Bounded Buffer (Mesa semantics, put() is similar)

    Initially: front = last = 0; size is buffer capacity; nextGet, nextPut are queues of condition variables

    get() {
        lock.acquire();
        if (front == last) {
            self = new Condition;
            nextGet.Append(self);
            while (front == last)
                self.wait(lock);
            nextGet.Remove(self);
            delete self;
        }
        item = buf[front % size];
        front++;
        if (!nextPut.empty())
            nextPut.first()->signal(lock);
        lock.release();
        return item;
    }

For simplicity, assume no wraparound on the integers front and last. Locks at the beginning of each procedure; unlock at the end; no access outside the procedure. Walk through an example.

Implementing Synchronization

Implementing Synchronization

Take 1: use memory load/store
- See the Too Much Milk solution / Peterson's algorithm

Take 2: use interrupts

    lock.acquire() { disable interrupts }
    lock.release() { enable interrupts }

Lock Implementation, Uniprocessor

    LockAcquire() {
        disableInterrupts();
        if (value == BUSY) {
            waiting.add(current TCB);
            suspend();
        } else {
            value = BUSY;
        }
        enableInterrupts();
    }

    LockRelease() {
        disableInterrupts();
        if (!waiting.Empty()) {
            thread = waiting.Remove();
            readyList.Append(thread);
        } else {
            value = FREE;
        }
        enableInterrupts();
    }

Note that the lock value is "thrown" here: when there is a waiter, release never sets value to FREE; the lock is handed directly to the woken thread. Also: enabling/disabling interrupts is a memory barrier operation, so it forces all memory writes to complete first.

Multiprocessor

Read-modify-write (RMW) instructions:
- Atomically read a value from memory, operate on it, and then write it back to memory
- Intervening instructions are prevented in hardware

Examples:
- Test-and-set
- Intel: xchgb, lock prefix
- Compare-and-swap

Does it matter which type of RMW instruction we use? Not for implementing locks and condition variables!

Spinlocks

A spinlock is a lock where the processor waits in a loop for the lock to become free.
- Assumes the lock will be held for a short time
- Used to protect the ready list in order to implement locks

    SpinlockAcquire() {
        while (testAndSet(&lockValue) == BUSY)
            ;
    }

    SpinlockRelease() {
        lockValue = FREE;
    }
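Python has no real test-and-set instruction, so the sketch below fakes the atomic read-modify-write with an internal Lock purely to illustrate the spinlock structure; on real hardware this would be a single instruction such as x86 xchg. The time.sleep(0) in the spin loop just yields the interpreter so the lock holder can run; a real spinlock on a multiprocessor simply loops.

```python
import threading
import time

class SimulatedTAS:
    """Fakes an atomic test-and-set. The internal Lock exists only to
    simulate the atomicity that hardware provides in one instruction."""
    FREE, BUSY = 0, 1
    def __init__(self):
        self.value = self.FREE
        self._atomic = threading.Lock()
    def test_and_set(self):
        with self._atomic:           # atomically: read old value, set BUSY
            old = self.value
            self.value = self.BUSY
            return old
    def clear(self):
        self.value = self.FREE

class Spinlock:
    def __init__(self):
        self.word = SimulatedTAS()
    def acquire(self):
        while self.word.test_and_set() == SimulatedTAS.BUSY:
            time.sleep(0)            # yield; real spinlocks just loop
    def release(self):
        self.word.clear()

counter = 0
spin = Spinlock()

def work():
    global counter
    for _ in range(200):
        spin.acquire()               # short critical section, as assumed
        counter += 1
        spin.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 800
```

Only the thread that reads FREE out of test_and_set proceeds; everyone else keeps spinning, which is why spinlocks are reserved for critical sections that finish quickly.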

Lock Implementation, Multiprocessor

    LockAcquire() {
        spinLock.Acquire();
        disableInterrupts();
        if (value == BUSY) {
            waiting.add(current TCB);
            suspend();
        } else {
            value = BUSY;
        }
        enableInterrupts();
        spinLock.Release();
    }

    LockRelease() {
        spinLock.Acquire();
        disableInterrupts();
        if (!waiting.Empty()) {
            thread = waiting.Remove();
            readyList.Append(thread);
        } else {
            value = FREE;
        }
        enableInterrupts();
        spinLock.Release();
    }

This assumes there is a single spinlock protecting all scheduler operations. Should interrupts be disabled before or after the spinlock Acquire? OK either way, as long as the critical section completes quickly.

Lock Implementation, Linux

Fast path:
- If the lock is FREE and no one is waiting: test&set

Slow path:
- If the lock is BUSY or someone is waiting: see the previous slide

User-level locks:
- Fast path: acquire the lock using test&set
- Slow path: system call to the kernel, to use a kernel lock

Semaphores

A semaphore has a non-negative integer value.
- P() atomically waits for the value to become > 0, then decrements it
- V() atomically increments the value (waking up a waiter if needed)

Semaphores are like integers, except:
- The only operations are P and V
- The operations are atomic
- If the value is 1, two P's will result in value 0 and one waiter

Semaphores are useful for unlocked wait: interrupt handlers, fork/join.
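The "two P's on a value of 1" behavior maps directly onto Python's threading.Semaphore, where acquire is P and release is V. A small hedged demo (the names are illustrative):

```python
import threading

sem = threading.Semaphore(1)   # initial value 1
order = []

def worker():
    sem.acquire()              # P(): wait for value > 0, then decrement
    order.append("second")

t = threading.Thread(target=worker)
sem.acquire()                  # P(): value 1 -> 0
order.append("first")
t.start()                      # t blocks inside P(): value is 0
sem.release()                  # V(): increment, waking the waiter
t.join()
print(order)  # ['first', 'second']
```

Unlike a condition variable, the semaphore would also remember a V issued before the worker's P, which is exactly the "history" property discussed in the later slides on building condition variables from semaphores.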

Semaphore Bounded Buffer

    Initially: front = last = 0; size is buffer capacity; empty/full are semaphores

    get() {
        empty.P();
        mutex.P();
        item = buf[front % size];
        front++;
        mutex.V();
        full.V();
        return item;
    }

    put(item) {
        full.P();
        mutex.P();
        buf[last % size] = item;
        last++;
        mutex.V();
        empty.V();
    }

Why does the producer P and V different semaphores than the consumer? The producer creates full buffers and destroys empty buffers!
Is the order of the P's important? Yes! Reversing them can deadlock.
Is the order of the V's important? No, except that it can affect scheduling efficiency.
What if we have 2 producers or 2 consumers? Do we need to change anything?
Can we use semaphores for FIFO ordering?
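This pseudocode runs as written with threading.Semaphore. The initial values below (mutex = 1, empty = 0, full = size) are an assumption consistent with the slide's P/V pattern, in which each semaphore is named for the condition you must wait out of: get P's "empty", put P's "full".

```python
import threading

class SemBoundedBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.front = 0
        self.last = 0
        self.mutex = threading.Semaphore(1)    # protects buf/front/last
        self.empty = threading.Semaphore(0)    # get() waits here when empty
        self.full = threading.Semaphore(size)  # put() waits here when full

    def get(self):
        self.empty.acquire()   # empty.P(): wait for an item
        self.mutex.acquire()   # mutex.P()
        item = self.buf[self.front % self.size]
        self.front += 1
        self.mutex.release()   # mutex.V()
        self.full.release()    # full.V(): one more free slot
        return item

    def put(self, item):
        self.full.acquire()    # full.P() BEFORE mutex.P(): order matters!
        self.mutex.acquire()
        self.buf[self.last % self.size] = item
        self.last += 1
        self.mutex.release()
        self.empty.release()   # empty.V(): one more item

b = SemBoundedBuffer(2)
results = []
producer = threading.Thread(target=lambda: [b.put(i) for i in range(5)])
producer.start()               # blocks once the buffer holds 2 items
for _ in range(5):
    results.append(b.get())
producer.join()
print(results)  # [0, 1, 2, 3, 4]
```

Swapping the two P's in put would let a producer sleep on full while holding mutex, so no consumer could ever run get to free a slot: the deadlock the slide warns about.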

Implementing Condition Variables using Semaphores (Take 1)

    wait(lock) {
        lock.release();
        sem.P();
        lock.acquire();
    }

    signal() {
        sem.V();
    }

Condition variables have no history, but semaphores do.
- What if a thread signals and no one is waiting? No op.
- What if a thread later calls wait? The thread waits.
- What if a thread V's and no one is waiting? Increment.
- What if a thread later does P? Decrement and continue.

In other words, P and V are commutative: the result is the same no matter what order they occur in. Condition variables are not commutative: wait doesn't return until the next signal. That's why they must be used in a critical section; they need access to the state variables to do their job. With monitors, if I signal 15,000 times when no one is waiting, the next wait will still go to sleep! But with the code above, the next 15,000 threads that wait will return immediately!

Implementing Condition Variables using Semaphores (Take 2)

    wait(lock) {
        lock.release();
        sem.P();
        lock.acquire();
    }

    signal() {
        if semaphore queue is not empty
            sem.V();
    }

Does this work? For one, it is not legal to look at the contents of the semaphore's queue. But there is also a race condition: the signaller can slip in after the lock is released and before the wait; it sees no one waiting, skips the V, and the waiter never wakes up! Suppose instead we put the thread on a separate condition-variable queue, then release the lock, then P. That won't work either, because a third thread can come in and try to wait, gobbling up the V, so that the original waiter never wakes up. We need to release the lock and go to sleep atomically.

Implementing Condition Variables using Semaphores (Take 3)

    wait(lock) {
        sem = new Semaphore;
        queue.Append(sem);   // queue of waiting threads
        lock.release();
        sem.P();
        lock.acquire();
    }

    signal() {
        if (!queue.Empty()) {
            sem = queue.Remove();
            sem.V();         // wake up waiter
        }
    }

This works: the waiter appends its semaphore to the queue before releasing the lock, so if the signaller slips in between the release and the P, the V is remembered by the semaphore and the waiter's P returns immediately. The release-and-sleep is effectively atomic.
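Take 3 can be rendered in runnable Python using a deque of per-waiter semaphores (class and variable names here are illustrative). Because the semaphore is enqueued before the lock is released, a signal that lands in the gap is remembered:

```python
import threading
from collections import deque

class CondVar:
    """Take 3: each waiter gets its own semaphore, queued in FIFO order;
    signal wakes exactly the waiter at the front of the queue."""
    def __init__(self):
        self.queue = deque()      # semaphores of waiting threads

    def wait(self, lock):         # caller must hold lock
        sem = threading.Semaphore(0)
        self.queue.append(sem)    # enqueue BEFORE releasing the lock
        lock.release()
        sem.acquire()             # sleep; a V issued meanwhile is remembered
        lock.acquire()

    def signal(self):             # caller must hold lock
        if self.queue:
            sem = self.queue.popleft()
            sem.release()         # wake up the front waiter

lock = threading.Lock()
cv = CondVar()
ready = False
seen = []

def waiter():
    lock.acquire()
    while not ready:              # Mesa semantics: re-check after waking
        cv.wait(lock)
    seen.append(ready)
    lock.release()

t = threading.Thread(target=waiter)
t.start()
lock.acquire()
ready = True
cv.signal()
lock.release()
t.join()
print(seen)  # [True]
```

Whether the waiter reaches wait() before or after the signal, it observes ready == True: either the predicate check skips the wait entirely, or the queued semaphore delivers the remembered V.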

Synchronization Summary

Use a consistent structure:
- Always use locks and condition variables
- Always acquire the lock at the beginning of a procedure, release it at the end
- Always hold the lock when using a condition variable
- Always wait in a while loop
- Never spin in sleep()