
CS162: Operating Systems and Systems Programming, Lecture 7: Synchronization: Lock Implementation, Higher-Level Synchronization. 1 July 2015, Charles Reiss. http://cs162.eecs.berkeley.edu/

Recall: Which scheduler should you use?
I care more than anything else about...
CPU throughput: First-Come First-Served
Average response time: SRTF approximation
I/O throughput: SRTF approximation
Fairness (long-term CPU): something like Linux CFS
Fairness (wait time for CPU): something like Round Robin
Meeting deadlines: Earliest Deadline First
Favoring important users: Strict Priority

Recall: Locks
[Timeline figure: Alice and Bob each run doHomework(); watchTV(); MilkLock.Acquire(); if (noMilk) { buy milk } MilkLock.Release(). Bob's Acquire() starts while Alice holds the lock and only finishes after her Release().]

Recall: Locks
Critical section: only one thread can enter at a time.

Recall: Locks
"Holding" a lock doesn't prevent context switches; it just stops other threads from acquiring it.

Recall: Locks
Waiting threads don't consume processor time: a waiting thread places itself on the lock's wait queue and is put back on the run queue when the lock is released.

Recall: Disabling Interrupts
Critical sections in the kernel: on a single processor, with interrupts disabled, nothing else can run. A primitive critical section!
Build the other properties of a lock on top of this: not excluding everything else, and putting the waiting thread to sleep so other threads can run.

Recall: Implementing Locks: Single Core
Idea: disable interrupts for mutual exclusion on accesses to value.

int value = FREE;

Acquire() {
  disable interrupts;
  if (value == BUSY) {
    put thread on wait queue;
    run_new_thread();   // Enable interrupts?
  } else {
    value = BUSY;
  }
  enable interrupts;
}

Release() {
  disable interrupts;
  if (anyone waiting) {
    take a thread off queue;
  } else {
    value = FREE;
  }
  enable interrupts;
}

Multiprocessors and Locks
Disabling interrupts doesn't prevent other processors from doing anything.
Solution: hardware support for atomic operations.

Recall: Atomic Operations
Definition: an operation that runs to completion or not at all.
We need some atomic operations to allow threads to work together.
Example: loading or storing a word.
Some instructions are not atomic, e.g. a double-precision floating-point store on many platforms.
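
To make the last point concrete, here is a minimal sketch (assuming a hypothetical 32-bit target where a 64-bit store compiles to two 32-bit stores) of why a multi-word store is not atomic: a concurrent reader can observe a "torn" value that is neither the old value nor the new one.

#include <stdint.h>

volatile uint64_t shared = 0;

void writer(void) {
    /* On the assumed 32-bit target this may compile to two stores:
       first the low 32 bits, then the high 32 bits. */
    shared = 0xFFFFFFFFFFFFFFFFull;
}

uint64_t reader(void) {
    /* Running between the two stores, this could return
       0x00000000FFFFFFFF, a value that was never written. */
    return shared;
}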

Outline
Hardware support for locks and spin locks
Semaphores
Monitors


Atomic Read/Modify/Write
Recall: atomic loads and stores alone are not good enough.
Read-modify-write instructions (or instruction sequences) atomically read a value from (shared) memory and write a new value back.
The hardware is responsible for making this work in spite of caches.

Read-Modify-Write Instructions

test&set (&address) {              /* most architectures */
  result = M[address];
  M[address] = 1;
  return result;
}

swap (&address, register) {        /* x86 */
  temp = M[address];
  M[address] = register;
  register = temp;
}

compare&swap (&address, reg1, reg2) {   /* 68000, x86-64 */
  if (reg1 == M[address]) {
    M[address] = reg2;
    return success;
  } else {
    return failure;
  }
}

load-linked & store-conditional (&address) {   /* MIPS R4000, Alpha */
  loop:
    ll   r1, M[address];
    movi r2, 1;                    /* Can do arbitrary computation here */
    sc   r2, M[address];
    beqz r2, loop;
}
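
In portable C, these instructions are reached through the C11 <stdatomic.h> operations. The sketch below (my own wrapper names, not from the lecture) shows how test&set and compare&swap map onto atomic_exchange and atomic_compare_exchange_strong, which the compiler lowers to the architecture's read-modify-write instruction.

#include <stdatomic.h>
#include <stdbool.h>

/* test&set: atomically write 1 and return the old value. */
int test_and_set(atomic_int *addr) {
    return atomic_exchange(addr, 1);
}

/* compare&swap: if *addr == expected, store desired; report success. */
bool compare_and_swap(atomic_int *addr, int expected, int desired) {
    return atomic_compare_exchange_strong(addr, &expected, desired);
}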

Locks with test&set

int value = 0;   // Free

Acquire() {
  while (test&set(value)) {
    // do nothing
  }
}

Release() {
  value = 0;
}

test&set (&address) {
  result = M[address];
  M[address] = 1;
  return result;
}

Lock free: test&set reads 0, sets value to 1, returns 0 (the old value) → Acquire finishes.
Lock not free: test&set reads 1, sets value to 1, returns 1 (the old value) → Acquire keeps waiting.

Locks with test&set: Busy waiting

int value = 0;   // Free

Acquire() {
  while (test&set(value)) {
    // do nothing
  }
}

Release() {
  value = 0;
}

Busy waiting consumes CPU time while waiting, keeping other threads from using the CPU, maybe even the thread holding the lock!
These are called "spin locks" (because they spin while busy).
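
As a runnable counterpart to the pseudocode above, here is a minimal C11 spin lock (a sketch, not a production lock: it spins with no backoff, and the type and function names are my own).

#include <stdatomic.h>

typedef struct { atomic_int value; } spinlock_t;   /* 0 = free, 1 = busy */

void spin_acquire(spinlock_t *l) {
    /* atomic_exchange returns the old value: 0 means we got the lock. */
    while (atomic_exchange(&l->value, 1)) {
        /* do nothing: busy-wait */
    }
}

void spin_release(spinlock_t *l) {
    atomic_store(&l->value, 0);
}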

Communicating Between Cores
[Figure: CPU 1, CPU 2, and CPU 3, each with its own cache, connected by a shared memory bus to main memory.]

Coherency
[Figure: three CPU caches and main memory. Memory holds 0x0=10, 0x4=20, 0x8=30, 0xC=40; each cache holds a subset of these addresses and values.]

Coherency: Reading Normally
[Figure: on a miss for address 0xF0, a cache sends the message "What's 0xF0?" on the bus; memory replies "0xF0 is 78" and the cache fills in the line.]

Coherency
What if CPU 1 wants to write to 0x0?

Coherency: Invalidate
[Figure: CPU 1's cache broadcasts the message "Get rid of 0x0"; the other caches discard their copies, and CPU 1's cache updates 0x0 to 11.]
Memory is not updated yet: write-back policy.

Coherency: Snooping
What if CPU 2 wants to read 0x0 again?
[Figure: CPU 2's cache sends "What's 0x0?"; CPU 1's cache, which holds the modified value, replies "0x0 is 11", so CPU 2 reads the up-to-date value even though memory still holds 10.]

Coherency: States
Caches need to remember whether any other cache also has a copy of their items.
Simple protocol with four states:
Modified: my value is more recent than memory, and I'm the only one who has it.
Owned: my value is more recent than memory, but other processors also have it.
Shared: my value is unmodified (obtained from the owner or from memory) and most recent.
Invalid: my value is junk; don't use it.

Coherency: Invalidate
[Figure, revisited with states: after "Get rid of 0x0", CPU 1's cache holds 0x0=11 in the Modified state, and the other caches' copies of 0x0 are Invalid.]

Coherency: Snooping
[Figure, revisited with states: after CPU 2 re-reads 0x0 and CPU 1's cache supplies the value 11, CPU 1's copy is Owned, CPU 2's copy is Shared, and the remaining cache's copy is Invalid.]

A Note on Memory Traffic
test&set requires communication! The local cache needs to "own" the memory address in order to write to it.
This means while (test&set(value)); spams the memory bus!
With two processors waiting, ownership flips back and forth, hurting the performance of all processors.

Test and Test and Set
Solution to the memory traffic problem:

int mylock = 0;   // Free

Acquire() {
  do {
    while (mylock);               // Wait until the lock might be free
  } while (test&set(&mylock));    // Exit loop if we got the lock
}

Release() {
  mylock = 0;
}

while (mylock) stays in the local cache (shared state) and simply waits for a cache invalidation from the lock holder.
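
The same refinement in runnable C11 form (a sketch with my own names, mirroring the spin lock shown earlier): spin on a plain load, which stays in the local cache, and only attempt the expensive atomic exchange when the lock appears free.

#include <stdatomic.h>

typedef struct { atomic_int value; } spinlock_t;   /* 0 = free, 1 = busy */

void ttas_acquire(spinlock_t *l) {
    do {
        /* Read-only spinning: no bus traffic while the line stays cached. */
        while (atomic_load(&l->value)) {
            /* wait until the lock might be free */
        }
    } while (atomic_exchange(&l->value, 1));   /* exit if we got the lock */
}

void ttas_release(spinlock_t *l) {
    atomic_store(&l->value, 0);
}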

Recall: Disable Interrupts
Idea: disable interrupts for mutual exclusion on accesses to value.

int value = FREE;

Acquire() {
  disable interrupts;
  if (value == BUSY) {
    put thread on wait queue;
    run_new_thread();   // Enable interrupts?
  } else {
    value = BUSY;
  }
  enable interrupts;
}

Release() {
  disable interrupts;
  if (anyone waiting) {
    take a thread off queue;
  } else {
    value = FREE;
  }
  enable interrupts;
}

Outline
Hardware support for locks and spin locks
Semaphores
Monitors

Semaphores
A generalized lock.
Definition: a semaphore has a non-negative integer value and two operations:
P() (also called down or wait): atomically wait for the semaphore to become positive, then decrement it by 1.
V() (also called up or signal): atomically increment the semaphore by 1, waking up a waiting P() thread if there is one.
P and V are from Dutch: proberen (test) and verhogen (increment).

Semaphores: Like Integers, But...
Cannot read or write the value directly: down (P) and up (V) only. Exception: initialization.
Never negative: a thread waits instead.
Two down operations can't take the value below 0, so some thread "wins".

Simple Semaphore Patterns

Mutual exclusion, same as a lock (a "binary semaphore"): initial value of semaphore = 1.

semaphore.P();
// Critical section goes here
semaphore.V();

Signaling other threads, e.g. ThreadJoin: initial value of semaphore = 0.

ThreadJoin()   { semaphore.P(); }
ThreadFinish() { semaphore.V(); }
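
Both patterns can be written directly with POSIX semaphores, where sem_wait is P()/down and sem_post is V()/up. The sketch below (error checking omitted; the worker/main structure is just for illustration) uses a semaphore initialized to 1 for mutual exclusion and one initialized to 0 for join-style signaling.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;   /* initialized to 1: binary semaphore for mutual exclusion */
sem_t done;    /* initialized to 0: signals that the worker has finished  */

void *worker(void *arg) {
    sem_wait(&mutex);                 /* P(): enter critical section */
    printf("in critical section\n");
    sem_post(&mutex);                 /* V(): leave critical section */
    sem_post(&done);                  /* "ThreadFinish": wake the joiner */
    return NULL;
}

int main(void) {
    pthread_t t;
    sem_init(&mutex, 0, 1);
    sem_init(&done, 0, 0);
    pthread_create(&t, NULL, worker, NULL);
    sem_wait(&done);                  /* "ThreadJoin": wait for the worker */
    pthread_join(t, NULL);
    return 0;
}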

Intuition for Semaphores
What do you need to wait for?
Example: a critical section to be finished.
Example: a queue to be non-empty.
Example: an array to have space for new items.
What can you count that will be 0 when you need to wait?
Example: the number of threads that can start the critical section now.
Example: the number of items in the queue.
Example: the number of empty spaces in the array.
Then: make sure the semaphore operations maintain that count.

Higher-Level Synchronization
We want to make synchronization convenient and correct.
Semaphores were the first solution (1963): first-class support for waiting for a condition to become true.

Producer/Consumer Correctness
Scheduling constraints:
The consumer waits for the producer if the buffer is empty.
The producer waits for the consumer if the buffer is full.
Mutual exclusion constraint:
Only one thread manipulates the buffer at a time.
One semaphore per constraint:

Semaphore fullBuffers;    // consumer's constraint
Semaphore emptyBuffers;   // producer's constraint
Semaphore mutex;          // mutual exclusion

Producer/Consumer Code

Semaphore fullBuffers = 0;             // Initially, buffer is empty
Semaphore emptyBuffers = numBuffers;   // Initially, all slots are empty
Semaphore mutex = 1;                   // No one using the buffer

Producer(item) {
  emptyBuffers.P();   // Wait until there is space
  mutex.P();          // Wait until buffer is free
  Enqueue(item);
  mutex.V();
  fullBuffers.V();    // Tell consumers there is more data
}

Consumer() {
  fullBuffers.P();    // Check that there is an item
  mutex.P();          // Wait until buffer is free
  item = Dequeue();
  mutex.V();
  emptyBuffers.V();   // Tell the producer there is more space
  return item;
}

Producer/Consumer Code (same code as above)
Invariants: fullBuffers is always <= the number of items in the queue; emptyBuffers is always <= the number of free slots in the queue.

Producer/Consumer Code (same code as above)
Can we do mutex.P(); then emptyBuffers.P(); instead, in the producer?

Producer/Consumer Code (same code as above)
Can we do mutex.P(); then emptyBuffers.P(); instead? No. The consumer could block on mutex.P() and never V() emptyBuffers, so the producer waits forever! This is called deadlock (more later!).

Producer/Consumer Code (same code as above)
Can we do emptyBuffers.V(); then mutex.V(); instead, in the consumer? Still correct, possibly less efficient.

Producer/Consumer: Discussion
Producer: emptyBuffers.P(), fullBuffers.V()
Consumer: fullBuffers.P(), emptyBuffers.V()
Two consumers or two producers? Same code!
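
For reference, here is a runnable POSIX version of the same bounded buffer (a sketch under my own assumptions: a fixed-size ring buffer of ints and no error checking).

#include <semaphore.h>

#define NUM_BUFFERS 8

static int buffer[NUM_BUFFERS];
static int in = 0, out = 0;

static sem_t fullBuffers;    /* init 0: items available to consume         */
static sem_t emptyBuffers;   /* init NUM_BUFFERS: free slots for producers */
static sem_t mutex;          /* init 1: protects buffer, in, out           */

void pc_init(void) {
    sem_init(&fullBuffers, 0, 0);
    sem_init(&emptyBuffers, 0, NUM_BUFFERS);
    sem_init(&mutex, 0, 1);
}

void producer(int item) {
    sem_wait(&emptyBuffers);          /* wait until there is space */
    sem_wait(&mutex);                 /* wait until buffer is free */
    buffer[in] = item;
    in = (in + 1) % NUM_BUFFERS;
    sem_post(&mutex);
    sem_post(&fullBuffers);           /* tell consumers there is more data */
}

int consumer(void) {
    sem_wait(&fullBuffers);           /* wait until there is an item */
    sem_wait(&mutex);                 /* wait until buffer is free   */
    int item = buffer[out];
    out = (out + 1) % NUM_BUFFERS;
    sem_post(&mutex);
    sem_post(&emptyBuffers);          /* tell the producer there is space */
    return item;
}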

Problems with Semaphores
Our textbook doesn't like semaphores: "Our view is that programming with locks and condition variables is superior to programming with semaphores."
Arguments:
It is clearer to have separate constructs for waiting for a condition to become true and for only allowing one thread to manipulate something at a time.
You need to make sure some thread calls P() for every V(); other interfaces let you be sloppier (later).

Outline
Hardware support for locks and spin locks
Semaphores
Monitors

Monitors and Condition Variables
Locks for mutual exclusion; condition variables for waiting.
A monitor is a lock plus zero or more condition variables, with some associated data and operations.
Java provides this natively.
POSIX threads provide locks and condition variables; you build your own monitor.

Condition Variables
A condition variable is a queue of threads waiting for something inside a critical section.
Operations:
Wait(&lock): atomically release the lock and go to sleep; re-acquire the lock before returning.
Signal(): wake up one waiter (if there is one).
Broadcast(): wake up all waiters.
Rule: hold the lock when using the condition variable.

Monitor Example: Queue

Lock lock;
Condition dataready;
Queue queue;

AddToQueue(item) {
  lock.Acquire();        // Get lock
  queue.enqueue(item);   // Add item
  dataready.signal();    // Signal a waiter, if any
  lock.Release();        // Release lock
}

RemoveFromQueue() {
  lock.Acquire();             // Get lock
  while (queue.isEmpty()) {
    dataready.wait(&lock);    // If nothing, sleep
  }
  item = queue.dequeue();     // Get next item
  lock.Release();             // Release lock
  return(item);
}

Why the loop? (1)

while (queue.isEmpty()) {
  dataready.wait(&lock);   // If nothing, sleep
}

and not:

if (queue.isEmpty()) {
  dataready.wait(&lock);   // If nothing, sleep
}

When a thread is woken up by signal(), it is just put on the ready queue; it might not re-acquire the lock immediately!
Another thread could "sneak in" and empty the queue.
Consequence: we need a loop.

Why the loop? (2)
Couldn't we "hand off" the lock to the signaled thread so nothing can sneak in?
Yes. These are called Hoare-style monitors, and they are what many textbooks describe.
Most OSes implement Mesa-style monitors: other threads are allowed to "sneak in", which is much easier to implement.
Even easier if you allow "spurious wakeups" (wait returning when nothing was signaled, in rare cases). POSIX does this.
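
The queue monitor above translates directly to POSIX locks and condition variables. Below is a self-contained sketch (my own variable names; a small fixed-size integer buffer with overflow handling omitted). Note the while loop around pthread_cond_wait: Mesa-style wakeups and POSIX's spurious wakeups both require rechecking the condition after waking.

#include <pthread.h>

#define CAP 16
static int buf[CAP];
static int head = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t dataready = PTHREAD_COND_INITIALIZER;

void AddToQueue(int item) {
    pthread_mutex_lock(&lock);                /* get lock */
    buf[(head + count) % CAP] = item;         /* add item (assumes space) */
    count++;
    pthread_cond_signal(&dataready);          /* signal a waiter, if any */
    pthread_mutex_unlock(&lock);              /* release lock */
}

int RemoveFromQueue(void) {
    pthread_mutex_lock(&lock);                /* get lock */
    while (count == 0)                        /* loop, not if */
        pthread_cond_wait(&dataready, &lock); /* releases lock while asleep */
    int item = buf[head];                     /* get next item */
    head = (head + 1) % CAP;
    count--;
    pthread_mutex_unlock(&lock);              /* release lock */
    return item;
}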

Comparing High-Level Synchronization
Semaphores can implement locks:
Acquire() { semaphore.P(); }
Release() { semaphore.V(); }
Monitors are a superset of locks.
Can monitors implement semaphores?

Semaphores with Monitors

Lock lock;
int count = initial value of semaphore;
CondVar atOne;

P() {
  lock.Acquire();
  while (count == 0) {
    atOne.wait(&lock);
  }
  count--;
  lock.Release();
}

V() {
  lock.Acquire();
  count++;
  if (count == 1) {
    atOne.signal();
  }
  lock.Release();
}
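
A runnable pthreads version of this construction is sketched below (my own names). One deliberate difference: it signals on every V() rather than only when the count reaches 1; with Mesa-style wakeups the while loop rechecks the count anyway, and signaling unconditionally avoids a lost wakeup if two V()s arrive before a woken waiter gets to run.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t nonzero;
    int count;
} msem_t;

void msem_init(msem_t *s, int initial) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = initial;
}

void msem_P(msem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void msem_V(msem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);   /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}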

Comparing High-Level Synchronization
Semaphores can implement locks:
Acquire() { semaphore.P(); }
Release() { semaphore.V(); }
Can monitors implement semaphores? Yes.
Can semaphores implement monitors?

CVs with Semaphores: Attempt 1

Wait(Lock lock) {
  lock.Release();
  semaphore.P();
  lock.Acquire();
}

Signal() {
  semaphore.V();
}

Problem: a Signal() with no waiter is remembered by the semaphore, so it "signals" threads that only call Wait() in the future.
What about Broadcast()?

CVs with Semaphores: Attempt 2

Wait(Lock lock) {
  lock.Release();
  semaphore.P();
  lock.Acquire();
}

Signal() {
  Atomically {
    if (semaphore queue is not empty)
      semaphore.V();
  }
}

Problem: "is the queue empty?" isn't a semaphore operation.
There is actually a solution; see the textbook (for Hoare scheduling).

CVs with Semaphores: There Is Actually a Solution!
Trick: keep a queue of semaphores, protected by a semaphore acting as a lock.
Each waiting thread has its own semaphore.
Signal() removes one waiter's semaphore from the queue and calls V() on it.
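
A sketch of this trick with POSIX semaphores is below (Mesa-style semantics, a fixed-size LIFO waiter list for brevity, my own names, and no error handling; the textbook's Hoare-scheduling version is more involved). Each waiter creates a semaphore initialized to 0, enqueues it, drops the monitor lock, and sleeps on its own semaphore; Signal() pops one waiter and V()s that semaphore. Broadcast() would simply V() every queued semaphore.

#include <semaphore.h>
#include <pthread.h>

#define MAX_WAITERS 64

typedef struct {
    sem_t guard;                    /* binary semaphore protecting the list */
    sem_t *waiters[MAX_WAITERS];    /* per-thread semaphores of waiters     */
    int nwaiters;
} cv_t;

void cv_init(cv_t *cv) {
    sem_init(&cv->guard, 0, 1);
    cv->nwaiters = 0;
}

void cv_wait(cv_t *cv, pthread_mutex_t *lock) {
    sem_t mine;
    sem_init(&mine, 0, 0);
    sem_wait(&cv->guard);
    cv->waiters[cv->nwaiters++] = &mine;   /* enqueue self */
    sem_post(&cv->guard);
    pthread_mutex_unlock(lock);            /* release monitor lock */
    sem_wait(&mine);                       /* sleep until signaled */
    pthread_mutex_lock(lock);              /* re-acquire before returning */
    sem_destroy(&mine);
}

void cv_signal(cv_t *cv) {
    sem_wait(&cv->guard);
    if (cv->nwaiters > 0)                  /* "is the queue empty?" is now our own check */
        sem_post(cv->waiters[--cv->nwaiters]);
    sem_post(&cv->guard);
}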

Comparing High-Level Synchronization
Semaphores can implement locks:
Acquire() { semaphore.P(); }
Release() { semaphore.V(); }
Monitors are a superset of locks.
Can monitors implement semaphores? Yes.
Can semaphores implement monitors? Yes.

Summary: Lock Implementation
Hardware support for locks:
Interrupt disabling: suitable for a single processor.
Atomic read/modify/write: spin locks for multiprocessors.
Blocking threads nicely:
Move waiting threads to a queue for each lock.
Switch to a new thread atomically (no chance of missing a wakeup).

Summary: High-Level Sync
Semaphores: integers with a restricted interface.
P() / down: wait until non-zero, then decrement.
V() / up: increment and wake a sleeping task.
Monitors: a lock plus condition variables.
Shared state determines when to wait.
Condition variables let threads wait "within the critical section" for shared state to change: Wait(), Signal(), Broadcast().