Introduction to Operating Systems


Introduction to Operating Systems CPSC/ECE 3220 Summer 2018 Lecture Notes OSPP Chapter 5 – Part B (adapted by Mark Smotherman from Tom Anderson’s slides on OSPP web site)

Roadmap

Implementing Synchronization

Take 1: using memory load/store
  - See the too-much-milk solution / Peterson's algorithm
Take 2: disable interrupts
  - Lock::acquire() { disableInterrupts(); }
  - Lock::release() { enableInterrupts(); }

Lock Implementation, Uniprocessor

Lock::acquire() {
    disableInterrupts();
    if (value == BUSY) {
        waiting.add(myTCB);
        myTCB->state = WAITING;
        next = readyList.remove();
        switch(myTCB, next);
        myTCB->state = RUNNING;
    } else {
        value = BUSY;
    }
    enableInterrupts();
}

Lock::release() {
    disableInterrupts();
    if (!waiting.Empty()) {
        next = waiting.remove();
        next->state = READY;
        readyList.add(next);
    } else {
        value = FREE;
    }
    enableInterrupts();
}

Multiprocessor: Read-Modify-Write Instructions

Atomically read a value from memory, operate on it, and then write it back to memory; intervening accesses are prevented in hardware. Examples:
  - Test and Set
  - Exchange (Intel: xchgb, with lock prefix to make atomic)
  - Compare and Swap

Any of these can be used for implementing locks and condition variables!

Spinlocks

A spinlock is a lock where the processor waits in a loop for the lock to become free.
  - Assumes the lock will be held for a short time
  - Used to protect the CPU scheduler and to implement locks

Spinlock::acquire() {
    while (TestAndSet(&lockValue) == BUSY)
        ;
}

Spinlock::release() {
    lockValue = FREE;
    memorybarrier();
}

How Many Spinlocks?

Various data structures need protection:
  - Queue of waiting threads on lock X
  - Queue of waiting threads on lock Y
  - List of threads ready to run

One spinlock per kernel? Bottleneck!

Instead:
  - One spinlock per lock
  - One spinlock for the scheduler ready list
  - Perhaps per-core ready lists: one spinlock per core

Lock Implementation, Multiprocessor

Lock::acquire() {
    disableInterrupts();
    spinLock.acquire();
    if (value == BUSY) {
        waiting.add(myTCB);
        suspend(&spinLock);
    } else {
        value = BUSY;
        spinLock.release();
    }
    enableInterrupts();
}

Lock::release() {
    disableInterrupts();
    spinLock.acquire();
    if (!waiting.Empty()) {
        next = waiting.remove();
        scheduler->makeReady(next);
    } else {
        value = FREE;
    }
    spinLock.release();
    enableInterrupts();
}

Notes: this assumes that suspend() releases the spinlock once it is safe to do so, which is why acquire() releases the spinlock itself only on the uncontended path. Also note that the scheduler is protected by a different spinlock. myTCB is shorthand for whatever machine-dependent way the kernel finds the current TCB (see the later slide on finding the running thread).

Lock Implementation, Multiprocessor

Sched::suspend(SpinLock *lock) {
    TCB *next;
    disableInterrupts();
    schedSpinLock.acquire();
    lock->release();
    myTCB->state = WAITING;
    next = readyList.remove();
    thread_switch(myTCB, next);
    myTCB->state = RUNNING;
    schedSpinLock.release();
    enableInterrupts();
}

Sched::makeReady(TCB *thread) {
    disableInterrupts();
    schedSpinLock.acquire();
    readyList.add(thread);
    thread->state = READY;
    schedSpinLock.release();
    enableInterrupts();
}

What Thread Is Currently Running?

The thread scheduler needs to find the TCB of the currently running thread:
  - To suspend it and switch to a new thread
  - To check whether the current thread holds a lock before acquiring or releasing it

On a uniprocessor this is easy: just use a global variable.

On a multiprocessor there are various methods:
  - The compiler dedicates a register (e.g., r31 points to the TCB running on this CPU; each CPU has its own r31)
  - If the hardware has a special per-processor register, use it
  - With fixed-size, aligned stacks, put a pointer to the TCB at the bottom of each stack and find it by masking the current stack pointer

Lock Implementation, Linux

Most locks are free most of the time. (Why?) The Linux implementation takes advantage of this fact.

Fast path:
  - If the lock is FREE and no one is waiting: two instructions to acquire the lock
  - If no one is waiting: two instructions to release the lock
Slow path:
  - If the lock is BUSY or someone is waiting: use the multiprocessor implementation
User-level locks:
  - Fast path: acquire the lock using test&set
  - Slow path: system call to the kernel, use a kernel lock

Lock Implementation, Linux

struct mutex {
    /* 1: unlocked; 0: locked; negative: locked, possible waiters */
    atomic_t         count;
    spinlock_t       wait_lock;
    struct list_head wait_list;
};

// atomic decrement; %eax is a pointer to count
lock decl (%eax)
jns 1f                  // jump if not signed (i.e., if value is now 0)
call slowpath_acquire
1:

Semaphores

A semaphore has a non-negative integer value:
  - P() atomically waits for the value to become > 0, then decrements it
  - V() atomically increments the value (waking up a waiter if needed)

Semaphores are like integers, except:
  - The only operations are P and V
  - The operations are atomic
  - If the value is 1, two P's will result in value 0 and one waiter

Semaphores are useful for unlocked wait: interrupt handlers, fork/join.

Bounded Buffer with Locks/CVs

get() {
    lock.acquire();
    while (front == tail) {
        empty.wait(&lock);
    }
    item = buf[front % MAX];
    front++;
    full.signal(&lock);
    lock.release();
    return item;
}

put(item) {
    lock.acquire();
    while ((tail - front) == MAX) {
        full.wait(&lock);
    }
    buf[tail % MAX] = item;
    tail++;
    empty.signal(&lock);
    lock.release();
}

Notes: Initially front = tail = 0; MAX is the buffer capacity; there are two CVs, empty and full. For simplicity, assume no wraparound on the integers front and tail (you can fix that if you want). The lock is acquired at the beginning of each procedure and released at the end; there is no access to the buffer outside these procedures. Walk through an example.

Bounded Buffer with Semaphores

get() {
    fullSlots.P();
    mutex.P();
    item = buf[front % MAX];
    front++;
    mutex.V();
    emptySlots.V();
    return item;
}

put(item) {
    emptySlots.P();
    mutex.P();
    buf[last % MAX] = item;
    last++;
    mutex.V();
    fullSlots.V();
}

Notes: Initially front = last = 0; MAX is the buffer capacity; there are three semaphores: mutex = 1, emptySlots = MAX, fullSlots = 0.
  - Why does the producer P and V different semaphores than the consumer? The producer creates full slots and destroys empty slots!
  - Is the order of the P's important? Yes! Reversing them can deadlock.
  - Is the order of the V's important? No, except that it can affect scheduling efficiency.
  - What if we have 2 producers or 2 consumers? Do we need to change anything?
  - Can we use semaphores for FIFO ordering?

Compare the Semaphore P()/V() Implementation with Locks/CVs

Semaphore::P() {
    disableInterrupts();
    spinLock.acquire();
    if (value == 0) {
        waiting.add(myTCB);
        suspend(&spinLock);
    } else {
        value--;
        spinLock.release();
    }
    enableInterrupts();
}

Semaphore::V() {
    disableInterrupts();
    spinLock.acquire();
    if (!waiting.Empty()) {
        next = waiting.remove();
        scheduler->makeReady(next);
    } else {
        value++;
    }
    spinLock.release();
    enableInterrupts();
}

Notes: as with the lock implementation, this assumes that suspend() releases the spinlock once it is safe to do so, which is why P() releases the spinlock itself only on the uncontended path. The scheduler is protected by a different spinlock, and myTCB is shorthand for whatever machine-dependent way the kernel finds the current TCB.

Remember the Rules

Use consistent structure:
  - Always use locks and condition variables
  - Always acquire the lock at the beginning of a procedure and release it at the end
  - Always hold the lock when using a condition variable
  - Always wait in a while loop
  - Never spin in sleep()

(if time permits)

Communicating Sequential Processes (CSP / Google Go)

One thread per shared object:
  - Only that thread is allowed to touch the object's data
  - To call a method on the object, send the thread a message with the method name and arguments
  - The thread waits in a loop: get a message, do the operation

No memory races!

Locks/CVs vs. CSP

  - Creating a lock on shared data = creating a single thread to operate on the data
  - Calling a method on a shared object = sending a message and waiting for the reply
  - Waiting for a condition = queueing an operation that can't be completed just yet
  - Signaling a condition = performing a queued operation, now enabled

[ bounded_buffer || producer || consumer ]

producer :: *[ <produce item>; bounded_buffer ! item ]

consumer :: *[ bounded_buffer ? item; <consume item> ]

bounded_buffer ::
    buffer: (0..9) item;
    count, in, out: integer;
    count := 0; in := 0; out := 0;
    *[ count < 10 & producer ? buffer(in) ->
           in := (in + 1) mod 10; count := count + 1
     || count > 0 & consumer ! buffer(out) ->
           out := (out + 1) mod 10; count := count - 1
    ]

Notation: || is concurrent execution; *[ ] is repetition; ! is send; ? is receive; -> marks a guarded statement.

Implementing Condition Variables Using Semaphores (Take 1)

wait(lock) {
    lock.release();
    semaphore.P();
    lock.acquire();
}

signal() {
    semaphore.V();
}

Notes: condition variables have no history, but semaphores do.
  - What if a thread signals and no one is waiting? No-op. What if a thread later calls wait? The thread waits.
  - What if a thread V's and no one is waiting? Increment. What if a thread later does P? Decrement and continue.
In other words, P and V are commutative: the result is the same no matter what order they occur in. Condition variables are not commutative; wait doesn't return until the next signal. That's why they must be used inside a critical section: they need access to state variables to do their job. With monitors, if I signal 15,000 times when no one is waiting, the next wait will still go to sleep! But with the code above, the next 15,000 threads that wait will return immediately!

Implementing Condition Variables Using Semaphores (Take 2)

wait(lock) {
    lock.release();
    semaphore.P();
    lock.acquire();
}

signal() {
    if (semaphore is not empty)
        semaphore.V();
}

Notes: does this work? For one thing, it is not legal to look at the contents of a semaphore's queue. But there is also a race condition: the signaler can slip in after the lock is released and before the wait, and then the waiter never wakes up! Alternatively, suppose we put the thread on a separate condition-variable queue, then release the lock, then P. That won't work either: a third thread can come in and try to wait, gobbling up the V, so that the original waiter never wakes up! We need to release the lock and go to sleep atomically.

Implementing Condition Variables Using Semaphores (Take 3)

wait(lock) {
    semaphore = new Semaphore;
    queue.Append(semaphore);    // queue of waiting threads
    lock.release();
    semaphore.P();
    lock.acquire();
}

signal() {
    if (!queue.Empty()) {
        semaphore = queue.Remove();
        semaphore.V();          // wake up one waiter
    }
}

Notes: this one works. Each waiter gets its own semaphore and is queued while still holding the lock, so a signal that arrives between lock.release() and semaphore.P() is remembered by that waiter's private semaphore: the wakeup cannot be lost, and no third thread can steal it.