Lecture 8 CS 111 Online

Implementing Locks
Create a synchronization object
– Associate it with a critical section
– Of a size that an atomic instruction can manage
Lock the object to seize the critical section
– If the critical section is free, the lock operation succeeds
– If the critical section is already in use, the lock operation fails
   It may fail immediately
   It may block until the critical section is free again
Unlock the object to release the critical section
– Subsequent lock attempts can now succeed
– May unblock a sleeping waiter

Using Atomic Instructions to Implement a Lock
Assuming a C implementation of test-and-set:

void freelock( lock *lockp ) {
    *lockp = 0;
}

bool getlock( lock *lockp ) {
    if (TS(lockp) == 0)
        return( TRUE );
    else
        return( FALSE );
}

Associating the Lock With a Critical Section
Assuming the same lock as in the last example:

while (!getlock(crit_section_lock)) {
    yield();                /* or spin on lock */
}
critical_section();         /* access critical section */
freelock(crit_section_lock);

Remember, while you’re in the critical section, no one else will be able to get the lock
− Better not stay there too long
− And definitely don’t go into an infinite loop

Where Do We Put the Locking?
Objects A and B share a critical section
A and B are called by C and D, respectively
Who locks, A/B or C/D?
A/B – down low, in the detailed implementation
– Object-oriented modularity recommends this
– Locking is part of the implementation
C/D – up high, in the more semantically meaningful calling code
– Locking needs may depend on how the object is used
– One logical transaction may span many method calls
– In such cases, only the caller knows the start, end, and scope
What happens if we lock both high and low?

Advisory vs. Enforced Locking
Enforced locking
– Happens whether or not the caller wants it
– Done within the implementation of object methods
Advisory locking
– A convention that “good guys” are expected to follow
– Users expected to lock object before calling methods
Enforced locking is guaranteed to happen
– It may sometimes be excessively conservative
Advisory locking allows users more flexibility
– Including the flexibility to do it wrong (or not at all)
Would it ever make sense for a given lock to be enforced for some threads and advisory for other threads? What would the implications be?

Criteria for Correct Locking
How do we know if a locking mechanism is correct? Four desirable criteria:
1. Correct mutual exclusion
   Only one thread at a time has access to the critical section
2. Progress
   If the resource is available, and someone wants it, they get it
3. Bounded waiting time
   No indefinite waits; guaranteed eventual service
4. And (ideally) fairness
   E.g., FIFO ordering

Asynchronous Completion
The second big problem with parallelism
– How to wait for an event that may take a while
– Without wasteful spins/busy-waits
Examples of asynchronous completions
– Waiting for a held lock to be released
– Waiting for an I/O operation to complete
– Waiting for a response to a network request
– Delaying execution for a fixed period of time

Using Spin Waits to Solve the Asynchronous Completion Problem
Thread A needs something from thread B
– Like the result of a computation
Thread B isn’t done yet
Thread A stays in a busy loop waiting
Sooner or later thread B completes
Thread A exits the loop and makes use of B’s result
Definitely provides correct behavior, but...

Well, Why Not?
Waiting serves no purpose for the waiting thread
– “Waiting” is not a “useful computation”
Spin waits reduce system throughput
– Spinning consumes CPU cycles
– These cycles can’t be used by other threads
– It would be better for the waiting thread to “yield”
They are actually counter-productive
– Spinning delays the very thread that will post the completion
– Memory traffic from spinning slows I/O and other processors

Another Solution: Completion Blocks
Create a synchronization object
– Associate that object with a resource or request
Requester blocks awaiting an event on that object
– Yields the CPU until the awaited event happens
Upon completion, the event is “posted”
– The thread that notices/causes the event posts the object
Posting the event to the object unblocks the waiter
– The requester is dispatched, and processes the event

Blocking and Unblocking
Exactly as discussed in the scheduling lecture
Blocking
– Remove the specified process from the “ready” queue
– Yield the CPU (let the scheduler run someone else)
Unblocking
– Return the specified process to the “ready” queue
– Inform the scheduler of the wakeup (possible preemption)
The only trick is arranging to be unblocked
– Because it is so embarrassing to sleep forever

Unblocking and Synchronization Objects
Easy if only one thread is blocked on the object
If multiple threads are blocked, who should we unblock?
– Everyone who is blocked?
– One waiter, chosen at random?
– The next thread in line on a FIFO queue?
Depends on the resource
– Can multiple threads use it concurrently?
– If not, waking multiple threads is wasteful
Depends on policy
– Should scheduling priority be used?
– Consider the possibility of starvation

The Thundering Herd Problem
What if a large number of threads are blocked on a single resource?
When the thread holding that resource releases it, you wake them all up
They contend for the resource, and one gets it
The rest get put back to sleep
And eventually it happens again
But waking and sleeping many threads is itself expensive
If this happens a lot, performance suffers

A Possible Problem: the Sleep/Wakeup Race Condition
Consider this sleep code:

void sleep( eventp *e ) {
    while (e->posted == FALSE) {
        add_to_queue( &e->queue, myproc );
        myproc->runstate |= BLOCKED;
        yield();
    }
}

And this wakeup code:

void wakeup( eventp *e ) {
    struct proc *p;
    e->posted = TRUE;
    p = get_from_queue( &e->queue );
    if (p) {
        p->runstate &= ~BLOCKED;
        resched();
    } /* if !p, nobody’s waiting */
}

What’s the problem with this?

A Sleep/Wakeup Race
Let’s say thread B is using a resource and thread A needs to get it
– So thread A will call sleep()
Meanwhile, thread B finishes using the resource
– So thread B will call wakeup()
No other threads are waiting for the resource

The Race At Work
Thread A, in sleep(), checks the flag:
    while (e->posted == FALSE) {      /* yep, somebody’s locked it! */
CONTEXT SWITCH!
Thread B runs wakeup() to completion:
    e->posted = TRUE;
    p = get_from_queue( &e->queue );  /* nope, nobody’s in the queue! */
    if (p) { ... }                    /* p is NULL, so nobody is woken */
Thread A resumes where it left off:
        add_to_queue( &e->queue, myproc );
        myproc->runstate |= BLOCKED;
        yield();
The effect? Thread A is sleeping, but there’s no one to wake him up

Solving the Problem
There is clearly a critical section in sleep()
– Starting before we test the posted flag
– Ending after we put ourselves on the notify list
During this section, we need to prevent
– Wakeups of the event
– Other people waiting on the event
This is a mutual-exclusion problem
– Fortunately, we already know how to solve those

Synchronization Objects
Combine mutual exclusion and (optional) waiting
Operations implemented safely
– With atomic instructions
– With interrupt disables
Exclusion policies (one-only, read-write)
Waiting policies (FCFS, priority, all-at-once)
Additional operations (queue length, revoke)

Lock Contention
The riddle of parallel multi-tasking:
– If one task is blocked, the CPU runs another
– But concurrent use of shared resources is difficult
– Critical sections serialize tasks, eliminating parallelism
What if everyone needs to share one resource?
– One process gets the resource
– Other processes get in line behind it
– Parallelism is eliminated; B runs after A finishes
– That resource becomes a bottleneck

What If It Isn’t That Bad?
Say each thread is only somewhat likely to need the resource
Consider the following system
– Ten processes, each runs once per second
– One resource they all use 5% of the time (5 ms/sec)
– Half of all time slices end with a preemption
Chances of preemption while in the critical section
– Per slice: 2.5%; per second: 22%; over 10 seconds: 92%
Chances a second process will need the resource
– 5% in the next time slice, 37% in the next second
But once this happens, a line forms

Resource Convoys
All processes regularly need the resource
– But now there is a waiting line
– Nobody can “just use the resource”; everyone must get in line
The delay becomes much longer
– We don’t just wait a few microseconds until the resource is free
– We must wait until everyone in front of us finishes
– And while we wait, more people get into the line
Delays rise, throughput falls, parallelism ceases
Not merely a theoretical transient response

Resource Convoy Performance
[Figure: throughput vs. offered load, comparing the ideal curve with the convoy curve]

Avoiding Contention Problems
Eliminate the critical section entirely
– Eliminate the shared resource, or use atomic instructions
Eliminate preemption during the critical section
– By disabling interrupts... not always an option
Reduce time spent in the critical section
– Minimize the amount of code in the critical section
– Reduce the likelihood of blocking in the critical section
Reduce the frequency of critical section entry
– Reduce use of the serialized resource
– Spread requests out over more resources

An Approach Based on Smarter Locking
Reads and writes are not equally common
– File read/write: read:write ratio often > 50:1
– Directory search/create: read:write ratio often > 1000:1
Writers generally need exclusive access
Multiple readers can generally share a resource
Read/write locks
– Allow many readers to share a resource
– Only enforce exclusivity when a writer is active

Lock Granularity
How much should one lock cover?
– One object or many
– Important performance and usability implications
Coarse-grained: one lock for many objects
– Simpler, and more idiot-proof
– Results in greater resource contention
Fine-grained: one lock per object
– Spreading activity over many locks reduces contention
– Time/space overhead: more locks, more gets/releases
– Error-prone: harder to decide what to lock when
– Some operations may require locking multiple objects (which creates a potential for deadlock)

Lock Granularity: Pools vs. Elements
Consider a pool of objects, each with its own lock
Most operations lock only one buffer within the pool
Some operations require locking the entire pool
– Two threads both try to add block AA to the cache
– Thread 1 looks for block B while thread 2 is deleting it
The pool lock could become a bottleneck
– Minimize its use; consider reader/writer locking, sub-pools...
[Figure: a pool of file system cache buffers, A through E, each with its own lock]