Introduction to Concurrency: Synchronization Mechanisms


Mutual Exclusion
- The ability to allow only a single process access to a critical section at any given time.
- A process remains inside the critical section for a finite time only.
- A process is not delayed if no other process is in the critical section.
- This ability is never interrupted (Thread.interrupt() will not affect it).
- Can be implemented using software or hardware:
  - Locks – OS level (software): Mutex, Semaphore, ReadWriteLock, spin-locks
  - Atomic operations – hardware level (CPU instructions)

Process States
Blocked = placed in the ready queue (state diagram).

Monitors
Programming language constructs that control access to shared data.
Monitors contain:
- Lock – allows mutual exclusion
- Condition variables – allow proper scheduling
Monitors ensure:
- Shared data structure – protects the data from incorrect concurrent access or modification.
- Procedures – ensures procedures do not conflict when run concurrently.
- Synchronization – enforces synchronization between concurrent procedure invocations. In other words: allows mutual exclusion.
Example: Java "synchronized" keyword (a minimal sketch follows below).
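
A minimal sketch of the monitor idea using Java's synchronized keyword, the example the slide names. The BankAccount class and its field are illustrative, not taken from the slides: every synchronized method acquires the object's intrinsic lock, so the shared balance is protected from concurrent modification.

import java.util.concurrent.ThreadLocalRandom;

class BankAccount {
    private long balance = 0;            // shared data protected by the monitor

    // synchronized acquires this object's intrinsic lock on entry
    // and releases it on return, so deposits never interleave.
    public synchronized void deposit(long amount) {
        balance += amount;
    }

    public synchronized long getBalance() {
        return balance;
    }
}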

Hoare: Monitor Styles
- The notifying thread calls notify() and releases the lock (it gets suspended).
- The notified (waiting) thread acquires the lock immediately.
- The lock is transferred directly from the notifying thread to the notified thread.
- The notified thread may assume the condition is met!
- Once done, it returns the lock to the thread that initiated notify().

Mesa: Monitor Styles
- The notifying thread calls notify() and keeps the lock.
- The notified thread is blocked (placed in the ready queue) until it acquires the lock.
- It competes with other notified threads for the lock.
- Once the lock is acquired, the thread checks whether the condition is met:
  - If met, it executes the critical section and then releases the lock.
  - If not, it goes back to waiting, releasing the lock.
Because the lock is not handed over, the awakened thread must re-check the condition in a loop (see the sketch below).
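
A minimal sketch of the Mesa-style re-check loop using Java's built-in wait()/notifyAll(), which follow Mesa semantics. The Mailbox class and its methods are illustrative: an awakened thread only becomes runnable and must re-acquire the lock, so the condition is tested again inside a while loop before it proceeds.

class Mailbox {
    private Object item = null;

    public synchronized void put(Object o) {
        item = o;
        notifyAll();                     // wake waiters; they re-check the condition themselves
    }

    public synchronized Object take() throws InterruptedException {
        while (item == null) {           // Mesa style: loop, never a plain "if"
            wait();                      // releases the lock while waiting
        }
        Object o = item;
        item = null;
        return o;
    }
}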

Mutex
- A special object that is owned by a single thread at a time.
- Used to provide mutual exclusion for critical sections, by using wait()/notify():
  - wait() – causes the thread to enter wait mode.
  - notify() – "awakens" a thread and moves it from wait to blocked mode.
  - The thread moves to running mode once it acquires the lock.
- A thread may only unlock what it has itself locked.
- If locking fails (the mutex is already held), the requesting thread goes to wait mode.

C++14 Mutex Example

#include <iostream>
#include <vector>
#include <string>
#include <chrono>
#include <thread>
#include <mutex>

int ctr = 0;
std::mutex mutex;

void increment(int n) {
    int i = 0;
    while (i < 10) {
        mutex.lock();    // entering critical section
        ctr++;
        mutex.unlock();  // leaving critical section
        i++;
    }
}

What is the output result?

int main() {
    std::vector<std::thread> v;
    for (int n = 0; n < 10; ++n) {
        v.emplace_back(increment, n);  // runs constructor of std::thread
    }
    for (auto& t : v) {
        t.join();
    }
    std::cout << ctr << std::endl;
}

Semaphore
- An integer used to keep track of the resources available among processes.
- Allows limited reading/writing access, depending on its integer value.
- What about a binary semaphore? It acts like a mutex!
- The API contains three operations: initialize, decrement (acquire), increment (release).
- For the binary case:
  - wait(): when value = 1, decrement() is called.
  - notify(): when value = 0, increment() is called.
A counting (non-binary) usage sketch follows below.
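
A hedged sketch of the counting case, assuming java.util.concurrent.Semaphore (the class used in the binary example later): three permits allow at most three threads to use the shared resource at once. The ConnectionPool name and method are illustrative.

import java.util.concurrent.Semaphore;

class ConnectionPool {
    // At most 3 threads may "use a connection" at the same time.
    private final Semaphore permits = new Semaphore(3);

    public void useConnection() throws InterruptedException {
        permits.acquire();               // decrement: blocks if no permit is free
        try {
            // ... work with the shared resource ...
        } finally {
            permits.release();           // increment: frees a permit for a waiting thread
        }
    }
}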

Pseudo Code: Semaphore

Semaphore(max, fifo)
    Creates a new semaphore with the given maximum value and fifo flag.
    If fifo is true, threads block in a FIFO queue, so a release always wakes
    the thread blocked the longest; otherwise they block in a set and a release
    wakes an arbitrary blocked thread.

acquire()
    atomically {
        if (value > 0) value--;
        else block on s
    }

release()
    atomically {
        if (there are threads blocked on s)
            wake one of them
        else if (value == MAX)   // optional
            fail
        else
            value++
    }

Java8 Binary Semaphore Example

import java.util.concurrent.Semaphore;

class Counter {
    int ctr = 0;
    Semaphore sem = new Semaphore(1);

    public void inc() throws InterruptedException {
        sem.acquire();   // enter critical section
        ctr++;
        sem.release();   // leave critical section
    }

    public String toString() {
        return Integer.toString(ctr);
    }
}

What is the output result?

public static void main(String[] args) throws InterruptedException {
    Counter ctr = new Counter();
    for (int i = 0; i < 10; i++) {
        Runnable r = () -> {
            try {
                for (int j = 0; j < 10; j++) ctr.inc();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(r).start();
    }
    Thread.sleep(500);   // main does not wait for the threads
    System.out.println(ctr.toString());
}

Atomic Operations: Hardware Mutual Exclusion
- An instruction implemented at the hardware level (a CPU instruction).
- An atomic operation works by locking the affected memory address in the CPU's (shared L3) cache:
  - The CPU acquires the memory address exclusively in its cache.
  - It does not permit any other CPU to acquire or share that address until the operation completes.
  - If the value is in the cache, other CPUs will not access the original memory address; they access the cached copy instead.
- Cannot be interrupted!

Hardware Instruction Example

bool test_and_set(bool *lock) {
    bool oldval = *lock;   // test the lock by reading its value
    *lock = true;          // set the lock
    return oldval;
}

Test-and-Set: the entire function runs atomically.
- Tests the lock – by returning its value.
- Sets the lock.
- If the returned value is false, the lock has been obtained.
- If the returned value is true, the lock is already in use.

Spinlocks

#include <vector>
#include <thread>
#include <iostream>

// test_and_set as defined on the previous slide
bool flag = false;
int counter = 0;

void increment(int n) {
    int i = 0;
    while (i < 10) {
        while (test_and_set(&flag));   // spinlock: busy-wait until the lock is acquired
        counter++;                     // critical section
        flag = false;                  // release the lock
        i++;
    }
}

int main() {
    std::vector<std::thread> v;
    for (int n = 0; n < 10; ++n) {
        v.emplace_back(increment, n);  // runs ctor of std::thread
    }
    for (auto& t : v) {
        t.join();
    }
    std::cout << counter << std::endl;
}

- Uses 'busy-waiting' until the lock is acquired – suitable only for short periods.
- Threads do not go to wait mode!
- As long as test_and_set returns true, we are stuck in the loop.
- Once a false value is returned, we run the critical section.
- Once done, we change the lock value back to false.

C++14 Spinlocks
- C++ has built-in support for atomic instructions.
- We can use test_and_set to build a spinlock.
- The <atomic> header provides:
  - std::atomic_flag
  - test_and_set(std::memory_order)
  - Memory orders: std::memory_order_acquire, std::memory_order_release
- See the example on the next slide.

#include <thread>
#include <vector>
#include <iostream>
#include <atomic>

std::atomic_flag lock = ATOMIC_FLAG_INIT;
int ctr = 0;

void count(int n) {
    for (int cnt = 0; cnt < 10; ++cnt) {
        while (lock.test_and_set(std::memory_order_acquire))  // acquire lock
            ;  // spin
        ctr++;
        lock.clear(std::memory_order_release);  // release lock
    }
}

What is the output result?

int main() {
    std::vector<std::thread> v;
    for (int n = 0; n < 10; ++n) {
        v.emplace_back(count, n);  // runs constructor of std::thread
    }
    for (auto& t : v) {
        t.join();
    }
    std::cout << ctr << std::endl;
}

Hardware Mutual Exclusion
Advantages:
- Works for single-core and multi-core processors sharing main memory.
- Simple implementation and simple usage.
- Can be used to support multiple critical sections.
Disadvantages:
- Atomic operations are busy-waiting and consume CPU time.
- Starvation is possible when more than one process is waiting.

Condition Variables
- Used in cases where a condition must be met before proceeding.
- Basically: a shared object between two threads, used for wait/notify.
- If the condition is not met: wait() is initiated.
  - The thread is blocked until the condition holds.
  - Once blocked, the thread must release all locks it holds.
- Once the condition is met: notify() is initiated.
  - A blocked thread un-blocks.
  - It tries to re-acquire the locks before proceeding (and might get blocked doing so!).
  - It checks whether the condition is still intact.
  - It proceeds if so, otherwise it blocks again.

C++14 Condition Variable API
#include <condition_variable>
Wait functions:
- wait (wait until notified)
- wait_for (wait for a timeout or until notified)
- wait_until (wait until notified or until a time point)
Notify functions:
- notify_one (wakes one waiting thread – undefined which!)
- notify_all (wakes all waiting threads)

Using C++14 Condition Variable
#include <condition_variable>
A thread intending to modify the shared condition must:
- Acquire the mutex associated with the condition variable.
- Perform the modification while holding the lock.
- Notify the other thread(s) after the change.
A thread intending to wait on the condition variable must:
- Acquire the same mutex.
- Wait on the condition variable. Once notified:
  - It re-acquires the lock on the mutex.
  - The condition is checked again; it proceeds only if the condition is fulfilled.

C++14 Condition Variable Example

#include <mutex>
#include <condition_variable>
#include <queue>

template <typename T>
class Queue {
    ...
    std::queue<T> queue;
    std::mutex mutex;
    std::condition_variable cond;
};

This is the core of a blocking queue (blocking when empty). The size of the queue is unlimited – there is no "full" mode.

pop()

T pop() {
    std::unique_lock<std::mutex> mlock(mutex);  // locks for the whole scope!
    while (queue.empty()) {
        cond.wait(mlock);    // releases the lock while waiting, re-acquires on wake-up
    }
    auto item = queue.front();
    queue.pop();
    return item;
}

push()

void push(const T& item) {
    std::unique_lock<std::mutex> mlock(mutex);
    queue.push(item);
    mlock.unlock();       // release before notifying
    cond.notify_one();
}

We have no ceiling for push – the queue is of unlimited size.

Java ReadWriteLock Interface
- Allows multiple readers but only a single writer at a time!
- Improves performance when there is a large amount of read access and much less write access.
- Granting "read" access:
  - No threads are writing.
  - No threads have requested write access.
- Granting "write" access:
  - No threads are reading.
  - No threads are writing.
- How is this done? Using two locks: readLock() and writeLock() (a usage sketch follows below).
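
A minimal usage sketch assuming java.util.concurrent.locks.ReentrantReadWriteLock, the standard implementation of the ReadWriteLock interface. The Cache class and its map are illustrative: many readers may hold the read lock concurrently, while a writer holds the write lock exclusively.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Cache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();            // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();           // exclusive: no readers or other writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}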

Issues with ReadWriteLock
- Wakes up all waiting threads whenever the lock becomes available.
  - Wasteful – it is enough that one reading thread wants access to stop all writers from accessing.
  - We do still want to wake up all reader threads to allow concurrent access, so notify() is not a good solution.
- No fairness: readers starve writers.
  - A constant flow of readers forces writers to wait for a long time.
  - Locks aren't handed out in the order they are requested; to solve this, a queue is required.
Open issues:
- Promoting a read lock to a write lock? Demoting a write lock to a read lock?
- How to implement promotion/demotion? Release and acquire the new kind again? Doesn't seem efficient.
- What about implementing such a feature? (One possible approach is sketched below.)
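
A hedged illustration of lock promotion, assuming Java 8's java.util.concurrent.locks.StampedLock (mentioned in the practical session below): tryConvertToWriteLock attempts to upgrade a held read lock to a write lock without releasing it first, returning 0 if the conversion fails. The Counter class, field, and method are illustrative.

import java.util.concurrent.locks.StampedLock;

class Counter {
    private final StampedLock sl = new StampedLock();
    private int value = 0;

    // Increment only if the current value is below a limit,
    // promoting the read lock to a write lock when possible.
    public void incrementIfBelow(int limit) {
        long stamp = sl.readLock();
        try {
            while (value < limit) {
                long ws = sl.tryConvertToWriteLock(stamp);
                if (ws != 0L) {           // promotion succeeded
                    stamp = ws;
                    value++;
                    break;
                } else {                  // promotion failed: fall back to a full write lock
                    sl.unlockRead(stamp);
                    stamp = sl.writeLock();
                }
            }
        } finally {
            sl.unlock(stamp);             // releases read or write lock, whichever is held
        }
    }
}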

Practical Sessions 1 & 2
- Java8 Concurrency: Guide 1, StampedLock
- C++11 Concurrency: Guide 1: Link 1, Link 2, Link 3, Link 4
- C++14 Concurrency: Guide 1
Assignment 1 (Part 1):
- C++: std::chrono, std::thread, std::thread::yield, std::condition_variable
- Java: Java7/8 mutual exclusion tools
  - StampedLock vs ReadWriteLock vs synchronized – performance? When to use what? Why? (readme!)
  - What is lock contention? (readme!)
- Java8 vs C++14 – differences in synchronization tools? Which language doesn't have what? Why?

Thank you for coming!