13/03/07, Week 2. CENG334 Introduction to Operating Systems. Erol Sahin, Dept. of Computer Eng., Middle East Technical University, Ankara, TURKEY.
1 Threads and Synchronization Topics: Using threads, Implementation of threads, The synchronization problem, Race conditions and critical sections, Mutual exclusion, Locks, Spinlocks, Mutexes

2 Single and Multithreaded Processes

3 Using Pthreads

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                   /* this data is shared by the thread(s) */
void *runner(void *param); /* the thread */

int main(int argc, char *argv[]) {
    pthread_t tid;         /* the thread identifier */
    pthread_attr_t attr;   /* set of attributes for the thread */
    pthread_attr_init(&attr);                     /* get the default attributes */
    pthread_create(&tid, &attr, runner, argv[1]); /* create the thread */
    pthread_join(tid, NULL);                      /* now wait for the thread to exit */
    printf("sum = %d\n", sum);
}

void *runner(void *param) {
    /* The thread begins control in this function */
    int i, upper = atoi(param);
    sum = 0;
    if (upper > 0) {
        for (i = 1; i <= upper; i++)
            sum += i;
    }
    pthread_exit(0);
}

Adapted from Matt Welsh’s (Harvard University) slides.

4 Synchronization Threads cooperate in multithreaded programs in several ways: Access to shared state ● e.g., multiple threads accessing a memory cache in a Web server To coordinate their execution ● e.g., Pressing stop button on browser cancels download of current page ● “stop button thread” has to signal the “download thread” For correctness, we have to control this cooperation ● Must assume threads interleave executions arbitrarily and at different rates ● scheduling is not under application’s control ● We control cooperation using synchronization ● enables us to restrict the interleaving of executions Adapted from Matt Welsh’s (Harvard University) slides.

5 Shared Resources We’ll focus on coordinating access to shared resources Basic problem: ● Two concurrent threads are accessing a shared variable ● If the variable is read/modified/written by both threads, then access to the variable must be controlled ● Otherwise, unexpected results may occur We’ll look at: ● Mechanisms to control access to shared resources ● Low-level mechanisms: locks ● Higher level mechanisms: mutexes, semaphores, monitors, and condition variables ● Patterns for coordinating access to shared resources ● bounded buffer, producer-consumer, … This stuff is complicated and rife with pitfalls ● Details are important for completing assignments ● Expect questions on the midterm/final! Adapted from Matt Welsh’s (Harvard University) slides.

6 Shared Variable Example Suppose we implement a function to withdraw money from a bank account:

int withdraw(account, amount) {
    balance = get_balance(account);
    balance = balance - amount;
    put_balance(account, balance);
    return balance;
}

Now suppose that you and your friend share a bank account with a balance of $1500 (the figures used in the interleaving slides below). ● What happens if you both go to separate ATMs and simultaneously withdraw $100 from the account? Adapted from Matt Welsh’s (Harvard University) slides.

7 Example continued We represent the situation by creating a separate thread for each ATM user doing a withdrawal ● Both threads run on the same bank server system. What’s the problem with this? ● What are the possible balance values after each thread runs?

Thread 1 and Thread 2 each execute:

int withdraw(account, amount) {
    balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    return balance;
}

Adapted from Matt Welsh’s (Harvard University) slides.

8 Interleaved Execution The execution of the two threads can be interleaved ● Assume preemptive scheduling ● Each thread can context switch after each instruction ● We need to worry about the worst-case scenario! What’s the account balance after this sequence? ● And who's happier, the bank or you???

Execution sequence as seen by the CPU:

Thread 1: balance = get_balance(account);
Thread 1: balance -= amount;
    (context switch)
Thread 2: balance = get_balance(account);
Thread 2: balance -= amount;
Thread 2: put_balance(account, balance);
    (context switch)
Thread 1: put_balance(account, balance);

Adapted from Matt Welsh’s (Harvard University) slides.

9 Interleaved Execution The execution of the two threads can be interleaved ● Assume preemptive scheduling ● Each thread can context switch after each instruction ● We need to worry about the worst-case scenario! What’s the account balance after this sequence? ● And who's happier, the bank or you???

Execution sequence as seen by the CPU (starting Balance = $1500):

Thread 1: balance = get_balance(account);   // reads $1500
Thread 1: balance -= amount;                // Local = $1400
    (context switch)
Thread 2: balance = get_balance(account);   // also reads $1500
Thread 2: balance -= amount;
Thread 2: put_balance(account, balance);    // Balance = $1400
    (context switch)
Thread 1: put_balance(account, balance);    // Balance = $1400!

Adapted from Matt Welsh’s (Harvard University) slides.

10 Race Conditions The problem is that two concurrent threads access a shared resource without any synchronization ● This is called a race condition ● The result of the concurrent access is non-deterministic ● Result depends on: ● Timing ● When context switches occurred ● Which thread ran at context switch ● What the threads were doing We need mechanisms for controlling access to shared resources in the face of concurrency ● This allows us to reason about the operation of programs ● Essentially, we want to re-introduce determinism into the thread's execution Synchronization is necessary for any shared data structure ● buffers, queues, lists, hash tables, … Adapted from Matt Welsh’s (Harvard University) slides.

11 Which resources are shared? Local variables in a function are not shared ● They exist on the stack, and each thread has its own stack ● It is not safe to pass a pointer to a local variable to another thread ● Why? Global variables are shared ● Stored in the static data portion of the address space ● Accessible by any thread Dynamically-allocated data is shared ● Stored in the heap, accessible by any thread (Figure: address-space layout — the per-thread stacks are unshared; the heap, initialized vars (data segment), uninitialized vars (BSS segment), and code (text segment) are shared; the top region is reserved for the OS.) Adapted from Matt Welsh’s (Harvard University) slides.

12 Mutual Exclusion We want to use mutual exclusion to synchronize access to shared resources ● Meaning: only one thread may access the shared resource at a time. Code that must execute under mutual exclusion is called a critical section ● Only one thread at a time can execute code in the critical section ● All other threads are forced to wait on entry ● When one thread leaves the critical section, another can enter (Figure: Thread 1 is inside the critical section, modifying the account balance.) Adapted from Matt Welsh’s (Harvard University) slides.

13 Mutual Exclusion We want to use mutual exclusion to synchronize access to shared resources ● Meaning: only one thread may access the shared resource at a time. Code that must execute under mutual exclusion is called a critical section ● Only one thread at a time can execute code in the critical section ● All other threads are forced to wait on entry ● When one thread leaves the critical section, another can enter (Figure: Thread 2 arrives while Thread 1 is in the critical section modifying the account balance; the 2nd thread must wait for the critical section to clear.) Adapted from Matt Welsh’s (Harvard University) slides.

14 Mutual Exclusion We want to use mutual exclusion to synchronize access to shared resources ● Meaning: only one thread may access the shared resource at a time. Code that must execute under mutual exclusion is called a critical section ● Only one thread at a time can execute code in the critical section ● All other threads are forced to wait on entry ● When one thread leaves the critical section, another can enter (Figure: the 1st thread leaves the critical section; the 2nd thread is now free to enter.) Adapted from Matt Welsh’s (Harvard University) slides.

15 Critical Section Requirements Mutual exclusion ● At most one thread is currently executing in the critical section Progress ● If thread T1 is outside the critical section, then T1 cannot prevent T2 from entering the critical section Bounded waiting (no starvation) ● If thread T1 is waiting on the critical section, then T1 will eventually enter the critical section ● Assumes threads eventually leave critical sections Performance ● The overhead of entering and exiting the critical section is small with respect to the work being done within it Adapted from Matt Welsh’s (Harvard University) slides.

16 Locks A lock is an object (in memory) that provides the following two operations: ● acquire(): a thread calls this before entering a critical section ● May require waiting to enter the critical section ● release(): a thread calls this after leaving a critical section ● Allows another thread to enter the critical section A call to acquire() must have a corresponding call to release() ● Between acquire() and release(), the thread holds the lock ● acquire() does not return until the caller holds the lock ● At most one thread can hold a lock at a time (usually!) ● We'll talk about the exceptions later... What can happen if acquire() and release() calls are not paired? Adapted from Matt Welsh’s (Harvard University) slides.

17 Using Locks

int withdraw(account, amount) {
    acquire(lock);
    /* --- critical section --- */
    balance = get_balance(account);
    balance -= amount;
    put_balance(account, balance);
    /* --- end critical section --- */
    release(lock);
    return balance;
}

Adapted from Matt Welsh’s (Harvard University) slides.

18 Execution with Locks What happens when the second thread tries to acquire the lock?

Thread 1: acquire(lock);                          // Thread 1 runs
Thread 1: balance = get_balance(account);
Thread 1: balance -= amount;
Thread 2: acquire(lock);                          // Thread 2 waits on the lock
Thread 1: put_balance(account, balance);
Thread 1: release(lock);                          // Thread 1 completes
Thread 2: balance = get_balance(account);         // Thread 2 resumes
Thread 2: balance -= amount;
Thread 2: put_balance(account, balance);
Thread 2: release(lock);

Adapted from Matt Welsh’s (Harvard University) slides.

19 Spinlocks Very simple way to implement a lock:

struct lock {
    int held = 0;
}

void acquire(lock) {
    while (lock->held)
        ;               /* the caller busy-waits for the lock to be released */
    lock->held = 1;
}

void release(lock) {
    lock->held = 0;
}

Why doesn't this work? ● Where is the race condition? Adapted from Matt Welsh’s (Harvard University) slides.

20 Implementing Spinlocks The problem is that the internals of the lock acquire/release have critical sections too! ● The acquire() and release() actions must be atomic ● Atomic means that the code cannot be interrupted during execution ● “All or nothing” execution

struct lock {
    int held = 0;
}

void acquire(lock) {
    while (lock->held)
        ;
    /* What can happen if there is a context switch here? */
    lock->held = 1;
}

void release(lock) {
    lock->held = 0;
}

Adapted from Matt Welsh’s (Harvard University) slides.

21 Implementing Spinlocks The problem is that the internals of the lock acquire/release have critical sections too! ● The acquire() and release() actions must be atomic ● Atomic means that the code cannot be interrupted during execution ● “All or nothing” execution

struct lock {
    int held = 0;
}

void acquire(lock) {
    while (lock->held)  /* this test of held and the store below */
        ;               /* need to execute as one atomic sequence */
    lock->held = 1;
}

void release(lock) {
    lock->held = 0;
}

Adapted from Matt Welsh’s (Harvard University) slides.

22 Implementing Spinlocks Problem is that the internals of the lock acquire/release have critical sections too! ● The acquire( ) and release( ) actions must be atomic ● Atomic means that the code cannot be interrupted during execution ● “All or nothing” execution Doing this requires help from hardware! ● Disabling interrupts ● Why does this prevent a context switch from occurring? ● Atomic instructions – CPU guarantees entire action will execute atomically ● Test-and-set ● Compare-and-swap Adapted from Matt Welsh’s (Harvard University) slides.

23 Spinlocks using test-and-set CPU provides the following as one atomic instruction:

bool test_and_set(bool *flag) {
    bool old = *flag;
    *flag = True;
    return old;
}

So to fix our broken spinlocks, we do this:

struct lock {
    int held = 0;
}

void acquire(lock) {
    while (test_and_set(&lock->held))
        ;    /* spin until we are the thread that changed held from 0 to 1 */
}

void release(lock) {
    lock->held = 0;
}

Adapted from Matt Welsh’s (Harvard University) slides.

24 What's wrong with spinlocks? OK, so spinlocks work (if you implement them correctly), and they are simple. So what's the catch?

struct lock {
    int held = 0;
}

void acquire(lock) {
    while (test_and_set(&lock->held))
        ;
}

void release(lock) {
    lock->held = 0;
}

Adapted from Matt Welsh’s (Harvard University) slides.

25 Problems with spinlocks Horribly wasteful! ● Threads waiting to acquire locks spin on the CPU ● Eats up lots of cycles, slows down progress of other threads ● Note that other threads can still run... how? ● What happens if you have a lot of threads trying to acquire the lock? We only want spinlocks as primitives to build higher-level synchronization constructs. Adapted from Matt Welsh’s (Harvard University) slides.

26 Disabling Interrupts An alternative to spinlocks:

struct lock {
    // Note: no state!
}

void acquire(lock) {
    cli();  // disable interrupts
}

void release(lock) {
    sti();  // reenable interrupts
}

● Can two threads disable/reenable interrupts at the same time? What's wrong with this approach? Adapted from Matt Welsh’s (Harvard University) slides.

27 Disabling Interrupts An alternative to spinlocks:

struct lock {
    // Note: no state!
}

void acquire(lock) {
    cli();  // disable interrupts
}

void release(lock) {
    sti();  // reenable interrupts
}

● Can two threads disable/reenable interrupts at the same time? What's wrong with this approach? ● Can only be implemented at kernel level (why?) ● Inefficient on a multiprocessor system (why?) ● All locks in the system are mutually exclusive ● No separation between different locks for different bank accounts Adapted from Matt Welsh’s (Harvard University) slides.

28 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: Thread 1 checks the lock state; the lock state is unlocked and the wait queue is empty.) Adapted from Matt Welsh’s (Harvard University) slides.

29 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: Thread 1: 1) checks the lock state, 2) sets the state to locked, 3) enters the critical section; the wait queue is still empty.) Adapted from Matt Welsh’s (Harvard University) slides.

30 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: Thread 2 checks the lock state and finds it locked; the wait queue is still empty.) Adapted from Matt Welsh’s (Harvard University) slides.

31 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: Thread 2: 1) checks the lock state, finds it locked, 2) adds itself to the wait queue and sleeps.) Adapted from Matt Welsh’s (Harvard University) slides.

32 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: Thread 3: 1) checks the lock state, finds it locked, 2) adds itself to the wait queue and sleeps, behind Thread 2.) Adapted from Matt Welsh’s (Harvard University) slides.

33 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: 1) Thread 1 finishes the critical section; Threads 2 and 3 are still on the wait queue, lock state still locked.) Adapted from Matt Welsh’s (Harvard University) slides.

34 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: 1) Thread 1 finishes the critical section, 2) resets the lock state to unlocked, 3) wakes one thread, Thread 3, from the wait queue.) Adapted from Matt Welsh’s (Harvard University) slides.

35 Mutexes – Blocking Locks Really want a thread waiting to enter a critical section to block ● Put the thread to sleep until it can enter the critical section ● Frees up the CPU for other threads to run Straightforward to implement using our TCB queues! (Figure: Thread 3 can now grab the lock and enter the critical section; the lock state is locked again and Thread 2 still waits on the queue.) Adapted from Matt Welsh’s (Harvard University) slides.

36 Limitations of locks Locks are great, and simple. What can they not easily accomplish? What if you have a data structure where it's OK for many threads to read the data, but only one thread to write the data? ● Bank account example. ● Locks only let one thread access the data structure at a time. Adapted from Matt Welsh’s (Harvard University) slides.

37 Limitations of locks Locks are great, and simple. What can they not easily accomplish? What if you have a data structure where it's OK for many threads to read the data, but only one thread to write the data? ● Bank account example. ● Locks only let one thread access the data structure at a time. What if you want to protect access to two (or more) data structures at a time? ● e.g., Transferring money from one bank account to another. ● Simple approach: Use a separate lock for each. ● What happens if you have transfer from account A -> account B, at the same time as transfer from account B -> account A? ● Hmmmmm... tricky. ● We will get into this next time. Adapted from Matt Welsh’s (Harvard University) slides.

38 Next Lecture Higher-level synchronization primitives: how to do fancier things than plain locks. Semaphores, monitors, and condition variables ● Implemented using basic locks as a primitive Allow applications to perform more complicated coordination schemes Adapted from Matt Welsh’s (Harvard University) slides.