CS 153 Design of Operating Systems Winter 2016 Lecture 7: Synchronization

Administrivia
- Homework 1
  - Due today by the end of the day
- Hopefully you have started on project 1 by now?
  - Kernel-level threads (preemptible scheduling)
  - Be prepared with questions for this week's lab
  - Read up on scheduling and synchronization in the textbook and start early!
  - You need to read and understand a lot of code before you start coding
  - Students typically find this to be the hardest of the three projects

Aside: PintOS Threads
- We looked at the kernel code in "threads/thread.c"
- The code executes in kernel space (Ring 0)
- In project 1, the test cases run in Ring 0 as well; they create kernel-level threads directly from the kernel

Cooperation between Threads
- Threads cooperate in multithreaded programs
  - To share resources and access shared data structures
    - e.g., threads accessing a memory cache in a web server
  - To coordinate their execution
    - e.g., one thread executes relative to another

Threads: Sharing Data

    int num_connections = 0;

    web_server() {
        while (1) {
            int sock = accept();
            thread_fork(handle_request, sock);
        }
    }

    handle_request(int sock) {
        ++num_connections;
        /* ... process the request ... */
        close(sock);
    }
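The counter update in handle_request() hides a race: ++num_connections is not atomic. A sketch of what the increment actually expands to (the temporary variable is illustrative):

    /* ++num_connections is really three separate steps: */
    int tmp = num_connections;   /* load the shared counter */
    tmp = tmp + 1;               /* increment a private copy */
    num_connections = tmp;       /* store it back */
    /* A context switch between any two of these steps can lose
       an update when two handler threads run concurrently. */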

Threads: Cooperation
- Threads voluntarily give up the CPU with thread_yield

    /* Ping thread */
    while (1) {
        printf("ping\n");
        thread_yield();
    }

    /* Pong thread */
    while (1) {
        printf("pong\n");
        thread_yield();
    }
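For comparison, a runnable POSIX sketch of the same pattern (this is not the slides' code; sched_yield() is only a hint to the scheduler, so strict alternation is not guaranteed):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *ping(void *arg) {
        (void)arg;
        while (1) { printf("ping\n"); sched_yield(); }  /* yield after each line */
    }

    static void *pong(void *arg) {
        (void)arg;
        while (1) { printf("pong\n"); sched_yield(); }
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, ping, NULL);
        pthread_create(&t2, NULL, pong, NULL);
        pthread_join(t1, NULL);   /* never returns; both threads loop forever */
        return 0;
    }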

Synchronization
- For correctness, we need to control this cooperation
  - Threads interleave their executions arbitrarily and at different rates
  - Scheduling is not under program control
- We control cooperation using synchronization
  - Synchronization enables us to restrict the possible interleavings of thread executions
- Discussed in terms of threads, but this also applies to processes

Shared Resources
We initially focus on coordinating access to shared resources
- Basic problem
  - If two concurrent threads are accessing a shared variable, and that variable is read/modified/written by those threads, then access to the variable must be controlled to avoid erroneous behavior
- Over the next couple of lectures, we will look at
  - Mechanisms to control access to shared resources
    - Locks, mutexes, semaphores, monitors, condition variables, etc.
  - Patterns for coordinating access to shared resources
    - Bounded buffer, producer-consumer, etc.

Classic Example
- Suppose we have to implement a function to handle withdrawals from a bank account:

    withdraw(account, amount) {
        balance = get_balance(account);
        balance = balance - amount;
        put_balance(account, balance);
        return balance;
    }

- Now suppose that you and your father share a bank account with a balance of $1000
- Then you each go to a separate ATM and simultaneously withdraw $100 from the account

Example Continued
- We'll represent the situation by creating a separate thread for each person to do the withdrawals
- These threads run on the same bank machine:

    int balance[MAX_ACCOUNT_ID];   /* shared across threads */

    /* both threads run the same code: */
    withdraw(account, amount) {
        balance = get_balance(account);
        balance = balance - amount;
        put_balance(account, balance);
        return balance;
    }

- What's the problem with this implementation?
  - What resource is shared?
  - Think about potential schedules of these two threads

Interleaved Schedules
- The problem is that the execution of the two threads can be interleaved:

    Execution sequence as seen by the CPU:

    Thread 1: balance = get_balance(account);
    Thread 1: balance = balance - amount;
        --- context switch ---
    Thread 2: balance = get_balance(account);
    Thread 2: balance = balance - amount;
    Thread 2: put_balance(account, balance);
        --- context switch ---
    Thread 1: put_balance(account, balance);

- What is the balance of the account now?
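Working through this schedule with the initial balance of $1000 and two $100 withdrawals:

    Thread 1 reads 1000 and computes 900 into its local balance
    Thread 2 reads 1000, computes 900, and writes 900
    Thread 1 then writes its stale 900

The final balance is $900, not $800: one of the two withdrawals is silently lost.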

Shared Resources
- Problem: two threads accessed a shared resource
  - Known as a race condition (memorize this buzzword)
- We need mechanisms to control this access
  - So we can reason about how the program will operate
- Our example was updating a shared bank account
- Synchronization is also necessary for access to any shared data structure
  - Buffers, queues, lists, hash tables, etc.

When Are Resources Shared?
- Local variables?
  - Not shared: they refer to data on the stack
  - Each thread has its own stack
  - Never pass/share/store a pointer to a local variable on thread T1's stack to another thread T2
- Global variables and static objects?
  - Shared: they live in the static data segment, accessible by all threads
- Dynamic objects and other heap objects?
  - Shared: they are allocated from the heap with malloc/free or new/delete

[Figure: a process address space with Code, Static Data, and Heap segments plus a separate stack for each of threads T1, T2, and T3, each thread with its own PC]
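A minimal sketch of the stack-sharing bug warned against above (POSIX threads; the function names are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        int *p = (int *)arg;    /* points into the creator's stack frame */
        printf("%d\n", *p);     /* may print garbage if that frame is gone */
        return NULL;
    }

    static void spawn(void) {
        int local = 42;         /* lives on this thread's stack */
        pthread_t t;
        pthread_create(&t, NULL, worker, &local);  /* dangerous: &local escapes */
    }   /* once spawn() returns, worker's pointer dangles */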

How Interleaved Can It Get?
- How contorted can the interleavings be? Statements from the two threads can be shuffled almost arbitrarily:

    get_balance(account);
    put_balance(account, balance);
    balance = balance - amount;
    balance = get_balance(account);
    balance = ...

- We'll assume that the only atomic operations are reads and writes of individual memory locations
  - Some architectures don't even give you that!
- We'll assume that a context switch can occur at any time
- We'll assume that you can delay a thread for as long as you like, as long as it's not delayed forever

Mutual Exclusion
- We use mutual exclusion to synchronize access to shared resources
  - This allows us to have larger atomic blocks
  - What does atomic mean?
- Code that uses mutual exclusion is called a critical section
  - Only one thread at a time can execute in the critical section
  - All other threads are forced to wait on entry
  - When a thread leaves a critical section, another can enter
  - Example: sharing an ATM with others
- What requirements would you place on a critical section?

Critical Section Requirements
Critical sections have the following requirements:
1) Mutual exclusion (mutex)
   - If one thread is in the critical section, then no other is
2) Progress
   - A thread in the critical section will eventually leave it
   - If some thread T is not in the critical section, then T cannot prevent some other thread S from entering the critical section
3) Bounded waiting (no starvation)
   - If some thread T is waiting on the critical section, then T will eventually enter the critical section
4) Performance
   - The overhead of entering and exiting the critical section is small with respect to the work being done within it

About Requirements
There are three kinds of requirements that we'll use:
- Safety property: nothing bad happens
  - Mutual exclusion
- Liveness property: something good happens
  - Progress, bounded waiting
- Performance requirement
  - Performance
- The properties must hold for every run, while performance depends on all the runs
  - Rule of thumb: when designing a concurrent algorithm, worry about safety first (but don't forget liveness!)

Mechanisms for Building Critical Sections
- Atomic read/write
  - Can it be done?
- Locks
  - Primitive, minimal semantics; used to build the others
- Semaphores
  - Basic, easy to get the hang of, but hard to program with
- Monitors
  - High-level, require language support; operations are implicit

Mutual Exclusion with Atomic Read/Writes: First Try

    int turn = 1;

    /* Blue thread */
    while (true) {
        while (turn != 1) ;   /* spin until it is our turn */
        critical section
        turn = 2;
        outside of critical section
    }

    /* Yellow thread */
    while (true) {
        while (turn != 2) ;
        critical section
        turn = 1;
        outside of critical section
    }

- This is called alternation
- It satisfies mutex: if blue is in the critical section, then turn == 1, and if yellow is in the critical section, then turn == 2; note (turn == 1) ≡ (turn != 2)
- Is there anything wrong with this solution?
- It violates progress: the blue thread could go into an infinite loop outside of the critical section, which would prevent the yellow one from entering
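A compilable version of the blue thread's loop, assuming C11 <stdatomic.h> (a sketch, not the slide's exact code; atomics are used because plain int reads and writes are not guaranteed to be ordered on real hardware):

    #include <stdatomic.h>

    atomic_int turn = 1;   /* shared by both threads */

    void blue(void) {
        while (1) {
            while (atomic_load(&turn) != 1)
                ;                     /* spin until it is our turn */
            /* critical section */
            atomic_store(&turn, 2);   /* hand the turn to yellow */
            /* outside of critical section */
        }
    }
    /* yellow() is symmetric, with 1 and 2 swapped */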

Mutex with Atomic R/W: Peterson's Algorithm

    int turn = 1;
    bool try1 = false, try2 = false;

    /* Blue thread */
    while (true) {
        try1 = true;
        turn = 2;
        while (try2 && turn != 1) ;
        critical section
        try1 = false;
        outside of critical section
    }

    /* Yellow thread */
    while (true) {
        try2 = true;
        turn = 1;
        while (try1 && turn != 2) ;
        critical section
        try2 = false;
        outside of critical section
    }

- This satisfies all the requirements
- Here's why...

Mutex with Atomic R/W: Peterson's Algorithm (annotated)

    int turn = 1;
    bool try1 = false, try2 = false;

    /* Blue thread */
    while (true) {
        { ¬try1 ∧ (turn == 1 ∨ turn == 2) }
    1:  try1 = true;
        { try1 ∧ (turn == 1 ∨ turn == 2) }
    2:  turn = 2;
        { try1 ∧ (turn == 1 ∨ turn == 2) }
    3:  while (try2 && turn != 1) ;
        { try1 ∧ (turn == 1 ∨ ¬try2 ∨ (try2 ∧ (yellow at 6 or at 7))) }
        critical section
    4:  try1 = false;
        { ¬try1 ∧ (turn == 1 ∨ turn == 2) }
        outside of critical section
    }

    /* Yellow thread */
    while (true) {
        { ¬try2 ∧ (turn == 1 ∨ turn == 2) }
    5:  try2 = true;
        { try2 ∧ (turn == 1 ∨ turn == 2) }
    6:  turn = 1;
        { try2 ∧ (turn == 1 ∨ turn == 2) }
    7:  while (try1 && turn != 2) ;
        { try2 ∧ (turn == 2 ∨ ¬try1 ∨ (try1 ∧ (blue at 2 or at 3))) }
        critical section
    8:  try2 = false;
        { ¬try2 ∧ (turn == 1 ∨ turn == 2) }
        outside of critical section
    }

If both threads were in their critical sections at once, we would have

    (blue at 4) ∧ try1 ∧ (turn == 1 ∨ ¬try2 ∨ (try2 ∧ (yellow at 6 or at 7)))
    ∧ (yellow at 8) ∧ try2 ∧ (turn == 2 ∨ ¬try1 ∨ (try1 ∧ (blue at 2 or at 3)))
    ... ⇒ (turn == 1 ∧ turn == 2)

which is a contradiction, so mutual exclusion holds.

Peterson's Algorithm
- Hard to reason about
- Simpler ways to implement mutual exclusion exist (see later)
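For reference, a compilable two-thread sketch of Peterson's algorithm using C11 <stdatomic.h> (an assumption of this sketch: sequentially consistent atomics, since with plain variables a modern CPU may reorder the stores to try1 and turn and break the proof above):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool try1 = false, try2 = false;
    static atomic_int turn = 1;

    void blue_lock(void) {
        atomic_store(&try1, true);   /* announce intent to enter */
        atomic_store(&turn, 2);      /* defer to yellow */
        while (atomic_load(&try2) && atomic_load(&turn) != 1)
            ;                        /* spin */
    }

    void blue_unlock(void) {
        atomic_store(&try1, false);
    }

    /* yellow_lock()/yellow_unlock() are symmetric with the roles swapped */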

Locks
- A lock is an object in memory providing two operations
  - acquire(): called before entering the critical section
  - release(): called after leaving the critical section
- Threads pair calls to acquire() and release()
  - Between acquire() and release(), the thread holds the lock
  - acquire() does not return until any previous holder releases
  - What can happen if the calls are not paired?
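The POSIX analogue of this interface is pthread_mutex_t; a minimal sketch of the acquire/release pairing:

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void do_work(void) {
        pthread_mutex_lock(&lock);     /* acquire(): blocks until any holder releases */
        /* ... critical section ... */
        pthread_mutex_unlock(&lock);   /* release(): lets one waiting thread in */
    }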

Using Locks

    withdraw(account, amount) {
        acquire(lock);                   /* critical section begins */
        balance = get_balance(account);
        balance = balance - amount;
        put_balance(account, balance);
        release(lock);                   /* critical section ends */
        return balance;
    }

    One possible schedule of two threads:

    Thread 1: acquire(lock);
    Thread 1: balance = get_balance(account);
    Thread 1: balance = balance - amount;
    Thread 2: acquire(lock);             /* blocks: Thread 1 holds the lock */
    Thread 1: put_balance(account, balance);
    Thread 1: release(lock);             /* Thread 2 acquires and continues */
    Thread 2: balance = get_balance(account);
    Thread 2: balance = balance - amount;
    Thread 2: put_balance(account, balance);
    Thread 2: release(lock);

- Why is the "return" outside the critical section? Is this ok?
- What happens when a third thread calls acquire?

Next time…
- Semaphores and monitors
  - Read Chapter 5.2 - 5.7

Semaphores
- Semaphores are data structures that also provide mutual exclusion to critical sections
  - Waiters block; interrupts stay enabled within the critical section
  - Described by Dijkstra in the THE system in 1968
- Semaphores count the number of threads using a critical section (more later)
- Semaphores support two operations:
  - wait(semaphore): decrement; block until the semaphore is open
    - Also called P(), after the Dutch word for test
  - signal(semaphore): increment; allow another thread to enter
    - Also called V(), after the Dutch word for increment
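The POSIX counterpart, as a minimal sketch (sem_wait/sem_post correspond to wait/signal):

    #include <semaphore.h>

    static sem_t s;

    void setup(void) {
        sem_init(&s, 0, 1);   /* initial count 1: the semaphore starts open */
    }

    void enter(void) { sem_wait(&s); }   /* P(): decrement; block while count is 0 */
    void leave(void) { sem_post(&s); }   /* V(): increment; wake one waiter */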

Blocking in Semaphores
- Associated with each semaphore is a queue of waiting processes
- When wait() is called by a thread:
  - If the semaphore is open, the thread continues
  - If the semaphore is closed, the thread blocks on the queue
- signal() opens the semaphore:
  - If a thread is waiting on the queue, that thread is unblocked
  - If no threads are waiting on the queue, the signal is remembered for the next thread
    - In other words, signal() has "history"
    - This history uses a counter
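A sketch of the data structure this describes: a counter plus a queue of blocked threads. The helpers (queue_t, enqueue, dequeue, current_thread, block, wakeup) are illustrative stand-ins for scheduler calls, and both operations must themselves execute atomically (e.g., with interrupts disabled):

    typedef struct {
        int count;        /* >= 0: open (stored signals); < 0: number of waiters */
        queue_t waiters;  /* threads blocked on this semaphore */
    } semaphore_t;

    void wait(semaphore_t *s) {
        s->count--;
        if (s->count < 0) {                        /* semaphore was closed */
            enqueue(&s->waiters, current_thread());
            block();                               /* sleep until signalled */
        }
    }

    void signal(semaphore_t *s) {
        s->count++;
        if (s->count <= 0)                         /* someone was waiting */
            wakeup(dequeue(&s->waiters));
    }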

Using Semaphores
- Use is similar to locks, but the semantics are different

    struct account {
        double balance;
        semaphore S;
    };

    withdraw(account, amount) {
        wait(account->S);
        tmp = account->balance;
        tmp = tmp - amount;
        account->balance = tmp;
        signal(account->S);
        return tmp;
    }

    One possible schedule of three threads:

    Thread 1: wait(account->S);     /* enters */
    Thread 1: tmp = account->balance;
    Thread 1: tmp = tmp - amount;
    Thread 2: wait(account->S);     /* blocks */
    Thread 1: account->balance = tmp;
    Thread 1: signal(account->S);   /* Thread 2 unblocks */
    Thread 3: wait(account->S);     /* blocks */
    Thread 2: ...
    Thread 2: signal(account->S);   /* Thread 3 unblocks */
    Thread 3: ...
    Thread 3: signal(account->S);

- Threads block while waiting
- It is undefined which thread runs after a signal