Critical Sections with lots of Threads

Announcements
– CS 4411 project due yesterday, Wednesday, Sept 17th
– CS 4410 Homework 2 available, due Tuesday, Sept 23rd

Review: Race conditions
Definition: a timing-dependent error involving shared state
– Whether it happens depends on how the threads are scheduled
Hard to detect:
– All possible schedules have to be safe, and the number of possible schedule permutations is huge
– Are some schedules bad? Are some correct only sometimes?
– Races are intermittent: they are timing dependent, so small changes can hide the bug

The Fundamental Issue: Atomicity
Our "atomic" operation is not executed atomically by the machine
– Atomic unit: an instruction sequence guaranteed to execute indivisibly
– Also called a "critical section" (CS)
When two processes want to execute their critical sections:
– One process must finish its CS before the other is allowed to enter

Revisiting Race Conditions
Process A:  while (i < 10) i = i + 1;  print "A won!";
Process B:  while (i > -10) i = i - 1;  print "B won!";
– Who wins?
– Will someone definitely win?
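To make the race concrete, here is a minimal C sketch (not from the slides) that runs the two loops as POSIX threads over a shared, deliberately unsynchronized counter; which message prints first, and how long the threads fight over i, depends entirely on the interleaving. The names proc_a, proc_b and the use of pthreads are illustrative assumptions.

#include <pthread.h>
#include <stdio.h>

static volatile int i = 0;            /* shared state, deliberately unsynchronized */

static void *proc_a(void *arg) {
    (void)arg;
    while (i < 10) i = i + 1;         /* racy read-modify-write */
    printf("A won!\n");
    return NULL;
}

static void *proc_b(void *arg) {
    (void)arg;
    while (i > -10) i = i - 1;        /* racy read-modify-write */
    printf("B won!\n");
    return NULL;
}

int main(void) {                      /* compile with -pthread */
    pthread_t a, b;
    pthread_create(&a, NULL, proc_a, NULL);
    pthread_create(&b, NULL, proc_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}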

The Critical Section Problem
Problem: design a protocol for processes to cooperate, such that only one process is in its critical section at a time
– How do we make multiple instructions appear to execute as one?
Assumption: processes progress with non-zero speed, but we make no assumption about relative clock speeds
Used extensively inside operating systems: queues, shared variables, interrupt handlers, etc.
[Figure: timeline showing Process 1 executing CS1 and Process 2 executing CS2 at non-overlapping times]

Solution Structure
Shared variables (with initialization), then each process runs:
    ...
    Entry Section
    Critical Section
    Exit Section
    ...
The entry and exit sections are what we add to solve the CS problem.

Solution Requirements
Mutual Exclusion
– Only one process can be in the critical section at any time
Progress
– The decision on who enters the CS next cannot be postponed indefinitely
– No deadlock
Bounded Waiting
– There is a bound on the number of times others can enter the CS while I am waiting
– No livelock
Also desirable: efficient (no extra resources), fair, simple, …

Refresher: Dekker's Algorithm
Assumes two threads, numbered 0 and 1.

CSEnter(int i) {
    int J = i ^ 1;
    inside[i] = true;
    while (inside[J]) {
        if (turn == J) {
            inside[i] = false;
            while (turn == J)
                continue;
            inside[i] = true;
        }
    }
}

CSExit(int i) {
    int J = i ^ 1;
    turn = J;
    inside[i] = false;
}

Peterson's Algorithm (1981)

CSEnter(int i) {
    int J = i ^ 1;
    inside[i] = true;
    turn = J;
    while (inside[J] && turn == J)
        continue;
}

CSExit(int i) {
    inside[i] = false;
}

Simple is good!!
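A minimal runnable sketch of the same idea, assuming C11 atomics and POSIX threads so the flag and turn accesses are sequentially consistent (plain variables would let the compiler and CPU reorder them); the worker loop and the shared counter are illustrative additions, not part of the slide.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool inside[2];        /* "I want in" flags */
static atomic_int  turn;             /* whose turn to defer to */
static int counter;                  /* protected by the critical section */

static void cs_enter(int i) {
    int j = i ^ 1;
    atomic_store(&inside[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&inside[j]) && atomic_load(&turn) == j)
        ;                            /* busy-wait */
}

static void cs_exit(int i) {
    atomic_store(&inside[i], false);
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        cs_enter(id);
        counter++;                   /* critical section */
        cs_exit(id);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("counter = %d (expect 200000)\n", counter);
    return 0;
}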

Napkin analysis of Peterson's algorithm
Safety (by contradiction):
– Assume that both processes (Alan and Shay) are in their critical sections, and thus both have their inside flags set. Since only one of them, say Alan, can hold the turn, the other (Shay) must have passed the while() test before Alan set his inside flag.
– However, after setting his inside flag, Alan gave the turn away to Shay. Shay had already set the turn and cannot change it again, contradicting our assumption.
Liveness and bounded waiting follow from the turn variable.

Can we generalize to many threads?
The obvious approach won't work. Issue: whose turn is next?

CSEnter(int i) {
    inside[i] = true;
    for (J = 0; J < N; J++)
        while (inside[J] && turn == J)
            continue;
}

CSExit(int i) {
    inside[i] = false;
}

Bakery "concept"
Described by Leslie Lamport. Think of a popular store with a crowded counter, perhaps the pastry shop in Montreal's fancy market:
– People take a ticket from a machine
– If nobody is waiting, the tickets don't matter
– When several people are waiting, the ticket order determines the order in which they can make purchases

Bakery Algorithm: "Take 1"

int ticket[N];
int next_ticket;

CSEnter(int i) {
    ticket[i] = ++next_ticket;
    for (J = 0; J < N; J++)
        while (ticket[J] && ticket[J] < ticket[i])
            continue;
}

CSExit(int i) {
    ticket[i] = 0;
}

Oops… access to next_ticket is itself a shared-variable problem!

Bakery Algorithm: "Take 2"
Clever idea: just add 1 to the max.

int ticket[N];

CSEnter(int i) {
    ticket[i] = max(ticket[0], …, ticket[N-1]) + 1;
    for (J = 0; J < N; J++)
        while (ticket[J] && ticket[J] < ticket[i])
            continue;
}

CSExit(int i) {
    ticket[i] = 0;
}

Oops… two threads could pick the same ticket value!

Bakery Algorithm: "Take 3"
If i and J pick the same ticket value, their ids break the tie:
    (ticket[J] < ticket[i]) || (ticket[J] == ticket[i] && J < i)
Notation: write (B, J) < (A, i) for (B < A || (B == A && J < i)) to simplify the code, e.g.:
    (ticket[J], J) < (ticket[i], i)

Bakery Algorithm: "Take 4"

int ticket[N];
boolean picking[N] = false;

CSEnter(int i) {
    ticket[i] = max(ticket[0], …, ticket[N-1]) + 1;
    for (J = 0; J < N; J++)
        while (ticket[J] && (ticket[J], J) < (ticket[i], i))
            continue;
}

CSExit(int i) {
    ticket[i] = 0;
}

Oops… i could look at J while J is still storing its ticket, and yet J could end up with a lower (ticket, id) pair than i!

Bakery Algorithm: Almost final

int ticket[N];
boolean choosing[N] = false;

CSEnter(int i) {
    choosing[i] = true;
    ticket[i] = max(ticket[0], …, ticket[N-1]) + 1;
    choosing[i] = false;
    for (J = 0; J < N; J++) {
        while (choosing[J])
            continue;
        while (ticket[J] && (ticket[J], J) < (ticket[i], i))
            continue;
    }
}

CSExit(int i) {
    ticket[i] = 0;
}

Bakery Algorithm: Issues?
What if we don't know how many threads might be running?
– The algorithm depends on an agreed-upon value for N
– We would somehow need a way to adjust N when a thread is created or goes away
Also, technically speaking, the ticket values can overflow!
– Solution: change the code so that if a ticket is "too big", it is set back to zero and the thread tries again.

Bakery Algorithm: Final

int ticket[N];                /* Important: disable thread scheduling when changing N */
boolean choosing[N] = false;

CSEnter(int i) {
    do {
        ticket[i] = 0;
        choosing[i] = true;
        ticket[i] = max(ticket[0], …, ticket[N-1]) + 1;
        choosing[i] = false;
    } while (ticket[i] >= MAXIMUM);
    for (J = 0; J < N; J++) {
        while (choosing[J])
            continue;
        while (ticket[J] && (ticket[J], J) < (ticket[i], i))
            continue;
    }
}

CSExit(int i) {
    ticket[i] = 0;
}
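For reference, here is a hedged, runnable C sketch of the algorithm above, assuming C11 atomics and POSIX threads; N, the iteration counts, the worker harness, and the shared counter are made up for illustration, and the ticket-overflow handling from the final slide is omitted for brevity.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define N 4                                   /* illustrative thread count */

static atomic_bool choosing[N];
static atomic_int  ticket[N];
static int counter;                           /* protected by the lock */

static int max_ticket(void) {
    int m = 0;
    for (int j = 0; j < N; j++) {
        int t = atomic_load(&ticket[j]);
        if (t > m) m = t;
    }
    return m;
}

static void cs_enter(int i) {
    atomic_store(&choosing[i], true);
    atomic_store(&ticket[i], max_ticket() + 1);
    atomic_store(&choosing[i], false);
    for (int j = 0; j < N; j++) {
        while (atomic_load(&choosing[j]))
            ;                                 /* wait while j picks its ticket */
        while (atomic_load(&ticket[j]) != 0 &&
               (atomic_load(&ticket[j]) < atomic_load(&ticket[i]) ||
                (atomic_load(&ticket[j]) == atomic_load(&ticket[i]) && j < i)))
            ;                                 /* (ticket[j], j) < (ticket[i], i) */
    }
}

static void cs_exit(int i) {
    atomic_store(&ticket[i], 0);
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 10000; k++) {
        cs_enter(id);
        counter++;                            /* critical section */
        cs_exit(id);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, worker, &id[i]);
    }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    printf("counter = %d (expect %d)\n", counter, N * 10000);
    return 0;
}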

How do real systems do it?
Some real systems actually use algorithms such as the bakery algorithm
– A good choice where busy-waiting isn't going to be terribly inefficient
– For example, if there are enough CPUs that each thread has a CPU of its own
Some systems disable interrupts briefly when calling CSEnter() and CSExit()
Some use hardware "help": atomic instructions

Critical Sections with Atomic Hardware Primitives

Shared:      int lock;
Initialize:  lock = false;

Process i:
    while (test_and_set(&lock))
        ;
    // Critical Section
    lock = false;

Assumes that test_and_set compiles to a special hardware instruction that sets the lock and returns the OLD value (true: locked; false: unlocked).
Problem: does not satisfy bounded waiting (see the book for a correct solution, Figure 6.8).

test_and_set Instruction
Definition:

boolean test_and_set(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution using TestAndSet
Shared boolean variable lock, initialized to FALSE.
Solution:

while (true) {
    while (TestAndSet(&lock))
        ;                    /* do nothing */
    // critical section
    lock = FALSE;
    // remainder section
}
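On a real system you would not write test_and_set yourself; a minimal sketch of the same spinlock pattern, assuming C11's atomic_flag_test_and_set as the hardware-backed stand-in for the slide's test_and_set (the thread count and the shared counter are illustrative):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static int counter;

static void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        while (atomic_flag_test_and_set(&lock))
            ;                        /* spin until the old value was false */
        counter++;                   /* critical section */
        atomic_flag_clear(&lock);    /* lock = FALSE */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %d (expect 400000)\n", counter);
    return 0;
}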

Swap Instruction
Definition:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution using Swap
Shared boolean variable lock, initialized to FALSE; each process has a local boolean variable key.
Solution:

while (true) {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
}
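The same pattern can be written with a real exchange primitive; a short sketch assuming C11's atomic_exchange in place of the slide's Swap() (again with an illustrative shared counter):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool lock_var;     /* false = unlocked */
static int counter;

static void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        bool key = true;
        while (key)
            key = atomic_exchange(&lock_var, true);  /* Swap(&lock, &key) */
        counter++;                                   /* critical section */
        atomic_store(&lock_var, false);              /* lock = FALSE */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %d (expect 400000)\n", counter);
    return 0;
}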

Presenting critical sections to users
– CSEnter and CSExit are possibilities
– But more commonly, operating systems offer a kind of locking primitive
– We call these semaphores

Semaphores
A non-negative integer with atomic increment and decrement: an integer S that (besides initialization) can only be modified by:
– P(S) or S.wait(): decrement, or block if S is already 0
– V(S) or S.signal(): increment, and wake up a waiting process if there is one
These operations are atomic (indivisible). Conceptually:

semaphore S;

P(S) {
    while (S <= 0)
        ;
    S--;
}

V(S) {
    S++;
}

Some systems use the name wait() instead of P(), and signal() instead of V().

Semaphore Types
Counting semaphores:
– Can take any integer value
– Used for synchronization
Binary semaphores:
– Value is limited to 0 or 1
– Used for mutual exclusion (mutex)

Mutex pattern:
Shared:  semaphore S;
Init:    S = 1;
Process i:
    P(S);
    // Critical Section
    V(S);
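As a concrete analogue, here is a minimal sketch using POSIX unnamed semaphores, where sem_wait plays the role of P and sem_post the role of V; the binary semaphore guards an illustrative shared counter exactly as in the mutex pattern above (thread count and loop bound are made up).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t S;                      /* binary semaphore used as a mutex */
static int counter;

static void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        sem_wait(&S);                /* P(S) */
        counter++;                   /* critical section */
        sem_post(&S);                /* V(S) */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&S, 0, 1);              /* Init: S = 1 */
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %d (expect 400000)\n", counter);
    sem_destroy(&S);
    return 0;
}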

Semaphore Implementation
Must guarantee that no two processes can execute P() and V() on the same semaphore at the same time
– No process may be interrupted in the middle of these operations
Thus the implementation itself becomes a critical-section problem, with the P and V code placed in the critical section
– We could now have busy waiting in the critical-section implementation
– But the implementation code is short, so there is little busy waiting if the critical section is rarely occupied
Note that applications may spend lots of time in their own critical sections, so busy waiting is not a good solution for them.

Semaphore Implementation with no Busy Waiting
With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items:
– a value (of type integer)
– a pointer to the next record in the list
Two operations:
– block: place the process invoking the operation on the appropriate waiting queue
– wakeup: remove one of the processes from the waiting queue and place it on the ready queue

Implementing Semaphores
Busy waiting (spinlocks):
– Consumes CPU resources
– No context-switch overhead
Alternative: blocking
Should we spin or block?
– If the wait will be short, spin; if it will be long, block
– A theory result: spin for as long as the cost of blocking, and if the lock is still not available, then block. This has been shown to be within a factor of 2 of optimal!

typedef struct semaphore {
    int value;
    ProcessList L;
} Semaphore;

void P(Semaphore *S) {
    S->value = S->value - 1;
    if (S->value < 0) {
        add this process to S->L;
        block();
    }
}

void V(Semaphore *S) {
    S->value = S->value + 1;
    if (S->value <= 0) {
        remove a process P from S->L;
        wakeup(P);
    }
}
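One way to realize the blocking variant on a real system is sketched below, assuming POSIX threads: a mutex makes P() and V() atomic and a condition variable stands in for the waiting list S.L. Unlike the slide's pseudocode, the value is simply kept non-negative instead of going below zero; the type and function names are illustrative.

#include <pthread.h>

typedef struct {
    int value;                       /* number of available "permits"    */
    pthread_mutex_t m;               /* makes P() and V() atomic         */
    pthread_cond_t  cv;              /* stands in for the wait queue S.L */
} Semaphore;

void semaphore_init(Semaphore *S, int value) {
    S->value = value;
    pthread_mutex_init(&S->m, NULL);
    pthread_cond_init(&S->cv, NULL);
}

void P(Semaphore *S) {
    pthread_mutex_lock(&S->m);
    while (S->value == 0)
        pthread_cond_wait(&S->cv, &S->m);   /* block(): sleep until some V() */
    S->value--;
    pthread_mutex_unlock(&S->m);
}

void V(Semaphore *S) {
    pthread_mutex_lock(&S->m);
    S->value++;
    pthread_cond_signal(&S->cv);            /* wakeup(): move one waiter to ready */
    pthread_mutex_unlock(&S->m);
}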

Implementing Semaphores
Per-semaphore list of waiting processes
– Implemented using the PCB link field
– Queuing strategy: FIFO works fine
– Will LIFO work?

Common programming errors

Process i:  P(S); CS; P(S)
Process j:  V(S); CS; V(S)
Process k:  P(S); CS

Process i has a typo: it will get stuck (forever) the second time it does the P() operation. Moreover, every other process will freeze up too when it tries to enter the critical section!

Process j has a typo: it won't respect mutual exclusion even if the other processes follow the rules correctly. Worse still, once we have done two "extra" V() operations this way, other processes might get into the CS inappropriately!

Process k forgets the V(): whoever next calls P() will freeze up. The bug can be confusing because that other process could be perfectly correct code, yet it is the one you will see hung when you use the debugger to look at its state!

More common mistakes
Conditional code that can break the normal top-to-bottom flow of code through the critical section
– Often the result of someone trying to maintain a program, e.g. to fix a bug or add functionality in code written by someone else

P(S)
if (something or other)
    return;
CS
V(S)

What if the buffer is full?

Shared:  Semaphores mutex, empty, full;
Init:    mutex = 1;   /* for mutual exclusion      */
         empty = N;   /* number of empty buf slots */
         full  = 0;   /* number of full buf slots  */

Producer:
do {
    ...
    // produce an item in nextp
    ...
    P(mutex);
    P(empty);
    ...
    // add nextp to buffer
    ...
    V(mutex);
    V(full);
} while (true);

Consumer:
do {
    P(full);
    P(mutex);
    ...
    // remove an item to nextc
    ...
    V(mutex);
    V(empty);
    ...
    // consume the item in nextc
    ...
} while (true);

What's wrong? Oops! Even when you use the correct operations, the order in which you do the semaphore operations can have an incredible impact on correctness: if the buffer is full, the producer sleeps inside P(empty) while holding mutex, so the consumer can never acquire mutex to free a slot, and the system deadlocks.
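For reference, a minimal runnable sketch of the corrected ordering with POSIX semaphores and threads: P(empty) before P(mutex) in the producer, and P(full) before P(mutex) in the consumer. The buffer size, item type, and loop counts are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                       /* buffer slots */

static int buffer[N];
static int in, out;
static sem_t mutex, empty, full;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);         /* P(empty): wait for a free slot */
        sem_wait(&mutex);         /* P(mutex): then lock the buffer */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);         /* V(mutex) */
        sem_post(&full);          /* V(full)  */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int k = 0; k < 100; k++) {
        sem_wait(&full);          /* P(full)  */
        sem_wait(&mutex);         /* P(mutex) */
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);         /* V(mutex) */
        sem_post(&empty);         /* V(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full,  0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}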

In Summary…
The fundamental issue:
– The programmer's "atomic" operation is not done atomically by the machine
– Atomic unit: an instruction sequence guaranteed to execute indivisibly, also called a "critical section" (CS)
Critical section implementation:
– Software: Dekker's, Peterson's, and the bakery algorithm
– Hardware: test_and_set, swap (hard for programmers to use directly)
– Operating system: semaphores
Implementing semaphores:
– Using the multithreaded synchronization algorithms shown earlier
– Or have a thread disable interrupts, put itself on a "wait queue", then context switch to some other thread (an "idle thread" if needed)
– The OS designer makes these decisions; the end user shouldn't need to know