
Shared Memory Coordination
We will be looking at process coordination using shared memory and busy waiting.
– So we don't send messages; we read and write shared variables.
– When we need to wait, we loop and don't context switch.
– This can be wasteful of resources if we must wait a long time.

Shared Memory Coordination
– Context switching primitives normally use busy waiting in their implementation.

Mutual Exclusion
– Consider adding one to a shared variable V.
– When compiled on many machines this becomes three instructions:

    load  r1 ← V
    add   r1 ← r1 + 1
    store r1 → V

Mutual Exclusion
– Assume V is initially 10 and one process begins the 3-instruction sequence. After the first instruction there is a context switch to another process.
  – Registers are of course saved.
– The new process does all three instructions. Then there is a context switch back.
  – Registers are of course restored.
– The first process finishes. V has been incremented twice but has only reached 11.

Mutual Exclusion
– The problem is that the 3-instruction sequence must be atomic, i.e., it cannot be interleaved with another execution of these instructions.
– That is, one execution excludes the possibility of another. So they must exclude each other, i.e., we must have mutual exclusion.
– This was a race condition. These are hard bugs to find since they are non-deterministic.
– Races can in general involve more than two processes.

Mutual Exclusion
– The portion of code that requires mutual exclusion is often called a critical section.
– One approach is to prevent context switching.
  – We can do this for the kernel of a uniprocessor: mask interrupts.
  – Not feasible for user-mode processes.
  – Not feasible for multiprocessors.

Mutual Exclusion
– The Critical Section Problem is to implement:

    loop
       trying-part
       critical-section
       releasing-part
       non-critical-section
    end loop

– So that when many processes execute this, you never have more than one in the critical section.
– That is, you must write the trying-part and the releasing-part.

Mutual Exclusion
– Trivial solution: let the releasing-part be simply "halt".
– This shows we need to specify the problem better.
– Additional requirement: assume that if a process begins execution of its critical section and no other process enters the critical section, then the first process will eventually exit the critical section.

Mutual Exclusion
– Then the requirement is: "If a process is executing its trying part, then some process will eventually enter the critical section."
– Software-only solutions to the CS problem:
  – We assume the existence of atomic loads and stores.
    – Only up to word length (i.e., not a whole page).
  – We start with the case of two processes.
  – Easy if we want the tasks to alternate in the CS and we know which one goes first.

Mutual Exclusion

    shared int turn = 1

    // Process 1               // Process 2
    loop                       loop
       while (turn == 2)          while (turn == 1)
       CS                         CS
       turn = 2                   turn = 1
       NCS                        NCS

Mutual Exclusion
– But always alternating does not satisfy the additional requirement above.
– Let the NCS for process 1 be an infinite loop (or a halt). We will reach a point where process 2 is in its trying part but turn = 1, and turn will not change. So some process has entered its trying part, but neither process will enter the CS.

Mutual Exclusion
The first solution that worked was discovered by a mathematician named Dekker.
– Now we will use turn only to resolve disputes.

Dekker’s Algorithm

    /* Variables are global and shared. Turn is initially 1. */
    for (;;) {     // process 1 - an infinite loop to show it
                   // enters the CS more than once
        p1wants = 1;
        while (p2wants == 1) {
            if (turn == 2) {
                p1wants = 0;
                while (turn == 2) { /* empty loop */ }
                p1wants = 1;
            }
        }
        critical_section();
        turn = 2;
        p1wants = 0;
        noncritical_section();
    }

Dekker’s Algorithm

    /* Variables are global and shared. Turn is initially 1. */
    for (;;) {     // process 2 - an infinite loop to show it
                   // enters the CS more than once
        p2wants = 1;
        while (p1wants == 1) {
            if (turn == 1) {
                p2wants = 0;
                while (turn == 1) { /* empty loop */ }
                p2wants = 1;
            }
        }
        critical_section();
        turn = 1;
        p2wants = 0;
        noncritical_section();
    }

Mutual Exclusion
– The winner-to-be just loops waiting for the loser to give up and then goes into the CS.
– The loser-to-be:
  – Gives up.
  – Waits to see that the winner has finished.
  – Starts over (knowing it will win).
– Dijkstra extended Dekker's solution for > 2 processes.
– Others improved the fairness of Dijkstra's algorithm.

Mutual Exclusion
– These complicated methods remained the simplest known until 1981, when Peterson found a much simpler method.
– Keep Dekker's idea of using turn only to resolve disputes, but drop the complicated then-body of the if.

Peterson’s Algorithm

    /* Variables are global and shared. */
    for (;;) {     // process 1 - an infinite loop to show it
                   // enters the CS more than once
        p1wants = 1;
        turn = 2;
        while (p2wants && turn == 2) { /* empty loop */ }
        critical_section();
        p1wants = 0;
        noncritical_section();
    }

Peterson’s Algorithm

    /* Variables are global and shared. */
    for (;;) {     // process 2 - an infinite loop to show it
                   // enters the CS more than once
        p2wants = 1;
        turn = 1;
        while (p1wants && turn == 1) { /* empty loop */ }
        critical_section();
        p2wants = 0;
        noncritical_section();
    }

Semaphores
– Trying and releasing are often called entry and exit, or wait and signal, or down and up, or P and V (the latter are from Dutch words, since Dijkstra is Dutch).
– Let's try to formalize the entry and exit parts.
– To get mutual exclusion we need to ensure that no more than one task can pass through P until a V has occurred. The idea is to keep trying to walk through the gate, and when you succeed, atomically close the gate behind you so that no one else can enter.

Semaphores
– Definition (not an implementation): let S be an enumerated type with values closed and open (like a gate).

    P(S) is
       while S = closed
       S ← closed

– The failed test and the assignment are a single atomic action.

Semaphores

    P(S) is
    label:
       {[                -- begin atomic part
       if S = open
          S ← closed
       else
       ]}                -- end atomic part
          goto label

    V(S) is
       S ← open

Semaphores
– Note that this P and V (not yet implemented) can be used to solve the critical section problem very easily.
  – The entry part is P(S).
  – The exit part is V(S).
– Note that Dekker and Peterson do not give us a P and V, since each process has a unique entry and a unique exit.
– S is called a (binary) semaphore.

Semaphores
– To implement binary semaphores we need some help from our hardware friends.

    Boolean in out X
    TestAndSet(X) is
       oldx ← X
       X ← true
       return oldx

– Note that the name is a good one: this function tests the value of X and sets it (i.e., sets it true; reset is to set it false).

Semaphores
– Now P/V for binary semaphores is trivial. S is a Boolean variable (false is open, true is closed).

    P(S) is
       while (TestAndSet(S))

    V(S) is
       S ← false

– This works fine no matter how many processes are involved.

Counting Semaphores
– Now we want to permit a bounded number of processes into what might be called a semi-critical section.

    loop
       P(S)
       SCS      -- at most k processes can be here simultaneously
       V(S)
       NCS

– A semaphore S with this property is called a counting semaphore.

Counting Semaphores
– If k = 1 we get a binary semaphore, so the counting semaphore generalizes the binary semaphore.
– How can we implement a counting semaphore given binary semaphores?
  – S is a nonnegative integer.
  – Initialize S to k, the maximum number allowed in the SCS.
  – Use k = 1 to get a binary semaphore (hence the name binary).
– We only ask for:
  – A limit of k in the SCS (the analogue of mutual exclusion).
  – Progress: if a process enters P and fewer than k are in the SCS, some process will enter the SCS.

Counting Semaphores
– We do not ask for fairness, and we don't assume it (for the binary semaphores) either.

    binary semaphore q   -- initially open; protects NS
    binary semaphore r   -- initially closed; waiters block here
    integer NS           -- might be negative; keeps the value of S

    P(S) is
       P(q)
       NS--
       if NS < 0
          V(q)
          P(r)    -- the V(S) that wakes us hands q over to us
       V(q)

    V(S) is
       P(q)
       NS++
       if NS <= 0
          V(r)    -- wake one waiter; pass q to it, so no V(q) here
       else
          V(q)

Mutual Exclusion
– Now we try to do mutual exclusion without shared memory.
– Centralized approach
  – Pick one process as a coordinator (a mutual-exclusion server).
  – To get access to the critical section, send a message to the coordinator and await a reply.
  – When you leave the CS, send a message to the coordinator.
  – When the coordinator gets a message requesting the CS, it
    – Replies if the CS is free;
    – Otherwise enters the requester's name into a waiting queue.

Mutual Exclusion
– When the coordinator gets a message announcing departure from the CS, it
  – Removes the head entry from the list of waiters and replies to it.
– This is the simplest solution and perhaps the best.
– Distributed solution
  – When you want to get into the CS:
    – Send a request message to everyone (except yourself).
      – Include a timestamp (logical clock!).
    – Wait until you receive OK from everyone.
  – When you receive a request...

Mutual Exclusion
– When you receive a request:
  – If you are not in the CS and don't want to be, say OK.
  – If you are in the CS, put the requester's name on a list.
  – If you are not in the CS but want to be:
    – If your timestamp is lower, put the name on the list.
    – If your timestamp is higher, send OK.
– When you leave the CS, send OK to everyone on your list.
– Token Passing solution
  – Form a logical ring.
  – Pass a token around the ring.
  – When you have the token you may enter the CS (hold on to the token until you exit).

Mutual Exclusion
– Comparison
  – Centralized is best.
  – Distributed is of theoretical interest.
  – Token passing is good if the hardware is ring based (e.g., a token ring LAN).