Synchronicity Introduction to Operating Systems: Module 5.

Process (and thread) cooperation

Why processes cooperate:
- Modularity: breaking a system into several subsystems. Example: an interrupt handler and the corresponding device driver, which need to communicate.
- Convenience: users may want several processes to share data.
- Speedup: a single program is run as several processes sharing information.

Communication abstraction

- Developed to reason about communication.
- Producers and consumers:
  - A producer places a piece of information into a buffer.
  - A consumer uses it, removing it from the buffer.
- Typical of system "principles": developed to deal with general phenomena and to ease arguing correctness formally.

Bounded buffer

- Given a shared array of items, how do we let one process write to the buffer without interfering with a second process that reads from it?
  - Must generalize to n readers and m writers.
- Implementation given below:
  - buffer: an array of size N
  - buffer pointers nextin and nextout
  - empty buffer: nextin == nextout
  - capacity of the buffer: N-1 items
- Question: how could we store N items?

A solution for bounded buffers

Shared (initialization):

    int nextin = 0, nextout = 0;
    itemType buffer[n];

Producer:

    repeat {
        produce an item in temp;
        while ((nextin + 1) mod n == nextout);   // busy-wait while the buffer is full
        buffer[nextin] = temp;
        nextin = (nextin + 1) mod n;
    } until false;

A solution for bounded buffers

Consumer:

    repeat {
        while (nextin == nextout);               // busy-wait while the buffer is empty
        tempout = buffer[nextout];
        nextout = (nextout + 1) mod n;
        consume the item in tempout;
    } until false;

Using counters

Shared (initialization):

    int counter = 0;
    int nextin = 0, nextout = 0;

Producer:

    repeat {
        produce an item in tempin;
        while (counter == n);                    // busy-wait while the buffer is full
        buffer[nextin] = tempin;
        nextin = (nextin + 1) mod n;
        counter = counter + 1;
    } until false;

Using counters

Consumer:

    repeat {
        while (counter == 0);                    // busy-wait while the buffer is empty
        tempout = buffer[nextout];
        nextout = (nextout + 1) mod n;
        counter = counter - 1;
        consume the item in tempout;
    } until false;

What is wrong?

- Note the variable counter and the statements

      counter = counter + 1;
      counter = counter - 1;

- The producer and consumer can execute asynchronously under multiprogramming, and either can be interrupted while executing this code.
- These are two independent code streams, so their instructions can be interleaved.

Increment/decrement

Each of increment and decrement is actually implemented as a series of machine instructions on the underlying hardware platform (processor).

Producer increments:

    register1 := counter
    register1 := register1 + 1
    counter   := register1

Consumer decrements:

    register2 := counter
    register2 := register2 - 1
    counter   := register2

An interleaving

- Consider counter = 5; a producer runs, followed by a consumer.
- We would expect counter = 5 at the end.
- However, consider the interleaving:

      P1: register1 := counter          // register1 = 5
      P2: register1 := register1 + 1    // register1 = 6
      C1: register2 := counter          // register2 = 5
      C2: register2 := register2 - 1    // register2 = 4
      P3: counter := register1          // counter = 6
      C3: counter := register2          // counter = 4

- counter has a value of 6 after P3 but a value of 4 after C3.

The problem: race condition

- The problem occurs because increment and decrement are not atomic.
- Code whose outcome depends on the order in which such operations interleave is said to contain a race condition.

Atomic operations

- Two or more operations are executed atomically if the result of their execution is equivalent to that of some serial order of execution.
- Operations that are always executed atomically are called atomic, for example:
  - reading or writing a word from/to memory
  - bit-wise OR of the contents of two registers

Solution: mutual exclusion

- At a high level, the producer and consumer processes need to synchronize so that they do not access shared variables at the same time.
- This is called mutual exclusion: the shared, critical variables can be accessed by only one process at a time.
- Access must be serialized, even if the processes attempt concurrent access as in the previous example.

Critical sections

- The portion of a program that requires exclusive access to some shared resource is called a critical section.
- Critical sections are written as:
  - Entry section: set up mutual exclusion ("lock the door")
  - Critical section code: work with the shared resources
  - Exit section: "unlock the door"

Critical section implementation

- The entry section controls access to make sure that no more than one process Pi gets to access the critical section at any given time; it acts as a guard.
- The exit section does bookkeeping to make sure that other waiting processes know that Pi has exited.
- We will see two examples (flags and turns) of realizing mutual exclusion on critical sections between a pair of processes.

Mutual exclusion via flags

The algorithm uses a Boolean array flag.

Shared data:

    flag[0] = flag[1] = false;

Pi executes:

    flag[i] = true;        // declare intent to enter
    while (flag[j]);       // wait while Pj is interested
    CRITICAL SECTION
    flag[i] = false;

and Pj executes the analogous code.

Mutual exclusion via turns

The algorithm uses an integer turn.

Shared data:

    turn = 0;

Pi executes:

    while (turn == j);     // wait while it is Pj's turn
    CRITICAL SECTION
    turn = j;              // hand the turn to Pj

and Pj executes the analogous code.

Criteria for correctness

1. Only one process at a time is allowed in the critical section for a resource.
2. A process that halts in its non-critical section must do so without interfering with other processes.
3. No deadlock or starvation.

Criteria for correctness

4. A process must not be delayed access to a critical section when there is no other process using it.
5. No assumptions are made about relative process speeds or the number of processes.
6. A process remains inside its critical section for a finite time only.

Turn counter & flags

- While providing mutual exclusion, neither approach guarantees correctness.
- Turn counter: if one process terminates and never enters its critical section again, the other can never enter; violates [2].
- Flags: both flags could be set to true, leaving both processes waiting forever; violates [4].

Peterson's algorithm

- Combines the previous two ideas (flags and turns), preserving all correctness conditions.

Pi executes:

    flag[i] = true;                     // declare intent to enter
    turn = j;                           // give Pj priority
    while (flag[j] and (turn == j));    // wait
    CRITICAL SECTION
    flag[i] = false;                    // exit

Bakery algorithm

- Each process asks an agent for a ticket and receives an integer-valued ticket.
- A process then waits until all processes with smaller ticket values have finished going through the critical region.
- There can be ties, in which case PIDs break them: the process with the smaller PID goes first.
- This leads to an FCFS prioritizing strategy.
- The algorithm is akin to taking a ticket and waiting for your turn at a bakery, hence the name bakery algorithm.

Getting a ticket

- We use the function max to get the next ticket: 1 + max(other tickets).
- Breaking ties: lexicographic ordering of pairs of integers.
  - Given integers a, b, c, d, the pair (a, b) < (c, d) if and only if (a < c) or (a = c and b < d).

Implementing the bakery algorithm

- We use two data structures, arrays of size n:
  - choosing: a Boolean array, initialized to false
  - ticket: an array of integers, initialized to zero
- Process Pi executes:

      get_ticket(i)
      entry-section
      critical section
      exit-section: ticket[i] = 0    // 0 denotes "no ticket"

Getting a ticket

- Process Pi declares that it wants to choose a ticket by setting choosing[i] to true.
- It assigns ticket[i] a value one greater than the maximum of the tickets of all the processes.
- Pi then resets choosing[i] to false.

Entry-section

- Pi checks whether any Pj among the remaining n-1 processes is still choosing a ticket.
  - If yes, wait: Pj might have requested a ticket concurrently and might end up with the same ticket value as Pi's; prepare for the worst case.
  - If no, proceed.
- Check the remaining processes for a Pj such that ticket[j] is non-zero and (ticket[j], j) < (ticket[i], i).
  - Wait until this condition is false.

Hardware support

- Primitives: atomic operations provided as hardware instructions.
- Criteria for choosing primitives:
  - Universality: being able to build arbitrary functionality (e.g., mutual exclusion) from simpler units.
  - Minimizing scope: we don't want to disable interrupts for entire critical sections.

Classical hardware primitives

Test-and-set (executed atomically by the hardware):

    bool testNset(bool& flag) {
        bool temp = flag;   // read the old value
        flag = true;        // set the flag
        return temp;        // false means the caller acquired it
    }

Swap (executed atomically by the hardware):

    void swap(int& source, int& target) {
        int temp = target;
        target = source;
        source = temp;
    }

Critical section solutions

- So far: the bakery algorithm and hardware primitives.
- Drawbacks:
  - They do not solve the general synchronization problem: the critical section is handled, but what about rendezvous or limited-capacity sections? What about multiple processors?
  - They involve busy-waiting: a spinlock wastes CPU cycles.
- We are looking for a more easily coded solution.