A Simple Critical Section Protocol

There are N concurrent processes P1, …, PN that share some data. A process accessing the shared data is said to execute in its critical section. For simplicity, we assume the external section is one instruction. Each Pi executes the following code:

repeat for ever:
    external;
    critical section (cs);

Mutual exclusion requirement: at most one process may execute in its critical section at any given moment.

Idea: use a semaphore to coordinate the processes.
- the semaphore Sem can hold two values: 0 (red) and 1 (green)
- initially Sem is 1
- the semaphore access operations are request and release

When a process Pi wants to execute its cs, it requests the semaphore:
- if Sem is 1, then Pi starts executing the cs and Sem becomes 0;
- if Sem is 0, then Pi waits.
When Pi finishes executing the cs, it releases the semaphore: Sem becomes 1 and some waiting process may start executing its cs. More realistic semaphores (with integer values and waiting queues) will be considered later.
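This request/release discipline can be sketched directly in code. The following is an illustration only, not part of the original presentation; the class and method names simply mirror the text's terminology:

```python
import threading

class BinarySemaphore:
    """A sketch of the two-valued semaphore described in the text.
    Sem is 1 (green) initially; request blocks while Sem is 0 (red)."""

    def __init__(self):
        self._sem = 1                       # initially Sem is 1 (green)
        self._cond = threading.Condition()

    def request(self):
        with self._cond:
            while self._sem == 0:           # Sem is red: wait
                self._cond.wait()
            self._sem = 0                   # enter: Sem becomes red

    def release(self):
        with self._cond:
            self._sem = 1                   # leave: Sem becomes green
            self._cond.notify()             # wake one waiting process
```

The condition variable makes the "some waiting process may start" step concrete: notify wakes at most one waiter, which then re-checks Sem before proceeding.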

Each process Pi executes this simple mutual-exclusion protocol with one semaphore:

repeat for ever:
    0  external;
    1  request;
    2  critical section (cs);
    3  release;
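As an illustration (not part of the original), the four-line protocol can be simulated with threads, using Python's standard threading.Semaphore(1) in the role of Sem; a shared counter records how many threads are inside their critical section at once:

```python
import threading

N = 4            # number of simulated processes (an arbitrary choice)
ROUNDS = 500     # iterations of the "repeat for ever" loop per process

sem = threading.Semaphore(1)   # plays the role of Sem, initially 1
lock = threading.Lock()        # protects the bookkeeping counters only
in_cs = 0                      # processes currently inside their cs
max_in_cs = 0                  # worst case observed during the run

def process(i):
    global in_cs, max_in_cs
    for _ in range(ROUNDS):
        pass                   # line 0: external (modelled as a no-op)
        sem.acquire()          # line 1: request
        with lock:             # line 2: critical section begins
            in_cs += 1
            max_in_cs = max(max_in_cs, in_cs)
        with lock:             # critical section ends
            in_cs -= 1
        sem.release()          # line 3: release

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If the protocol is correct, max_in_cs stays at 1 no matter how the threads interleave; a run of such a simulation is evidence, not a proof, which is why the slides turn to a formal argument next.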

The goal of the paper is to prove that the protocol guarantees mutual exclusion. There are multiple ways to prove the protocol correct. Two possible approaches:
1. using global states and histories;
2. using events described by means of predicates and functions.
The paper advocates the second approach to proving concurrency protocols.

First Proof: Global States and Histories (the classical approach)

For each process Pi we consider its program counter variable PCi, with values in {0, 1, 2, 3} (the line Pi is about to execute). Together with the semaphore variable Sem, with values in {0, 1}, we thus have N+1 variables in the system: V = {PC1, …, PCN, Sem}. A global state is a function S with domain V such that each variable takes values in its corresponding type; S describes a possible instantaneous situation of all processes and the semaphore.

To model a possible history of the system we repeatedly apply one of the following steps:
1. Execution of line 0 by Pi: from S1 with S1(PCi) = 0 to S2 that differs from S1 only by S2(PCi) = 1.
2. Execution of line 1 by Pi: from S1 with S1(PCi) = 1 and S1(Sem) = 1 to S2 that differs from S1 only by S2(PCi) = 2 and S2(Sem) = 0.
3. Execution of line 2 by Pi: from S1 with S1(PCi) = 2 to either S2 = S1 or S2 that differs from S1 only by S2(PCi) = 3.
4. Execution of line 3 by Pi: from S1 with S1(PCi) = 3 and S1(Sem) = 0 to S2 that differs from S1 only by S2(PCi) = 0 and S2(Sem) = 1.
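For a fixed small N these steps define a finite transition system, so the reachable global states can be enumerated mechanically. The following sketch (an illustration, not part of the paper) explores every state reachable from the initial state for N = 2 and checks the mutual exclusion condition directly:

```python
N = 2   # two processes already suffice to exhibit a violation, if any

def successors(state):
    """One-step successors of a global state (PC_1, ..., PC_N, Sem),
    following rules 1-4; the stuttering option of rule 3 (S2 = S1)
    is omitted since it adds no new reachable states."""
    sem = state[-1]
    for i in range(N):
        pc = state[i]
        if pc == 0:                          # rule 1: external
            yield state[:i] + (1,) + state[i+1:-1] + (sem,)
        elif pc == 1 and sem == 1:           # rule 2: request
            yield state[:i] + (2,) + state[i+1:-1] + (0,)
        elif pc == 2:                        # rule 3: critical section
            yield state[:i] + (3,) + state[i+1:-1] + (sem,)
        elif pc == 3 and sem == 0:           # rule 4: release
            yield state[:i] + (0,) + state[i+1:-1] + (1,)

initial = (0,) * N + (1,)        # all PC_i = 0, Sem = 1
reachable, frontier = {initial}, [initial]
while frontier:
    s = frontier.pop()
    for t in successors(s):
        if t not in reachable:
            reachable.add(t)
            frontier.append(t)

# Mutual exclusion: no reachable state has two processes at line 2.
violations = [s for s in reachable if sum(pc == 2 for pc in s[:-1]) >= 2]
```

This exhaustive search works only for fixed N; the proof in the slides aims at an argument valid for every N, which is why it proceeds by induction rather than enumeration.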

The initial state S0 has S0(PCi) = 0 for all i and S0(Sem) = 1. A history is a sequence of states S0, S1, S2, … starting from the initial state, in which each state is obtained from the previous one by one of the steps above. The mutual exclusion condition is then written as: if S0, S1, S2, … is a history and k is any index, then it is not the case that Sk(PCi) = 2 and Sk(PCj) = 2 for different i and j. This can be shown with the aid of the following lemma, proven by induction on k:
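The lemma itself is cut off in this transcript. A standard invariant for this kind of protocol, stated here only as an assumption about what the lemma likely asserts, is that Sem plus the number of processes at lines 2 or 3 always equals 1. Its base case and inductive step can be checked mechanically for a small N:

```python
from itertools import product

N = 3   # a small instance; the invariant argument is uniform in N

def inv(pcs, sem):
    # Candidate invariant (an assumption, since the lemma is cut off):
    # Sem + #{i : PC_i in {2, 3}} == 1
    return sem + sum(pc in (2, 3) for pc in pcs) == 1

def steps(pcs, sem):
    """One-step successors according to rules 1-4 of the history model."""
    for i, pc in enumerate(pcs):
        nxt = list(pcs)
        if pc == 0:
            nxt[i] = 1
            yield tuple(nxt), sem
        elif pc == 1 and sem == 1:
            nxt[i] = 2
            yield tuple(nxt), 0
        elif pc == 2:
            nxt[i] = 3
            yield tuple(nxt), sem
        elif pc == 3 and sem == 0:
            nxt[i] = 0
            yield tuple(nxt), 1

# Base case: the initial state S0 satisfies the invariant.
base = inv((0,) * N, 1)

# Inductive step: every step from a state satisfying the invariant
# leads to a state that still satisfies it. Mutual exclusion follows:
# two processes at line 2 would make the sum at least 2.
preserved = all(inv(q, t)
                for p in product(range(4), repeat=N)
                for s in (0, 1) if inv(p, s)
                for q, t in steps(p, s))
```

Note that the inductive step is checked over all states satisfying the invariant, not just reachable ones, which is exactly the shape of a proof by induction on the history index k.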