CH7 discussion-review Mahmoud Alhabbash

Q1 What is a race condition? How can we prevent it? – A race condition is a situation in which several processes or threads access and manipulate shared data concurrently, and the final value of the data depends on the order in which the processes execute, i.e., which process finishes last. – To prevent race conditions, concurrent processes must be synchronized, or the operations on the shared data must be made atomic.
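
A minimal C sketch of the race described above (assuming POSIX threads; the counter name and loop count are illustrative): two threads increment a shared variable without synchronization, so updates are lost and the final value is usually less than 2,000,000.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared data */

static void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* unsynchronized read-modify-write: a data race */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);   /* often smaller */
    return 0;
}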

Q2 What is an atomic operation? Why is it important? – An atomic operation is one that executes as a single indivisible step and cannot be interrupted partway through. Without atomicity, it is nearly impossible to maintain a consistent state for shared variables accessed by more than one process or thread.
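
A hedged follow-up sketch in C11 (assuming a compiler with <stdatomic.h>): making the increment atomic removes the lost-update problem from the previous example.

#include <stdatomic.h>

static atomic_long counter = 0;

void increment_atomic(void) {
    atomic_fetch_add(&counter, 1);       /* the read-modify-write happens as one indivisible step */
}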

Critical Section
Different processes have their own critical sections, and the tasks inside the critical sections are not the same.

Process A (critical section):
    Copy value from shared memory;
    Plus 10;
    Update shared memory;

Process B (critical section):
    Copy value from shared memory;
    Minus 10;
    Update shared memory;
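
As a hedged illustration (jumping ahead to locking primitives covered later, and assuming POSIX threads; the name shared_lock is made up for this sketch), each process/thread wraps its own, different critical section with the same shared lock:

#include <pthread.h>

static int shared_value = 0;
static pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

void process_a(void) {
    pthread_mutex_lock(&shared_lock);    /* enter critical section of A */
    int v = shared_value;                /* copy value from shared memory */
    v += 10;                             /* plus 10 */
    shared_value = v;                    /* update shared memory */
    pthread_mutex_unlock(&shared_lock);  /* exit critical section */
}

void process_b(void) {
    pthread_mutex_lock(&shared_lock);    /* enter critical section of B */
    int v = shared_value;                /* copy value from shared memory */
    v -= 10;                             /* minus 10 */
    shared_value = v;                    /* update shared memory */
    pthread_mutex_unlock(&shared_lock);  /* exit critical section */
}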

Q3 What are the three conditions that a solution to the critical-section problem must guarantee?
– Mutual exclusion: if a process is accessing a shared object inside its critical section, all other processes must be excluded from accessing that same shared object.
– Progress: if no process is in its critical section and one or more processes wish to enter theirs, then only the processes not in their remainder sections may take part in deciding which enters next, and that decision cannot be postponed indefinitely.
– Bounded waiting: after a process has made a request to enter its critical section, there is a bound on the number of times other processes may enter their critical sections before the request is granted (so no process needs to wait forever).

Ways to do Mutual Exclusion
Hardware solution
– Disabling interrupts: disables context switching inside the critical section.
– Correct, but not an attractive solution:
  - It is dangerous to let user programs disable/enable interrupts.
  - It does not work on multiprocessor systems.
  - It may disable the system clock.
[Figure: program code with interrupts disabled at the start of the critical section and re-enabled at its end]

Ways to do Mutual Exclusion
Software solutions
– Peterson's solution: used here as an example to illustrate the idea; restricted to two processes
– Bakery algorithm: for multiple processes
– Semaphores
– Monitors

Peterson's solution (two-process solution)
Assume that the LOAD and STORE instructions are atomic, that is, they cannot be interrupted.
The two processes share two variables:
– int turn;
– boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section. The flag array is used to indicate whether a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.

Process Pi:
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j);
    /* CRITICAL SECTION */
    flag[i] = FALSE;
    /* REMAINDER SECTION */
} while (TRUE);
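
The code above assumes that loads and stores are atomic and not reordered; on modern compilers and CPUs that assumption must be made explicit. A hedged, runnable C sketch using C11 sequentially consistent atomics (the names peterson_lock/peterson_unlock and the test harness are illustrative, not from the slides):

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];              /* flag[i]: process i is ready */
static atomic_int  turn;                 /* whose turn it is to enter */
static long shared_counter = 0;          /* protected by the Peterson lock */

static void peterson_lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);        /* I am ready */
    atomic_store(&turn, j);              /* politely give the other process priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                /* busy wait */
}

static void peterson_unlock(int i) {
    atomic_store(&flag[i], false);
}

static void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        peterson_lock(i);
        shared_counter++;                /* critical section */
        peterson_unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("shared_counter = %ld (expected 200000)\n", shared_counter);
    return 0;
}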

Proof of Peterson's solution: mutual exclusion is preserved
– Suppose both processes were in their critical sections at the same time. Then flag[0] = flag[1] = TRUE, satisfying the first condition of each while loop.
– But turn can only be 0 or 1, so the while condition of exactly one process holds and that process must wait.
– Therefore it is impossible for both processes to be in their critical sections at the same time.

Process 0:
do {
    flag[0] = TRUE;
    turn = 1;
    while (flag[1] && turn == 1);
    /* CRITICAL SECTION */
    flag[0] = FALSE;
    /* REMAINDER SECTION */
} while (TRUE);

Process 1:
do {
    flag[1] = TRUE;
    turn = 0;
    while (flag[0] && turn == 0);
    /* CRITICAL SECTION */
    flag[1] = FALSE;
    /* REMAINDER SECTION */
} while (TRUE);

Proof of Peterson's solution: progress requirement
– If no process is in its critical section and one or more processes wish to enter theirs, the processes not in their remainder sections must be able to decide who proceeds next.
– The variable turn decides which process enters the critical section next.
– A process that is in its remainder section never sets turn, so it cannot take part in (or block) that decision.

Process 0:
do {
    flag[0] = TRUE;
    turn = 1;
    while (flag[1] && turn == 1);
    /* CRITICAL SECTION */
    flag[0] = FALSE;
    /* REMAINDER SECTION */
} while (TRUE);

Proof of Peterson's solution: bounded-waiting requirement
– After a process has made a request to enter its critical section, the other process can enter its own critical section at most once (in Peterson's solution) before the requesting process gets in.
– Therefore no process needs to wait forever.

Process 0:
do {
    flag[0] = TRUE;
    turn = 1;
    while (flag[1] && turn == 1);
    /* CRITICAL SECTION */
    flag[0] = FALSE;
    /* REMAINDER SECTION */
} while (TRUE);

Process 1:
do {
    flag[1] = TRUE;
    turn = 0;
    while (flag[0] && turn == 0);
    /* CRITICAL SECTION */
    flag[1] = FALSE;
    /* REMAINDER SECTION */
} while (TRUE);

Question 7.4 (Dekker's Algorithm)
P0 and P1 share the following variables:
– boolean flag[2];   /* initially false */
– int turn;
The structure of process Pi (i == 0 or 1), with Pj (j == 1 or 0) being the other process, is the following:

while (true) {
    flag[i] = true;
    while (flag[j]) {
        if (turn == j) {
            flag[i] = false;
            while (turn == j)
                ;                /* busy wait until it is Pi's turn */
            flag[i] = true;
        }
    }
    critical();                  /* critical section */
    turn = j;
    flag[i] = false;
    remainder();                 /* remainder section */
}

Ans 7.4: Mutual exclusion. Mutual exclusion is ensured through the use of the flag and turn variables. If both processes set their flags to true, only one will succeed, namely the process whose turn it is. The waiting process can enter its critical section only when the other process updates the value of turn.

Ans 7.4: Progress. Progress is provided, again through the flag and turn variables. If a process wishes to access its critical section, it can set its flag variable to true and enter its critical section. It sets turn to the value of the other process only upon exiting its critical section.

Ans 7.4: Bounded waiting. Bounded waiting is preserved through the use of the turn variable. Assume two processes wish to enter their respective critical sections. They both set their flags to true; however, only the process whose turn it is can proceed, while the other waits. If bounded waiting were not preserved, it would be possible for the waiting process to wait indefinitely while the first process repeatedly entered and exited its critical section. However, Dekker's algorithm has a process set the value of turn to the other process upon exiting, thereby ensuring that the other process will enter its critical section next.

Question 7.5 (Eisenberg and McGuire Algorithm)

Q5 What is the meaning of the term busy waiting?
– A process is busy waiting when it waits for a condition to be satisfied in a tight loop without relinquishing the processor.
– Alternatively, a process could wait by relinquishing the processor: it blocks on a condition (e.g., I/O, a semaphore) and is awakened at some appropriate time in the future.
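
A minimal sketch of busy waiting in C (the flag name ready is illustrative): the loop keeps the processor occupied re-checking the flag until another thread sets it.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool ready = false;

void spin_until_ready(void) {
    while (!atomic_load(&ready))
        ;                                /* busy wait: burns CPU cycles doing nothing */
}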

Q6 Can busy waiting be avoided altogether? Explain your answer.
– Busy waiting can be avoided, but doing so incurs its own overhead: putting the process to sleep and then waking it up again when the appropriate program state is reached.
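
For contrast, a hedged sketch of the blocking alternative using POSIX semaphores (assuming <semaphore.h>; the names are illustrative): sem_wait suspends the caller instead of spinning, and sem_post wakes it up later.

#include <semaphore.h>

static sem_t ready_sem;                  /* assume sem_init(&ready_sem, 0, 0) was called at startup */

void wait_until_ready(void) {
    sem_wait(&ready_sem);                /* blocks; the scheduler can run other work meanwhile */
}

void mark_ready(void) {
    sem_post(&ready_sem);                /* wakes one blocked waiter */
}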

Q7 Semaphores vs. monitors. The signal() operation is used with both semaphores and monitors. Explain the key difference in its runtime behavior in the two cases. (Hint: consider how this affects the wait() operation in another process.)
– With semaphores, every signal() results in a corresponding increment of the semaphore value, even if there are no processes waiting. A later wait() operation can therefore succeed immediately because of the earlier increment.
– With monitors, if signal() is performed and there are no waiting processes, the signal is simply ignored and the system does not remember that it took place. If a subsequent wait() operation is performed, the corresponding thread simply blocks.
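
A short C sketch of the contrast, using POSIX semaphores versus condition variables (which behave like monitor signals in this respect); the setup and names are illustrative:

#include <semaphore.h>
#include <pthread.h>

static sem_t sem;                        /* assume sem_init(&sem, 0, 0) was called */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int condition = 0;                /* the state the condition variable describes */

void semaphore_style(void) {
    sem_post(&sem);                      /* remembered: the semaphore value becomes 1 */
    sem_wait(&sem);                      /* succeeds immediately thanks to the earlier post */
}

void monitor_style_signaler(void) {
    pthread_mutex_lock(&m);
    condition = 1;
    pthread_cond_signal(&cv);            /* if nobody is waiting, the signal itself is forgotten */
    pthread_mutex_unlock(&m);
}

void monitor_style_waiter(void) {
    pthread_mutex_lock(&m);
    while (!condition)                   /* must re-check shared state: waits do not "count" past signals */
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
}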

Q8 Find the errors in the following bounded-buffer producer/consumer code (the corrections are noted in the comments).

semaphore mutex, empty, full;
mutex = 1;
empty = 0;            // Should be empty = N
full  = N;            // Should be full = 0

Producer:
do {
    ...
    // Produce an item in nextp
    ...
    wait(mutex);      // Order should be switched:
    wait(empty);      // wait(empty) must come before wait(mutex)
    ...
    // Add nextp to the buffer
    ...
    signal(mutex);
    signal(full);
} while (true);

Consumer:
do {
    wait(mutex);      // Order should be switched:
    wait(full);       // wait(full) must come before wait(mutex)
    ...
    // Remove an item from the buffer into nextc
    ...
    signal(mutex);
    signal(empty);
    ...
    // Consume the item in nextc
    ...
} while (true);
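
For reference, a hedged sketch of the corrected bounded-buffer solution with POSIX semaphores (buffer layout and names are illustrative):

#include <semaphore.h>

#define N 10                             /* buffer capacity */

static int buffer[N];
static int in = 0, out = 0;

static sem_t mutex;                      /* assume sem_init(&mutex, 0, 1) */
static sem_t empty;                      /* assume sem_init(&empty, 0, N): empty slots */
static sem_t full;                       /* assume sem_init(&full,  0, 0): filled slots */

void producer(int item) {
    sem_wait(&empty);                    /* reserve an empty slot first ... */
    sem_wait(&mutex);                    /* ... then lock the buffer */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);                     /* one more filled slot */
}

int consumer(void) {
    sem_wait(&full);                     /* wait for a filled slot first ... */
    sem_wait(&mutex);                    /* ... then lock the buffer */
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);                    /* one more empty slot */
    return item;
}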

Q9 Readers-writers

Writer:
do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (true);

Reader:
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (true);

1) What is the purpose of the semaphore "wrt"?
   To guarantee mutual exclusion for the critical section.
2) What is the purpose of the semaphore "mutex"?
   To guarantee mutual exclusion when updating the shared variable readcount.
3) Suppose a writer process is inside its critical section, while another writer and n readers are waiting outside their critical sections. Which semaphores are they waiting on, respectively?
   The waiting writer is waiting on wrt, the first reader is waiting on wrt, and the other n-1 readers are waiting on mutex.
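
A hedged, compilable C sketch of the same (first) readers-writers solution with POSIX semaphores; the placeholder comments stand in for the actual reading and writing:

#include <semaphore.h>

static sem_t wrt;                        /* assume sem_init(&wrt,   0, 1) */
static sem_t mutex;                      /* assume sem_init(&mutex, 0, 1) */
static int readcount = 0;

void writer(void) {
    sem_wait(&wrt);
    /* writing is performed */
    sem_post(&wrt);
}

void reader(void) {
    sem_wait(&mutex);
    readcount++;
    if (readcount == 1)                  /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* reading is performed */

    sem_wait(&mutex);
    readcount--;
    if (readcount == 0)                  /* last reader lets writers back in */
        sem_post(&wrt);
    sem_post(&mutex);
}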