COP 4600 Operating Systems Spring 2011

COP 4600 Operating Systems, Spring 2011
Dan C. Marinescu
Office: HEC 304
Office hours: Tu-Th 5:00 – 6:00 PM

Lecture 19 – Thursday, March 31, 2011

Last time:
- Conditions for thread coordination: safety, liveness, bounded waiting, fairness
- Critical sections; a solution to the critical-section problem
- Locks and before-or-after actions; hardware support for locks
- Deadlocks; signals; semaphores; monitors

Today:
- Solutions to HW5
- Thread coordination with a bounded buffer: WAIT/NOTIFY, AWAIT/ADVANCE, SEQUENCER/TICKET
- Scheduling algorithms

Next time:
- Multilevel memories

A solution to the critical-section problem (Peterson's algorithm)

Applies only to two threads Ti and Tj, with i, j ∈ {0, 1}, which share:
- an integer turn: if turn = i, it is Ti's turn to enter the critical section
- boolean flag[2]: if flag[i] = TRUE, Ti is ready to enter the critical section

To enter the critical section, thread Ti:
- sets flag[i] = TRUE
- sets turn = j

If both threads want to enter, turn ends up with the value i or j, and the corresponding thread enters the critical section. Ti enters the critical section (in other words, leaves the while loop) only if either:
- flag[j] = FALSE, meaning Tj is not ready to enter, or
- turn = i, meaning it is Ti's turn to enter.

The solution is correct:
- mutual exclusion is guaranteed
- progress (liveness) is ensured
- bounded waiting is met

However, this solution may fail on modern computer architectures, where plain load and store instructions are not guaranteed to be atomic and may be reordered.
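Not part of the original slide: a minimal C sketch of the two-thread protocol described above. The function names enter_region/leave_region and the use of C11 atomics are illustrative choices, not the course's code.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Shared state for two threads, i = 0 and j = 1. */
    static atomic_bool flag[2];   /* flag[i] == true: thread i wants to enter   */
    static atomic_int  turn;      /* whose turn it is when both want to enter   */

    void enter_region(int i)      /* i is 0 or 1 */
    {
        int j = 1 - i;
        atomic_store(&flag[i], true);        /* announce intent                   */
        atomic_store(&turn, j);              /* give priority to the other thread */
        /* Busy-wait while the other thread wants in and it is its turn. */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                /* spin */
    }

    void leave_region(int i)
    {
        atomic_store(&flag[i], false);       /* exit protocol */
    }

With the default sequentially consistent atomics the stores and loads cannot be reordered, which sidesteps the problem noted above; with plain non-atomic variables the compiler or the hardware may reorder them and mutual exclusion can be violated.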

Problem 5.3
Virtual address space: the range of memory addresses a thread/process is allowed to access. Each process runs in its own virtual address space. The problem requires a basic understanding of how virtual addresses are translated into real (physical) addresses, as discussed in Lecture 15.

Problem 5.5
One-writer rule: coordination is easier if each variable has only one writer.

Bounded buffer
Events and signals can be used for thread coordination. The basic strategy:
- Thread A issues WAIT(event) and blocks until the event occurs.
- Thread B issues NOTIFY(event) to signal that the event has occurred.
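As an illustration only (not the slide's code), here is how a sender and a receiver might use these primitives on a shared bounded buffer; the events notempty/notfull, the buffer layout, and the function names are assumptions made for the sketch.

    /* Coordination primitives from the lecture, assumed provided elsewhere. */
    typedef int event_t;
    extern void WAIT(event_t e);
    extern void NOTIFY(event_t e);
    extern event_t notempty, notfull;

    #define N 10                      /* buffer capacity (assumed) */

    struct bb {
        int  buf[N];
        long in, out;                 /* number of items sent / received */
    };

    /* Sender (thread A in the strategy above, reversed roles are symmetric) */
    void bb_send(struct bb *b, int msg)
    {
        while (b->in - b->out == N)   /* buffer full                      */
            WAIT(notfull);            /* block until a slot frees up      */
        b->buf[b->in % N] = msg;
        b->in++;
        NOTIFY(notempty);             /* wake a waiting receiver, if any  */
    }

    /* Receiver */
    int bb_receive(struct bb *b)
    {
        while (b->in == b->out)       /* buffer empty                     */
            WAIT(notempty);           /* block until a message arrives    */
        int msg = b->buf[b->out % N];
        b->out++;
        NOTIFY(notfull);              /* wake a waiting sender, if any    */
        return msg;
    }

Note the window between testing the buffer state and calling WAIT: if the NOTIFY arrives inside that window it is lost, which is exactly the problem discussed below.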

A NOTIFY could be sent before the corresponding WAIT, and this causes problems: the NOTIFY should always be sent after the WAIT. If the sender and the receiver run on two different processors, there can be a race condition on the notempty event. There is also a tension between modularity and locks. Several solutions are possible: AWAIT/ADVANCE, semaphores, etc.

Solution
We want to prevent the unbounded wait that occurs when the NOTIFY(event) is sent before the WAIT(event). The solution eliminates the need for NOTIFY(event), but it requires:
- a shared variable for each event, kept in kernel space
- adding to the state of a thread the name of each event it waits on and a count for that event
- two new system calls, AWAIT and ADVANCE.

AWAIT - ADVANCE
The mechanism adds a WAITING state and two before-or-after actions that take a RUNNING thread into the WAITING state and back to the RUNNABLE state.

eventcount: a variable with an integer value, shared between the threads and the thread manager; eventcounts are like events, but they carry a value. The entry for a thread in the thread table must also include an event name and the value (count) of that event the thread is waiting for. A thread in the WAITING state waits for a particular value of the eventcount.

AWAIT(eventcount, value):
- If eventcount > value, control is returned to the thread calling AWAIT and this thread continues execution.
- If eventcount ≤ value, the state of the thread calling AWAIT is changed to WAITING and the thread is suspended.

ADVANCE(eventcount):
- increments the eventcount by one, then
- searches the thread_table for threads waiting on this eventcount;
- if it finds such a thread and the eventcount now exceeds the value the thread is waiting for, the state of that thread is changed to RUNNABLE.

Implementation of AWAIT and ADVANCE
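The implementation appeared on the slide as an image and is not in the transcript. The following is a rough sketch of what AWAIT and ADVANCE could look like inside the thread manager, following the description on the previous slide; the thread-table layout, the lock routines, and yield() are assumptions, not the course's actual code.

    #define NTHREADS 64

    enum state { RUNNING, RUNNABLE, WAITING };

    struct thread {
        enum state state;
        long *wait_eventcount;        /* eventcount this thread waits on, or NULL */
        long  wait_value;             /* value passed to AWAIT                    */
    };

    struct thread thread_table[NTHREADS];
    extern int  current;              /* index of the running thread              */
    extern void acquire_thread_lock(void);
    extern void release_thread_lock(void);
    extern void yield(void);          /* give up the processor                    */

    void AWAIT(long *eventcount, long value)
    {
        acquire_thread_lock();                     /* before-or-after action        */
        if (*eventcount <= value) {
            thread_table[current].state = WAITING;
            thread_table[current].wait_eventcount = eventcount;
            thread_table[current].wait_value = value;
            release_thread_lock();
            yield();                               /* suspended until made RUNNABLE */
        } else {
            release_thread_lock();                 /* eventcount already past value */
        }
    }

    void ADVANCE(long *eventcount)
    {
        acquire_thread_lock();                     /* before-or-after action */
        (*eventcount)++;
        for (int i = 0; i < NTHREADS; i++) {
            if (thread_table[i].state == WAITING &&
                thread_table[i].wait_eventcount == eventcount &&
                *eventcount > thread_table[i].wait_value)
                thread_table[i].state = RUNNABLE;  /* wake the waiter */
        }
        release_thread_lock();
    }

Because an eventcount only increases, a woken thread does not have to re-test the condition: once the eventcount exceeds the value it waited for, it stays that way.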

Solution for a single sender and a single receiver
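The code on this slide is not in the transcript either; the following is a sketch of a single-sender/single-receiver bounded buffer built on AWAIT and ADVANCE as specified above. The buffer size N, the struct layout, and the eventcount names in and out are assumptions.

    #define N 10                        /* buffer capacity (assumed) */

    struct bounded_buffer {
        int  message[N];
        long in;                        /* eventcount: number of messages sent     */
        long out;                       /* eventcount: number of messages received */
    };

    /* Uses the AWAIT/ADVANCE sketched above. */

    /* Single sender: waits until there is room, i.e., until out > in - N. */
    void SEND(struct bounded_buffer *b, int msg)
    {
        AWAIT(&b->out, b->in - N);      /* blocks while the buffer is full  */
        b->message[b->in % N] = msg;
        ADVANCE(&b->in);                /* may wake the waiting receiver    */
    }

    /* Single receiver: waits until there is at least one unread message. */
    int RECEIVE(struct bounded_buffer *b)
    {
        AWAIT(&b->in, b->out);          /* blocks while the buffer is empty */
        int msg = b->message[b->out % N];
        ADVANCE(&b->out);               /* may wake the waiting sender      */
        return msg;
    }

Because AWAIT returns only when the eventcount strictly exceeds the value passed to it, the receiver proceeds only when in > out (something to read) and the sender only when out > in - N (at least one free slot).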

Multiple senders: the sequencer
sequencer: a shared variable supporting thread sequence coordination; it allows threads to be ordered and is manipulated using two before-or-after actions:
- TICKET(sequencer) returns a non-negative value that increases by one on each call. Two concurrent threads calling TICKET on the same sequencer receive different values, depending on the timing of the calls; the one calling first receives the smaller value.
- READ(sequencer) returns the current value of the sequencer.

Multiple-sender solution; only SEND is modified
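The code for this slide is also missing from the transcript. Below is a sketch of a multiple-sender SEND built with a ticket, adapted to the AWAIT semantics given above; the sequencer field name sender, the TICKET prototype, and the use of t - 1 as the wait value are my assumptions.

    /* Assumes struct bounded_buffer from the previous sketch, extended with a
       field "long sender" used as a sequencer, plus the AWAIT/ADVANCE sketches. */
    extern long TICKET(long *sequencer);   /* before-or-after action: returns 0, 1, 2, ... */

    /* Replaces the single-sender SEND; RECEIVE is unchanged. */
    void SEND(struct bounded_buffer *b, int msg)
    {
        long t = TICKET(&b->sender);   /* order the senders                             */
        AWAIT(&b->in, t - 1);          /* wait until the t earlier SENDs have completed */
        AWAIT(&b->out, b->in - N);     /* wait until there is room in the buffer        */
        b->message[b->in % N] = msg;
        ADVANCE(&b->in);               /* wakes the receiver and the next queued sender */
    }

The ticket serializes the senders: the holder of ticket t may fill slot t only after the previous t messages have been placed in the buffer, so two senders never write the same slot.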

Scheduling algorithms
Scheduling: assigning jobs to machines. A schedule S is a plan for how to process N jobs using one or more machines. Scheduling in the general case is an NP-complete problem.

A job j, 1 ≤ j ≤ N, is characterized by:
- C_j^S: completion time of job j under schedule S
- p_j: processing time
- r_j: release time, the time when the job becomes available for processing
- d_j: due time, the time by which the job should be completed
- u_j = 0 if C_j^S ≤ d_j and u_j = 1 otherwise
- L_j = C_j^S - d_j: lateness

A schedule S is characterized by:
- the makespan Cmax = max over j of C_j^S
- the average completion time, (1/N) times the sum of the C_j^S.
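A small made-up example (my numbers, for illustration only): two jobs on one machine with processing times p_1 = 3 and p_2 = 5, both released at r_1 = r_2 = 0, due at d_1 = 4 and d_2 = 7. Running job 1 first gives C_1 = 3 and C_2 = 8, so the makespan is Cmax = 8, the average completion time is (3 + 8)/2 = 5.5, the latenesses are L_1 = 3 - 4 = -1 and L_2 = 8 - 7 = 1, and u_1 = 0, u_2 = 1. Running job 2 first gives C_2 = 5 and C_1 = 8: the same makespan of 8 but a worse average completion time of 6.5, illustrating why the order matters even on a single machine.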