CS510 Operating System Foundations


CS510 Operating System Foundations Jonathan Walpole

Monitor Semantics, Message Passing, & Concurrency Review

Overview of Monitor Structure (recap of the structure diagram)
- The monitor's shared data
- A monitor entry queue: the list of threads waiting to enter the monitor
- Condition variables (e.g., x and y) -- these are also shared monitor variables! Each condition variable has an associated list of waiting threads
- "Entry" methods: can be called from outside the monitor; only one is active at any moment
- Local methods
- Initialization code

Compiler Support for Monitors
If monitors are part of the programming language, the compiler can generate the monitor lock and unlock code automatically:
- the programmer may simply define the type of a data item to be monitor (or synchronized)
- the monitor mutex to be released in wait is identified implicitly by the condition variable's monitor
If monitors are unknown to the compiler (as in Blitz), the programmer writes the monitor mutex code:
- wait takes the monitor mutex as a parameter

Producer-Consumer with Monitors

process Producer
begin
  loop
    <produce char "c">
    BoundedBuffer.deposit(c)
  end loop
end Producer

monitor: BoundedBuffer
  var buffer : ... ;
      nextIn, nextOut : ... ;
  entry deposit(c: char)
  begin
    ...
  end
  entry remove(var c: char)
end BoundedBuffer

process Consumer
begin
  loop
    BoundedBuffer.remove(c)
    <consume char "c">
  end loop
end Consumer

Use of Condition Variables

monitor: BoundedBuffer
  var buffer    : array[0..n-1] of char
      nextIn, nextOut : 0..n-1 := 0
      fullCount : 0..n := 0
      notEmpty, notFull : condition

  entry deposit(c: char)
  begin
    if (fullCount = n) then
      wait(notFull)
    end if
    buffer[nextIn] := c
    nextIn := (nextIn+1) mod n
    fullCount := fullCount+1
    signal(notEmpty)
  end deposit

  entry remove(var c: char)
  begin
    if (fullCount = 0) then
      wait(notEmpty)
    end if
    c := buffer[nextOut]
    nextOut := (nextOut+1) mod n
    fullCount := fullCount-1
    signal(notFull)
  end remove
end BoundedBuffer

Condition Variables vs Semaphores
- It is OK to wait on a condition variable while holding the monitor's mutex, because the wait call atomically releases the mutex before sleeping and acquires it again after waking, before returning
- It is not OK to down on a semaphore while holding a mutex that is needed by the thread that will issue the corresponding up!
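
To make the contrast concrete, here is a minimal C/pthreads sketch (not from the slides; the buffer is reduced to a counter and the names not_full, space, deposit_with_condvar, and deposit_with_semaphore_broken are made up). pthread_cond_wait atomically releases the mutex while the caller sleeps, which is exactly what makes the first version safe and the second one a deadlock:

    #include <pthread.h>
    #include <semaphore.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full = PTHREAD_COND_INITIALIZER;
    static sem_t space;       /* assumed initialized elsewhere: sem_init(&space, 0, N) */
    static int count = 0;
    enum { N = 10 };          /* buffer capacity for the sketch */

    /* OK: pthread_cond_wait releases m atomically while this thread sleeps,
       so a consumer can enter the critical section and signal not_full.    */
    void deposit_with_condvar(void)
    {
        pthread_mutex_lock(&m);
        while (count == N)
            pthread_cond_wait(&not_full, &m);
        count++;                       /* ... deposit the item here ... */
        pthread_mutex_unlock(&m);
    }

    /* NOT OK: sem_wait does not release m, so the consumer that must acquire m
       to remove an item (and then sem_post) can never run -- deadlock.        */
    void deposit_with_semaphore_broken(void)
    {
        pthread_mutex_lock(&m);
        sem_wait(&space);              /* blocks while still holding m */
        count++;
        pthread_mutex_unlock(&m);
    }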

Other Observations
In this example, monitor types are part of the programming language. The compiler is enforcing mutual exclusion among accesses to a monitor type:
- the monitor mutex lock and unlock operations are not visible in the source code
- the monitor lock is not passed as a parameter to wait

Condition Variables in Blitz
- Monitors are not known to the KPL compiler in Blitz!
- The Condition class is used to implement monitors:
    Condition.wait (mutex)
    Condition.signal (mutex)
    Condition.broadcast (mutex)
- The mutex must be passed to the condition methods to indicate which monitor the condition is in
- Mutex lock/unlock code must be written by the programmer on entry to and exit from monitor methods

Condition Variable Semantics
Mutual exclusion requirement:
- Only one thread at a time can execute in the monitor
Scenario:
- Thread A is executing in the monitor
- Thread A does a signal, waking up thread B
Which thread runs in the monitor next? A and B must run in some order. Which one runs, which one blocks, and how (on what queue)?

Option 1: Hoare Semantics
What happens when A signals B?
- A is suspended (added to the waiting thread list for the monitor mutex)
- B wakes up and runs immediately (i.e., nothing else runs in the monitor between the signal and the wakeup of B)
- A hands B the monitor mutex directly
- A can only run when B leaves the monitor, on exit or in another wait
  - A only resumes immediately if it is at the head of the mutex list
  - If not, other threads run before it

Memory Invariance under Hoare Semantics
- When B wakes up, the state of the monitor's variables is the same as when A issued the signal
- When A resumes after signal, the state of the monitor's variables may have changed!
Implications:
- Memory invariance is lost across wait
- Memory invariance is lost across signal
- Memory invariance is preserved from the signal to the wakeup from wait

Option 2: MESA Semantics
What happens when A signals B?
- A continues executing in the monitor
- B tries to lock the monitor mutex, and takes its place at the back of the list
- B resumes when its turn comes around

Memory Invariance under MESA Semantics
- When B wakes up, the state of the monitor's variables may not be the same as when A issued the signal!
  - the signal is more like a hint
  - the waking thread must check to see if it should continue or wait again
- When A continues after signal, the state of the monitor's variables has not changed, because A retained the mutex

Memory Invariance under MESA Semantics
Implications:
- Memory invariance is lost across wait
- Memory invariance is not lost across signal
- Memory invariance is not preserved from the signal to the wakeup from wait
The different memory invariance effects determine how you write code that uses condition variables!
- You really need to know the semantics of your condition variables!!!

Example Use of Hoare Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    if cntFull == N
      notFull.Wait()
    endIf
    buffer[nextIn] = c
    nextIn = (nextIn+1) mod N
    cntFull = cntFull + 1
    notEmpty.Signal()
  endEntry

  entry remove()
    ...
endMonitor

Example Use of Mesa Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    while cntFull == N
      notFull.Wait()
    endWhile
    buffer[nextIn] = c
    nextIn = (nextIn+1) mod N
    cntFull = cntFull + 1
    notEmpty.Signal()
  endEntry

  entry remove()
    ...
endMonitor

Example Use of Hoare Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    ...

  entry remove()
    if cntFull == 0
      notEmpty.Wait()
    endIf
    c = buffer[nextOut]
    nextOut = (nextOut+1) mod N
    cntFull = cntFull - 1
    notFull.Signal()
  endEntry
endMonitor

Example Use of Mesa Semantics

monitor BoundedBuffer
  var buffer: array[N] of char
      nextIn, nextOut: int = 0
      cntFull: int = 0
      notEmpty: Condition
      notFull: Condition

  entry deposit(c: char)
    ...

  entry remove()
    while cntFull == 0
      notEmpty.Wait()
    endWhile
    c = buffer[nextOut]
    nextOut = (nextOut+1) mod N
    cntFull = cntFull - 1
    notFull.Signal()
  endEntry
endMonitor

Monitors in Blitz
- They have MESA semantics
- When a waiting thread resumes, you can't assume that the state of the monitor's variables is the same as when signal was called!
- There is a broadcast call in addition to signal
  - broadcast wakes up all threads waiting on the condition variable, not just the first one
  - broadcast makes no sense for condition variables with Hoare semantics!
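
As an illustration (not from the slides; the names produce_batch, consume_one, and not_empty are made up), here is the same pattern in C with pthreads, whose condition variables also behave like MESA-style conditions. The producer broadcasts after adding several items; every waiting consumer wakes up, but each one must re-check the condition in a while loop because other woken threads may have consumed the items first:

    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static int count = 0;                     /* number of buffered items */

    void consume_one(void)
    {
        pthread_mutex_lock(&m);
        while (count == 0)                    /* may have lost the race: wait again */
            pthread_cond_wait(&not_empty, &m);
        count--;                              /* ... take one item ... */
        pthread_mutex_unlock(&m);
    }

    void produce_batch(int k)
    {
        pthread_mutex_lock(&m);
        count += k;                           /* ... add k items ... */
        pthread_cond_broadcast(&not_empty);   /* wake *all* waiting consumers */
        pthread_mutex_unlock(&m);
    }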

Implementing Hoare Semantics
In Assignment 4 you must modify Blitz condition variables to have Hoare semantics:
- Do not modify the mutex methods provided, because future code will use them
- Create new classes:
  - MonitorLock -- similar to Mutex
  - HoareCondition -- similar to Condition
You must write your own test code to determine if they actually have Hoare semantics!

Message Passing

Message Passing
Inter-Process Communication (IPC): system calls to send and receive messages to/from ports
- works for shared memory architectures and for distributed systems
- receive can block a thread and add it to the port's list of waiting threads (like wait and down)
- send unblocks a thread from that list and makes it ready to run (like signal and up)
Message passing can be used for synchronization or for general communication

Message Passing Example
Producer-consumer example:
- The producer sends the data to the consumer in a message
- The consumer receives the data in the message
- The system buffers outstanding messages (kept in order)
- We need flow control to prevent the producer from out-running the consumer!
Flow control idea:
- The consumer sends empty messages to the producer
- The producer blocks waiting for empty messages
- The consumer starts by sending N empty messages
- N is based on the consumer's buffer size

Message Passing Consumer

const N = 100                 -- Size of message buffer
var em: char

for i = 1 to N                -- Get things started by
  Send(producer, &em)         -- sending N empty messages
endFor

thread consumer
  var c, em: char
  while true
    Receive(producer, &c)     -- Wait for a char
    Send(producer, &em)       -- Send empty message back
    -- Consume char...
  endWhile
end

Message Passing Producer

thread producer
  var c, em: char
  while true
    -- Produce char c...
    Receive(consumer, &em)    -- Wait for an empty msg
    Send(consumer, &c)        -- Send c to consumer
  endWhile
end
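
For comparison with a real API (not the one the slides use), POSIX message queues provide similar semantics on Linux: the kernel buffers messages in order, mq_send blocks when the queue is full, and mq_receive blocks when it is empty, so the producer is flow-controlled without the hand-built empty-message scheme. The queue name and sizes below are illustrative; build with -lrt on Linux:

    #include <mqueue.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        /* A queue holding at most 10 one-byte messages. */
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 1 };
        mqd_t q = mq_open("/cs510_demo", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        char out = 'x';
        mq_send(q, &out, 1, 0);        /* would block if 10 messages were already queued */

        char in;
        mq_receive(q, &in, 1, NULL);   /* would block if the queue were empty */
        printf("received %c\n", in);

        mq_close(q);
        mq_unlink("/cs510_demo");
        return 0;
    }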

Buffering Design Choices
Option 1: System Buffering
- System stores sent, but not yet received, messages
- Buffering capacity is finite
- Sender will be blocked if the buffer is full
- Receiver will be blocked if the buffer is empty

Buffering Design Choices
Option 2: No Buffering
- If Send happens first, the sender blocks
- If Receive happens first, the receiver blocks
- Sender and receiver must rendezvous (i.e., meet) to exchange the message:
  - Both threads are ready for the transfer
  - The data is copied directly from one to the other
  - Both threads are then allowed to proceed
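
A sketch of how an unbuffered rendezvous could be built from a mutex and a condition variable (illustrative only; the names rendezvous_send, rendezvous_receive, slot, and has_data are made up). The sender deposits its character into a one-slot variable and then waits until a receiver has taken it, so neither side can get ahead of the other:

    #include <pthread.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int  has_data = 0;          /* 1 while a message is waiting to be taken */
    static char slot;

    void rendezvous_send(char c)
    {
        pthread_mutex_lock(&m);
        while (has_data)               /* wait until the slot is free */
            pthread_cond_wait(&cv, &m);
        slot = c;
        has_data = 1;
        pthread_cond_broadcast(&cv);   /* wake a waiting receiver */
        while (has_data)               /* block until the receiver has taken it */
            pthread_cond_wait(&cv, &m);
        pthread_mutex_unlock(&m);
    }

    char rendezvous_receive(void)
    {
        pthread_mutex_lock(&m);
        while (!has_data)              /* wait until a sender shows up */
            pthread_cond_wait(&cv, &m);
        char c = slot;
        has_data = 0;
        pthread_cond_broadcast(&cv);   /* let the sender (and other senders) proceed */
        pthread_mutex_unlock(&m);
        return c;
    }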

Thread-Safe Routines

Thread-Safe Functions
Consider this library function...

var count: int = 0

function GetUnique () returns int
  count = count + 1
  return count
endFunction

Is it safe to execute concurrently in multiple threads?

Thread-Safe Functions
Consider this function...

var count: int = 0

function GetUnique () returns int
  count = count + 1
  return count
endFunction

What if it is executed by different threads concurrently? The results may be incorrect! This routine is not reentrant!

Local vs Global Variables
Variables that are local to the function invocation are allocated (dynamically, at function-call time) on the stack of the calling thread:
- these are only accessible to this one thread!
- they are not directly involved in race conditions
Static variables, allocated at compile time, and dynamically allocated variables (malloc) on the heap, are global variables:
- they are accessible to all threads
- they can be involved in race conditions
- they require synchronization!
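
A small C illustration of the distinction (the function names count_spaces and to_upper_unsafe are made up for the example): the first function touches only its parameter and a stack-local variable, so it is naturally thread-safe; the second uses a static buffer, which is shared by every thread and is therefore a source of race conditions:

    #include <ctype.h>
    #include <stddef.h>

    /* Thread-safe: everything it uses is a parameter or a local on the
       calling thread's stack.                                          */
    int count_spaces(const char *s)
    {
        int n = 0;                    /* local: private to this invocation */
        for (; *s != '\0'; s++)
            if (*s == ' ')
                n++;
        return n;
    }

    /* Not thread-safe: the static buffer is a global shared by all threads,
       so concurrent callers race on its contents.                          */
    char *to_upper_unsafe(const char *s)
    {
        static char buf[256];         /* one copy for all threads and calls */
        size_t i;
        for (i = 0; s[i] != '\0' && i < sizeof(buf) - 1; i++)
            buf[i] = (char)toupper((unsigned char)s[i]);
        buf[i] = '\0';
        return buf;                   /* callers must synchronize to use this safely */
    }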

Does This Work?

var count: int = 0
    myLock: Mutex

function GetUnique () returns int
  myLock.Lock()
  count = count + 1
  myLock.Unlock()
  return count
endFunction

What About This?

var count: int = 0
    myLock: Mutex

function GetUnique () returns int
  myLock.Lock()
  count = count + 1
  return count
  myLock.Unlock()
endFunction

Solution

var count: int = 0
    myLock: Mutex

function GetUnique () returns int
  var i: int
  myLock.Lock()
  count = count + 1
  i = count
  myLock.Unlock()
  return i
endFunction
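
For comparison (not from the slides; get_unique is an illustrative name), the same unique-number idea can also be written in C with a C11 atomic, which needs no lock at all because the read-modify-write happens as one indivisible operation:

    #include <stdatomic.h>

    static atomic_int count = 0;

    int get_unique(void)
    {
        /* atomic_fetch_add adds 1 and returns the *old* value atomically,
           so adding 1 to the result matches the locked version above.    */
        return atomic_fetch_add(&count, 1) + 1;
    }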

Concurrency/Synchronization Review
- Race conditions
- Mutual exclusion concept
  - Interrupt disabling
  - Atomic instructions
  - Spin locks
  - Blocking locks
- Semaphores
  - Binary semaphores
  - Counting semaphores
- Monitors and condition variables