CS333 Intro to Operating Systems

CS333 Intro to Operating Systems Jonathan Walpole

Concurrent Programming & Synchronization Primitives

Concurrency Two or more threads; parallel or pseudo-parallel execution; we can't predict the relative execution speeds; the threads access shared variables.

Concurrency Example: One thread writes a variable that another thread reads Problem – non-determinism: The relative order of one thread’s reads and the other thread’s writes may determine the outcome!

Race Conditions What is a race condition? Why do race conditions occur?

Race Conditions A simple multithreaded program with a race: i++;

Race Conditions A simple multithreaded program with a race: ... load i to register; increment register; store register to i;
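
To make the race concrete, here is a minimal sketch (POSIX threads assumed; not part of the original slides) in which two threads perform the unprotected i++ above. Because each increment is really load/increment/store, updates can be lost and the final count is usually less than 2,000,000:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                  /* shared variable */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                        /* unprotected load/increment/store */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }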

Race Conditions Why did this race condition occur? - two or more threads have an inconsistent view of a shared memory region (i.e., a variable) - values of memory locations are replicated in registers during execution - context switches occur at arbitrary times during execution - threads can see stale memory values in registers

Race Conditions Race condition: whenever the result depends on the precise execution order of the threads! What solutions can we apply? - prevent context switches by preventing interrupts - make threads coordinate with each other to ensure mutual exclusion in accessing critical sections of code

Mutual Exclusion Conditions No two processes simultaneously in critical section No assumptions about speeds or numbers of CPUs No process running outside its critical section may block another process No process waits forever to enter its critical section

Mutual Exclusion Example

Enforcing Mutual Exclusion What about using locks? Locks can ensure exclusive access to shared data. - Acquiring a lock prevents concurrent access - Expresses intention to enter critical section Assumption: - Each shared data item has an associated lock - All threads set the lock before accessing the shared data - Every thread releases the lock after it is done

Acquiring and Releasing Locks

[Animation slides: four threads (A, B, C, D) share a single lock, which starts out Free. One thread sets the lock and enters its critical section; the other threads then request the lock and must wait. When the holder unlocks, the lock becomes Free again and one of the waiting threads sets it and proceeds.]

Mutual Exclusion (mutex) Locks An abstract data type used for synchronization The mutex is either: Locked (“the lock is held”) Unlocked (“the lock is free”)

Mutex Lock Operations Lock (mutex): acquire the lock if it is free and continue; otherwise wait until it can be acquired. Unlock (mutex): release the lock; if there are waiting threads, wake one up.

Using a Mutex Lock Shared data: Mutex myLock; Each thread executes the same loop:

    repeat
        Lock(myLock);
        ... critical section ...
        Unlock(myLock);
        ... remainder section ...
    until FALSE
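
As a concrete version of this pattern, here is a sketch using POSIX mutexes (an assumption; the course's own kernel has its own primitives) that fixes the counter race from the earlier example:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                                    /* shared data */
    static pthread_mutex_t myLock = PTHREAD_MUTEX_INITIALIZER;  /* its lock    */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&myLock);      /* Lock(myLock)     */
            counter++;                        /* critical section */
            pthread_mutex_unlock(&myLock);    /* Unlock(myLock)   */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* now always 2000000 */
        return 0;
    }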

Implementing a Mutex Lock If the lock was just a binary variable, how would we implement the lock and unlock procedures?

Implementing a Mutex Lock Lock and Unlock operations must be atomic! Many computers have some limited hardware support for setting locks: an atomic Test-and-Set-Lock instruction, or an atomic compare-and-swap/exchange operation. These can be used to implement mutex locks.
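
For reference, the compare-and-swap style primitive can be expressed in C roughly as follows (a sketch using a GCC/Clang builtin, which is an assumption about the toolchain, not something the slides specify):

    #include <stdbool.h>

    /* Atomically: if *lock == 0 (free), set it to 1 (held) and return true;
       otherwise leave it unchanged and return false. */
    static inline bool try_acquire(int *lock) {
        int expected = 0;
        return __atomic_compare_exchange_n(lock, &expected, 1,
                                           false,              /* strong CAS */
                                           __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
    }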

Test-and-Set-Lock (TSL) Instruction A lock is a single-word variable with two values: 0 = FALSE = not locked; 1 = TRUE = locked. Test-and-set-lock does the following atomically: get the (old) value, set the lock to TRUE, return the old value. If the returned value was FALSE... then you got the lock!!! If the returned value was TRUE... then someone else has the lock (so try again later).
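
In C, the effect of TSL can be sketched like this (the atomicity here comes from a GCC/Clang builtin, an assumption about the toolchain; on real hardware it is a single instruction, e.g. x86's xchg):

    #include <stdbool.h>

    /* Atomically: old = *lock; *lock = TRUE; return old. */
    static inline bool TSL(bool *lock) {
        return __atomic_exchange_n(lock, true, __ATOMIC_ACQUIRE);
    }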

Test and Set Lock

[Animation slides: the lock word starts out FALSE (available). P1 executes TSL: it reads back FALSE, so it has acquired the lock, and the lock is atomically set to TRUE. P2, P3 and P4 then execute TSL, read back TRUE, and keep retrying. When P1 stores FALSE to release the lock, the next TSL by a waiting process reads FALSE and that process becomes the new holder, setting the lock back to TRUE.]

Using TSL Directly Threads I and J each execute the same loop:

    repeat
        while (TSL(lock))
            no-op;
        ... critical section ...
        lock = FALSE;
        ... remainder section ...
    until FALSE

This guarantees that only one thread at a time will enter its critical section.

Implementing a Mutex With TSL

    repeat
        while (TSL(mylock))        <- Lock (mylock)
            no-op;
        ... critical section ...
        mylock = FALSE;            <- Unlock (mylock)
        ... remainder section ...
    until FALSE

Note that processes are busy while waiting - this kind of mutex is called a spin lock.
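
A standard-C rendering of this spin lock (a sketch, assuming C11 <stdatomic.h>; atomic_flag_test_and_set plays the role of the TSL instruction):

    #include <stdatomic.h>

    typedef struct { atomic_flag held; } spinlock_t;

    #define SPINLOCK_INIT { ATOMIC_FLAG_INIT }     /* lock starts out free */

    static void Lock(spinlock_t *l) {
        while (atomic_flag_test_and_set(&l->held)) /* TSL(mylock)          */
            ;                                      /* no-op: busy wait     */
    }

    static void Unlock(spinlock_t *l) {
        atomic_flag_clear(&l->held);               /* mylock = FALSE       */
    }

Usage: declare "spinlock_t mylock = SPINLOCK_INIT;" and bracket each critical section with Lock(&mylock) / Unlock(&mylock).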

Busy Waiting Also called polling or spinning. The thread consumes CPU cycles just to detect when the lock becomes free! Problem on a single-CPU system: a busy-waiting thread can prevent the lock holder from running, completing its critical section, and releasing the lock - the time spent spinning is wasted on a single CPU. Why not block instead of busy wait?

Blocking Primitives Sleep - put the calling thread to sleep; the thread becomes BLOCKED. Wakeup - move a BLOCKED thread back onto the "Ready List"; the thread becomes READY (or RUNNING). Yield - put the calling thread on the ready list and schedule the next thread; does not BLOCK the calling thread, it just gives up the current time slice.
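
As one illustration of Yield (a sketch, assuming the POSIX sched_yield call; not the primitives used in this course's kernel), a lock waiter can give up its time slice between retries instead of spinning for the whole quantum:

    #include <sched.h>
    #include <stdatomic.h>

    static void lock_with_yield(atomic_flag *held) {
        while (atomic_flag_test_and_set(held))    /* lock still taken?        */
            sched_yield();                        /* give up the time slice so
                                                     the lock holder can run  */
    }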

How Can We Implement Them? In User Programs: - System calls to the kernel In Kernel: - Calls to the thread scheduler routines

Synchronization in User Programs User threads call sleep and wakeup, which are system calls implemented in the kernel - they manipulate the "ready list" - but the ready list is shared data - the code that manipulates it is a critical section - what if a timer interrupt occurs during a sleep or wakeup call? Problem: how can scheduler routines be programmed to execute correctly in the face of concurrency?

Concurrency in the Kernel Solution 1: Disable interrupts during critical sections Ensures that interrupt handling code will not run … but what if there are multiple CPUs? Solution 2: Use mutex locks based on TSL for critical sections Ensures mutual exclusion for all code that follows that convention ... but what if your hardware doesn’t have TSL?

Disabling Interrupts Disabling interrupts in the OS vs. disabling interrupts in user processes: Why not allow user processes to disable interrupts? Is it OK to disable interrupts in the OS? What precautions should you take?

Disabling Interrupts in the Kernel Scenario 1: A thread is running; wants to access shared data Disable interrupts Access shared data (“critical section”) Enable interrupts

Disabling Interrupts in the Kernel Problem: interrupts are already disabled and a thread wants to access the critical section using the above sequence, i.e., one critical section gets nested inside another.

Disabling Interrupts in the Kernel Problem: interrupts are already disabled and the thread wants to access a critical section using the previous sequence. The fix: Save previous interrupt status (enabled/disabled); Disable interrupts; Access shared data ("critical section"); Restore interrupt status to what it was before.
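
A kernel-style sketch of that sequence (save_interrupt_status, disable_interrupts and restore_interrupt_status are hypothetical helper names used for illustration only, not a real API):

    /* Declarations only: a real kernel supplies these. */
    int  save_interrupt_status(void);        /* returns ENABLED or DISABLED  */
    void disable_interrupts(void);
    void restore_interrupt_status(int old);  /* put things back as they were */

    void update_ready_list(void) {
        int old = save_interrupt_status();   /* remember previous status     */
        disable_interrupts();
        /* ... access shared data (the critical section), e.g. the ready list ... */
        restore_interrupt_status(old);       /* do NOT blindly re-enable: an
                                                outer critical section may
                                                still need interrupts off    */
    }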

Doesn’t Work on Multiprocessors Disabling interrupts during critical sections Ensures that interrupt handling code will not run But what if there are multiple CPUs? A thread on a different CPU might make a system call which invokes code that manipulates the ready queue Disabling interrupts on one CPU didn’t prevent this! Solution: use a mutex lock (based on TSL) Ensures mutual exclusion for all code that uses it

Some Other Tricky Issues The interrupt handling code that saves interrupted state is a critical section It could be executed concurrently if multiple almost simultaneous interrupts happen Interrupts must be disabled during this (short) time period to ensure critical state is not lost What if this interrupt handling code attempts to lock a mutex that is held? What happens if we sleep with interrupts disabled? What happens if we busy wait (spin) with interrupts disabled?

Implementing Mutex Locks Without TSL If your CPU did not have TSL, how would you implement blocking mutex lock and unlock calls using interrupt disabling? … this is your next Blitz project!

Quiz 1. What is a race condition? 2. How can we protect against race conditions? 3. Can locks be implemented simply by reading and writing to a binary variable in memory? 4. How can a kernel make synchronization-related system calls atomic on a uniprocessor? - Why wouldn’t this work on a multiprocessor? 5. Why is it better to block rather than spin on a uniprocessor? 6. Why is it sometimes better to spin rather than block on a multiprocessor?