Race Conditions CS550 Operating Systems

Review
So far, we have discussed processes and threads, and we have looked at multithreading and MPI processes by example. You will implement these in the programming assignment and see that, in some cases, multiple threads can be more efficient than multiple processes. We have seen that threads share memory, and that processes can share memory as well, but in a different fashion. Interprocess communication can be more complicated than thread-based communication because it requires a mechanism such as a socket, shared memory segment, or stream.

Review of Process States
We have seen process states as well, and we have seen that threads take on the same states.

CPU and I/O Requests
- CPU burst: servicing of a CPU request
- I/O burst: servicing of an I/O request
- Process behavior is represented as a sequence of waits, I/O bursts, and CPU bursts
- The scheduler must manage processes as they alternate between these bursts

Process Execution
An executing process is in the run state. The OS may interrupt this process at any time and switch to another process. Why?
- I/O needs: suspend one process and allow another to use an I/O device
- Timer: a process has used too much CPU time and needs to share, so move it to the ready state so that another may run
- Higher-priority process: interrupts a lower-priority one

Context Switching
The dispatcher selects the next process to run from the ready queue. We said before that a context switch occurs when the OS interrupts an executing process to allow another to run. The OS must:
- save the state of the old process (i.e., save its PCB)
- move the process image into memory, or perhaps out to disk (virtual memory)
The time taken to perform a context switch is called overhead.

Context Switch Overhead
Frequent context switches consume CPU time. So far, we have seen processes and threads and how to use them. What about sharing memory?

Example: A Race Condition
See the example code. Run it! What goes wrong? Hint: we need something called synchronization.

Synchronization Tools
- Semaphores
- Atomic operations
- Mutual exclusion (mutex) locks
- Spin locks
- TSL (test-and-set lock) locks
- Condition variables and monitors

Critical Sections
Before investigating these tools, we need to define critical sections. A critical section is a region of code in which one or more shared resources are used. We would like to enforce mutual exclusion on critical sections to prevent problematic situations, such as two processes trying to modify the same variable at the same time. Mutual exclusion means that only one process/thread is allowed to access a shared resource while all others must wait.

Real-Life Critical Section
[Figure: Road A and Road B cross at an intersection X; the intersection X is the critical section, which only one car may occupy at a time.]

An Example with Code
Assume two threads, A and B, and a shared counter variable (type int):
- Thread A increments the shared counter
- Thread B decrements the same shared counter

Thread A pseudocode: counter++
Thread B pseudocode: counter--

Counter Operations Decomposed
What do the counter operations actually do?

Thread A executes counter = counter + 1, which becomes the machine operations:

    load  reg1, counter
    add   reg1, reg1, 1
    store counter, reg1

Thread B executes counter = counter - 1, which becomes:

    load  reg2, counter
    sub   reg2, reg2, 1
    store counter, reg2

Run One Thread at a Time: No Problems

    reg1 | reg2 | counter | Thread A        | Thread B
    -----+------+---------+-----------------+-----------------
     10  |      |   10    | reg1 <- counter |
     11  |      |   10    | reg1 <- reg1+1  |
     11  |      |   11    | counter <- reg1 |
         |  11  |   11    |                 | reg2 <- counter
         |  10  |   11    |                 | reg2 <- reg2-1
         |  10  |   10    |                 | counter <- reg2

What If We Allow Context Switching?

    reg1 | reg2 | counter | Thread A        | Thread B
    -----+------+---------+-----------------+-----------------
     10  |      |   10    | reg1 <- counter |
         |  10  |   10    |                 | reg2 <- counter
     11  |      |   10    | reg1 <- reg1+1  |
         |   9  |   10    |                 | reg2 <- reg2-1
     11  |      |   11    | counter <- reg1 |
         |   9  |    9    |                 | counter <- reg2

The increment is lost: the counter started at 10 and was incremented once and decremented once, yet it ends at 9.
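To see this interleaving problem in action, here is a minimal sketch of the same race in C, assuming a POSIX system with pthreads (the file name race.c and the iteration count N are illustrative, not from the course code):

    /* race.c - demonstrates a race condition on a shared counter.
       Compile with: gcc -pthread race.c -o race */
    #include <stdio.h>
    #include <pthread.h>

    #define N 1000000

    int counter = 0;   /* shared between both threads */

    void *increment(void *arg) {
        for (int i = 0; i < N; i++)
            counter++;         /* load, add, store: not atomic */
        return NULL;
    }

    void *decrement(void *arg) {
        for (int i = 0; i < N; i++)
            counter--;         /* load, sub, store: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, decrement, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* expected 0, but interleaved loads and stores usually
           produce some other value */
        printf("counter = %d\n", counter);
        return 0;
    }

Running this several times typically prints a different nonzero value each time, because the outcome depends on how the two threads' machine operations happen to interleave.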

Avoiding Race Conditions
The problem on the previous slide is called a race condition. To avoid race conditions, we need synchronization. There are many ways to provide synchronization:
- Semaphores
- Mutex locks
- Atomic operations
- Monitors
- Spin locks
- Many more

Mutual Exclusion
Mutual exclusion means that only one process (or thread) may access a shared resource at a time; all others must wait. Recall that critical sections are segments of code where a process/thread accesses and uses a shared resource.
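As a preview of the tools discussed below, here is a hedged sketch of how a pthreads mutex could enforce mutual exclusion on the counter from the race example (a minimal illustration, not the course's required solution):

    /* mutex_counter.c - mutual exclusion with a pthreads mutex.
       Compile with: gcc -pthread mutex_counter.c -o mutex_counter */
    #include <stdio.h>
    #include <pthread.h>

    #define N 1000000

    int counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        for (int i = 0; i < N; i++) {
            pthread_mutex_lock(&lock);    /* entry section: wait for the lock */
            counter++;                    /* critical section */
            pthread_mutex_unlock(&lock);  /* exit section: release the lock */
        }
        return NULL;
    }

    void *decrement(void *arg) {
        for (int i = 0; i < N; i++) {
            pthread_mutex_lock(&lock);
            counter--;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, decrement, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d\n", counter);  /* now always 0 */
        return 0;
    }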

Another Example Where Synchronization Is Needed

    Process 1:        Process 2:
        x++               x = x * 3
        y = x             y = 4 + x

Requirements for Mutual Exclusion
Processes must meet the following conditions for mutual exclusion on critical sections:
1. Mutual exclusion, by definition
2. Absence of starvation: every process waits only a finite period before entering its critical section
3. Absence of deadlock: processes must not block each other indefinitely

Synchronization Methods
- Busy waiting: use Dekker's or Peterson's algorithm (consumes CPU cycles)
- Disable interrupts and use special machine instructions (test-and-set lock (TSL), atomic operations, and spin locks)
- Use OS mechanisms and programming-language support (semaphores and monitors)
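For reference, here is a sketch of Peterson's algorithm for two threads in C. This is the classic textbook form; on modern hardware it would additionally need atomic operations or memory barriers to be reliable, a detail omitted here (the function names enter_region and leave_region are illustrative):

    #include <stdbool.h>

    volatile bool flag[2] = { false, false };  /* who wants to enter */
    volatile int  turn    = 0;                 /* whose turn to yield */

    void enter_region(int self) {       /* self is 0 or 1 */
        int other = 1 - self;
        flag[self] = true;              /* announce interest */
        turn = other;                   /* politely give the other priority */
        while (flag[other] && turn == other)
            ;                           /* busy wait: consumes CPU cycles */
    }

    void leave_region(int self) {
        flag[self] = false;             /* we are out of the critical section */
    }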

Semaphores
A semaphore is an abstract data type that functions as a software synchronization tool to implement a solution to the critical-section problem. It includes a queue plus waiting and signaling functionality, and a counter that allows multiple accesses. Semaphores are available in both Java and C.

Java Example of a Semaphore Class

    // declaration and initialization
    Semaphore s = new Semaphore(1);

    class Semaphore {
        private int sem;       // counter value
        private Pqueue semQ;   // queue of waiting processes

        public Semaphore(int initVal) {
            // initialize the semaphore counter and wait queue
        }

        public void acquire() {
            // acquire the semaphore, or add the caller to the wait list
        }

        public void release() {
            // add one to the semaphore count value, if not maxed out
        }
    }

C-Style Semaphores
In C, we simply pass the semaphore structure into the related functions. After initialization, the semaphore is used through only two operations: acquire (also called wait) and release (also called signal). Semaphores can be used for counting, or in a binary fashion (to provide only mutual exclusion) by initializing the count to one.
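For instance, here is a minimal sketch using POSIX semaphores, assuming a Linux/POSIX system (note that the POSIX API calls the two operations sem_wait and sem_post rather than acquire and release):

    /* posix_sem.c - a binary POSIX semaphore protecting a counter.
       Compile with: gcc -pthread posix_sem.c -o posix_sem */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    int counter = 0;
    sem_t s;   /* the semaphore structure passed to every operation */

    void *worker(void *arg) {
        sem_wait(&s);    /* acquire/wait: blocks while the count is 0 */
        counter++;       /* critical section */
        sem_post(&s);    /* release/signal: increments the count */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&s, 0, 1);   /* 0 = shared between threads; initial value 1
                                 makes this a binary semaphore (mutex-style) */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&s);
        printf("counter = %d\n", counter);
        return 0;
    }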

Using Semaphores
To use a semaphore S:
1. Invoke wait on S. This tests the value of its integer attribute sem:
   - If sem > 0, sem is decremented and the process is allowed to enter the critical section
   - Otherwise (sem == 0), wait suspends the process and puts it in the semaphore's queue
2. Execute the critical section
3. Invoke signal on S: increment the value of sem and activate the process at the head of the queue
4. Continue with the normal sequence of instructions

Semaphore Pseudocode

    void wait() {
        if (sem > 0)
            sem--;
        else {
            put the process in the wait queue;
            sleep();
        }
    }

    void signal() {
        if (sem < maxVal)
            sem++;
        if (the queue is non-empty) {
            remove a process from the wait queue;
            wake up that process;
        }
    }
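This pseudocode could be realized in C with a mutex and a condition variable standing in for the wait queue and the sleep/wake operations. The following is a hedged sketch under that assumption, not the textbook's implementation; my_sem_t, my_sem_init, my_wait, and my_signal are made-up names:

    #include <pthread.h>

    /* a counting semaphore built from a mutex and a condition variable */
    typedef struct {
        int sem;                 /* the counter */
        int maxVal;              /* upper bound on the counter */
        pthread_mutex_t lock;
        pthread_cond_t  queue;   /* threads sleeping here form the wait queue */
    } my_sem_t;

    void my_sem_init(my_sem_t *s, int initVal, int maxVal) {
        s->sem = initVal;
        s->maxVal = maxVal;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->queue, NULL);
    }

    void my_wait(my_sem_t *s) {
        pthread_mutex_lock(&s->lock);
        while (s->sem == 0)                   /* nothing to take: sleep */
            pthread_cond_wait(&s->queue, &s->lock);
        s->sem--;
        pthread_mutex_unlock(&s->lock);
    }

    void my_signal(my_sem_t *s) {
        pthread_mutex_lock(&s->lock);
        if (s->sem < s->maxVal)
            s->sem++;
        pthread_cond_signal(&s->queue);       /* wake one waiting thread */
        pthread_mutex_unlock(&s->lock);
    }

Note the while loop in my_wait where the pseudocode uses an if: a real implementation must recheck the counter after waking, because condition variables permit spurious wakeups.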

Synchronization with Semaphores
Simple synchronization is easy with semaphores:

    Entry section     <-- s.wait()
    Critical section
    Exit section      <-- s.release()

Semaphore Example
Event ordering is also possible (previously seen with MPI_Send and MPI_Recv). Suppose two processes P1 and P2 need to synchronize their execution, and P1 must write before P2 reads:

    // P1
    write(x)
    s.signal()   // or signal(&s)

    // P2
    s.wait()     // or wait(&s)
    read(x)

    // s must be initialized to zero as a binary semaphore
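A runnable version of this ordering pattern, sketched with POSIX semaphores and with pthreads standing in for the two processes (the file name, the functions p1 and p2, and the value 42 are all illustrative):

    /* ordering.c - P1 must write x before P2 reads it.
       Compile with: gcc -pthread ordering.c -o ordering */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    int x;
    sem_t s;   /* initialized to 0, so the reader blocks until signaled */

    void *p1(void *arg) {
        x = 42;          /* write(x) */
        sem_post(&s);    /* signal: x is now ready */
        return NULL;
    }

    void *p2(void *arg) {
        sem_wait(&s);    /* wait until P1 has written */
        printf("read x = %d\n", x);   /* read(x) */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&s, 0, 0);   /* initial value 0 enforces the ordering */
        pthread_create(&t2, NULL, p2, NULL);  /* even if the reader starts
                                                 first, it must wait */
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&s);
        return 0;
    }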

Example Code
See the course website for an example of semaphores in C.

Next Up
The producer-consumer problem (bounded-buffer problem):
- Two processes (in the simplest version)
- One producer: makes data items and puts them in the buffer one by one
- One consumer: removes data items from the buffer one by one