Real-Time Systems Lecture 4


Real-Time Systems Lecture 4
Teachers:
Olle Bowallius, Phone: 790 44 42, Email: olleb@isk.kth.se
Anders Västberg, Phone: 790 44 55, Email: vastberg@kth.se

Synchronisation and Communication
The correct behaviour of a concurrent program depends on synchronisation and communication between its processes.
Synchronisation: an action by one process only occurring after an action by another.
Communication: the passing of information from one process to another.
The concepts are linked, since communication requires synchronisation, and synchronisation can be considered as content-less communication.
Data communication is usually based on either shared variables or message passing.

Independent Processes
Two processes are independent if:
both processes only read the shared variables, or
each process reads only variables that are not written by the other process.

Process
A process executes a sequence of statements.
Each statement is implemented by a sequence of one or more atomic actions, which are actions that indivisibly examine or change the program state.
The execution of a concurrent program results in an interleaving of the sequences of atomic actions executed by each process.
A particular execution of a concurrent program can be viewed as a history, or trace, of the sequence of states.
For n processes, each consisting of m atomic actions, the number of possible histories is (n·m)! / (m!)^n.
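As a quick sanity check of that formula (my addition, not on the slide): two processes (n = 2) of three atomic actions each (m = 3) give (2·3)! / (3!)^2 = 720 / 36 = 20 possible histories.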

Reentrant Functions
A reentrant function can be used by one or more tasks without fear of data corruption.
Reentrant functions either use only local variables or protect the data when global variables are used.

void strcpy (char *dest, char *src)
{
    while (*dest++ = *src++)
        ;               /* copies the terminating '\0' as well */
}

Since copies of the arguments are placed on each task's stack, several tasks can call strcpy() without corrupting each other's pointers.

Non-reentrant function:

int Temp;

void swap (int *x, int *y)
{
    Temp = *x;
    *x = *y;
    *y = Temp;
}
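A rough Java analogue of the same hazard (my sketch, not from the slides; the class and method names are invented): the variant that uses a field shared by all callers can be corrupted when two threads run it at once, while the variant that keeps everything in locals on the calling thread's stack is safe.

public class Reentrancy {
    static int temp;                         // shared scratch variable: the hazard

    static void swapShared(int[] a) {        // not thread-safe
        temp = a[0];
        a[0] = a[1];
        a[1] = temp;                         // another thread may have overwritten temp by now
    }

    static void swapLocal(int[] a) {         // thread-safe: uses only local variables
        int t = a[0];
        a[0] = a[1];
        a[1] = t;
    }
}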

Atomic Actions
y = z = 0;
co x = y + z; // y = 1; z = 2; oc;
(the co ... // ... oc notation runs the two statement lists concurrently)
The statement x = y + z is implemented as: load y into a register; add z to the register; store the value in x.
The final value of x can be 0, 1, 2 or 3, depending on the history.
As the three operations are not indivisible, two processes simultaneously updating a variable could follow an interleaving that produces an incorrect result.
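A small Java sketch of this interleaving (my addition; the class name Interleave is invented): one thread computes x = y + z while another performs y = 1 and z = 2, so across runs x may come out as 0, 1, 2 or 3.

public class Interleave {
    static volatile int x, y, z;   // volatile so every read/write goes to shared memory

    public static void main(String[] args) throws InterruptedException {
        for (int run = 0; run < 10; run++) {
            x = y = z = 0;
            Thread t1 = new Thread(() -> x = y + z);          // read y, read z, write x
            Thread t2 = new Thread(() -> { y = 1; z = 2; });  // two separate writes
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println("x = " + x);   // 0, 1, 2 or 3 depending on the interleaving
        }
    }
}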

Granularity
Fine-grained atomic actions: implemented directly by the hardware.
Coarse-grained atomic actions: provided by constructs which support mutual exclusion.

Avoiding Interference
The parts of a process that access shared variables must be executed indivisibly with respect to each other.
These parts are called critical sections.
The required protection is called mutual exclusion.

Mutual Exclusion
Atomicity is assumed to be present at the memory level.
If one process is executing X := 5 simultaneously with another executing X := 6, the result will be either 5 or 6 (not some other value).
If two processes are updating a structured object, this atomicity will only apply at the single word element level.

Critical Section
No two processes may be simultaneously inside their critical regions.
No assumptions may be made about speeds or the number of CPUs.
No process running outside its critical region may block other processes.
No process should have to wait forever to enter its critical region.

Critical Section

Race Condition
Two processes want to write to the queue at the same time:
Process A reads in = 7.
Process A stores the value 7 in a local variable next_free_slot.
Process B reads in = 7.
Process B stores the value 7 in a local variable next_free_slot.
Process B stores its filename in slot 7 and updates in = 8.
Process A stores its filename in slot 7 and updates in = 8.
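The same race written as a runnable Java sketch (my addition; names such as SpoolerRace and enqueue are invented for illustration): both threads read the shared index in, then both write into the same slot and publish the same new index.

public class SpoolerRace {
    static final String[] queue = new String[16];
    static int in = 7;                          // next free slot, deliberately unprotected

    static void enqueue(String filename) {
        int nextFreeSlot = in;                  // step 1: read the shared index
        Thread.yield();                         // widen the race window for the demo
        queue[nextFreeSlot] = filename;         // step 2: store into "our" slot
        in = nextFreeSlot + 1;                  // step 3: publish the new index
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> enqueue("fileA"));
        Thread b = new Thread(() -> enqueue("fileB"));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("slot 7 = " + queue[7] + ", slot 8 = " + queue[8] + ", in = " + in);
        // With an unlucky interleaving one filename overwrites the other and slot 8 stays null.
    }
}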

Disable Interrupts
Mutual exclusion can be achieved by disabling interrupts.
Does not work on a multiprocessor system.
Is not normally permitted in user mode.
Interrupts should only be disabled for very short periods of time in an RTS.

Condition Synchronisation
Condition synchronisation is needed when a process wishes to perform an operation that can only sensibly, or safely, be performed if another process has itself taken some action or is in some defined state.
E.g. a bounded buffer has two condition synchronisations:
the producer processes must not attempt to deposit data into the buffer if the buffer is full;
the consumer processes cannot be allowed to extract objects from the buffer if the buffer is empty.
(figure: circular buffer with head and tail pointers)

Busy Waiting
One way to implement synchronisation is to have processes set and check shared variables that act as flags.
This approach works well for condition synchronisation, but no simple method for mutual exclusion exists.
Busy-wait algorithms are in general inefficient; they involve processes using up processing cycles when they cannot perform useful work.
Even on a multiprocessor system they can give rise to excessive traffic on the memory bus or network (if distributed).

Busy Wait (Spin Locks)

process P1 {
    // do something
    while (flag == down)
        ;                   // spin until P2 raises the flag
    // do something else, after process P2
}

process P2 {
    // do something
    flag = up;              // signal to P1
    // do something else
}

Busy Wait (Mutual Exclusion)

process P1 {
    while (true) {
        flag1 = up;
        while (flag2 == up)
            ;               // wait
        // critical section
        flag1 = down;
        // non-critical section
    }
}

process P2 {
    while (true) {
        flag2 = up;
        while (flag1 == up)
            ;               // wait
        // critical section
        flag2 = down;
        // non-critical section
    }
}

Possible livelock! Livelock occurs when both processes keep executing (spinning) while each waits for the other to lower its flag.

Busy Wait (Mutual Exclusion)

process P1 {
    while (flag2 == up)
        ;                   // wait
    flag1 = up;
    // critical section
    flag1 = down;
    // non-critical section
}

process P2 {
    while (flag1 == up)
        ;                   // wait
    flag2 = up;
    // critical section
    flag2 = down;
    // non-critical section
}

No mutual exclusion! Both processes can pass their while loops before either flag has been raised.

Busy Wait

process P1 {
    while (turn == 2)
        ;                   // wait
    // critical section
    turn = 2;
    // non-critical section
}

process P2 {
    while (turn == 1)
        ;                   // wait
    // critical section
    turn = 1;
    // non-critical section
}

P1 and P2 must take turns in the critical section.
If P1 fails in its critical section, then P2 will never enter its critical section.

Peterson's Algorithm

process P1 {
    flag1 = up;
    turn = 2;
    while (flag2 == up && turn == 2)
        ;                   // wait
    // critical section
    flag1 = down;
    // non-critical section
}

process P2 {
    flag2 = up;
    turn = 1;
    while (flag1 == up && turn == 1)
        ;                   // wait
    // critical section
    flag2 = down;
    // non-critical section
}
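A Java rendering of Peterson's algorithm (my sketch, not from the slides; the method names enter1/exit1 etc. are invented). The shared variables are declared volatile because the pseudocode assumes sequentially consistent memory, which ordinary Java fields do not provide.

public class Peterson {
    private volatile boolean flag1, flag2;   // "I want to enter my critical section"
    private volatile int turn = 1;           // who must yield if both want to enter

    public void enter1() {                   // entry protocol for thread 1
        flag1 = true;
        turn = 2;                            // give priority to the other thread
        while (flag2 && turn == 2) { }       // busy wait
    }

    public void exit1() {                    // exit protocol for thread 1
        flag1 = false;
    }

    public void enter2() {                   // entry protocol for thread 2
        flag2 = true;
        turn = 1;
        while (flag1 && turn == 1) { }       // busy wait
    }

    public void exit2() {                    // exit protocol for thread 2
        flag2 = false;
    }
}

A thread brackets its critical section with enter1()/exit1() (or enter2()/exit2()); in practice this is mainly of pedagogical interest, since java.util.concurrent locks are simpler and cheaper.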

Busy Wait
Protocols that use busy loops are difficult to design, understand and prove correct.
Testing programs may not examine rare interleavings that break mutual exclusion or lead to livelock.
Busy-wait loops are inefficient.

Semaphores
A semaphore is a non-negative integer variable that, apart from initialisation, can only be acted upon by two procedures, P (or WAIT) and V (or SIGNAL).
WAIT(S): if the value of S > 0 then decrement its value by one; otherwise delay the process until S > 0 (and then decrement its value).
SIGNAL(S): increment the value of S by one.
WAIT and SIGNAL are atomic (indivisible). Two processes both executing WAIT operations on the same semaphore cannot interfere with each other, and a process cannot fail during the execution of a semaphore operation.
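In Java the same WAIT/SIGNAL behaviour is available through java.util.concurrent.Semaphore, where acquire() plays the role of WAIT (P) and release() the role of SIGNAL (V). A minimal sketch (my addition):

import java.util.concurrent.Semaphore;

// A Semaphore created with 1 permit gives mutual exclusion;
// one created with 0 permits gives pure condition synchronisation.
public class SemaphoreSketch {
    public static void main(String[] args) throws InterruptedException {
        Semaphore mutex = new Semaphore(1);

        Runnable worker = () -> {
            try {
                mutex.acquire();            // WAIT(mutex): blocks while the count is 0
                System.out.println(Thread.currentThread().getName() + " in critical section");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                mutex.release();            // SIGNAL(mutex): increment the count, wake a waiter
            }
        };

        Thread p1 = new Thread(worker, "P1");
        Thread p2 = new Thread(worker, "P2");
        p1.start(); p2.start();
        p1.join(); p2.join();
    }
}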

Condition Synchronisation

var consyn : semaphore; (* init 0 *)

process P1;  (* waiting process *)
    statement X;
    wait (consyn);
    statement Y;
end P1;

process P2;  (* signalling process *)
    statement A;
    signal (consyn);
    statement B;
end P2;

Mutual Exclusion

sem mutex = 1;

process P1 {
    wait(mutex);
    // critical section
    signal(mutex);
    // non-critical section
}

process P2 {
    wait(mutex);
    // critical section
    signal(mutex);
    // non-critical section
}

Barrier Synchronization

sem arrive1 = 0;
sem arrive2 = 0;

process P1 {
    ...
    signal(arrive1);
    wait(arrive2);
}

process P2 {
    ...
    signal(arrive2);
    wait(arrive1);
}
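The same two-process barrier written with java.util.concurrent.Semaphore (my sketch; the class name and print statements are only for illustration). Each thread signals its own arrival and then waits for the other's.

import java.util.concurrent.Semaphore;

public class TwoProcessBarrier {
    static final Semaphore arrive1 = new Semaphore(0);
    static final Semaphore arrive2 = new Semaphore(0);

    public static void main(String[] args) {
        new Thread(() -> {
            // ... work of P1 before the barrier ...
            arrive1.release();              // signal(arrive1)
            try { arrive2.acquire(); }      // wait(arrive2)
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            System.out.println("P1 passed the barrier");
        }).start();

        new Thread(() -> {
            // ... work of P2 before the barrier ...
            arrive2.release();              // signal(arrive2)
            try { arrive1.acquire(); }      // wait(arrive1)
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            System.out.println("P2 passed the barrier");
        }).start();
    }
}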

Producers and Consumers

typeT buffer;
sem empty = 1;
sem full = 0;

process Producer[i] {
    while (true) {
        // produce data
        wait(empty);
        buffer = data;
        signal(full);
    }
}

process Consumer[i] {
    while (true) {
        // get data
        wait(full);
        result = buffer;
        signal(empty);
    }
}

Binary and Quantity Semaphores
A general semaphore is a non-negative integer; its value can rise to any supported positive number.
A binary semaphore only takes the values 0 and 1; the signalling of a semaphore which has the value 1 has no effect: the semaphore retains the value 1.

Bounded Buffer

typeT buffer;
sem empty = n;
sem full = 0;
sem mutex = 1;

process Producer {
    while (true) {
        // produce data
        wait(empty);
        wait(mutex);
        insert(data);
        signal(mutex);
        signal(full);
    }
}

process Consumer {
    while (true) {
        // get data
        wait(full);
        wait(mutex);
        data = remove();
        signal(mutex);
        signal(empty);
    }
}
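A Java sketch of the same bounded buffer (my addition, assuming a circular int buffer of arbitrary capacity 4 and java.util.concurrent.Semaphore for the three semaphores): empty counts free slots, full counts filled slots, and mutex protects the buffer itself.

import java.util.concurrent.Semaphore;

public class SemaphoreBoundedBuffer {
    private final int[] buffer = new int[4];
    private int in = 0, out = 0;
    private final Semaphore empty = new Semaphore(buffer.length); // sem empty = n
    private final Semaphore full  = new Semaphore(0);             // sem full = 0
    private final Semaphore mutex = new Semaphore(1);             // sem mutex = 1

    public void put(int data) throws InterruptedException {
        empty.acquire();            // wait(empty): block if there is no free slot
        mutex.acquire();            // wait(mutex): protect the insert
        buffer[in] = data;
        in = (in + 1) % buffer.length;
        mutex.release();            // signal(mutex)
        full.release();             // signal(full): one more item available
    }

    public int get() throws InterruptedException {
        full.acquire();             // wait(full): block if there is nothing to remove
        mutex.acquire();
        int data = buffer[out];
        out = (out + 1) % buffer.length;
        mutex.release();
        empty.release();            // signal(empty): one more free slot
        return data;
    }
}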

Deadlock
Circular wait causes deadlock.

Deadlock
Deadlock occurs if the buffer is full:

typeT buffer;
sem empty = n;
sem full = 0;
sem mutex = 1;

process Producer {
    while (true) {
        // produce data
        wait(mutex);
        wait(empty);    // wrong order: holds mutex while waiting for a free slot
        insert(data);
        signal(mutex);
        signal(full);
    }
}

process Consumer {
    while (true) {
        // get data
        wait(full);
        wait(mutex);
        data = remove();
        signal(mutex);
        signal(empty);
    }
}

Criticisms of Semaphores
Semaphores are an elegant low-level synchronisation primitive; however, their use is error-prone.
If a semaphore operation is omitted or misplaced, the entire program may collapse. Mutual exclusion may not be assured, and deadlock may appear just when the software is dealing with a rare but critical event.
A more structured synchronisation primitive is required.
No high-level concurrent programming language relies entirely on semaphores; they are important historically but are arguably not adequate for the real-time domain.

Dining Philosophers
Philosophers either eat or think.
A philosopher needs two forks to be able to eat the spaghetti.
When a philosopher gets hungry, she tries to acquire her left and right forks, one at a time.
How do you avoid deadlock and starvation?

Dining Philosophers

sem fork[5] = {1, 1, 1, 1, 1};

// i = 0 to 3
process Philosopher[i] {
    while (true) {
        wait(fork[i]);      // get left fork
        wait(fork[i+1]);    // get right fork
        // eat;
        signal(fork[i]);
        signal(fork[i+1]);
        // think;
    }
}

process Philosopher[4] {
    while (true) {
        wait(fork[0]);      // get right fork first
        wait(fork[4]);      // then the left fork
        // eat;
        signal(fork[0]);
        signal(fork[4]);
        // think;
    }
}
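A Java sketch of this solution (my addition; the identifiers are invented). Philosopher 4 acquires the forks in the opposite order, which breaks the circular wait and so avoids deadlock.

import java.util.concurrent.Semaphore;

public class DiningPhilosophers {
    static final Semaphore[] fork = new Semaphore[5];

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) fork[i] = new Semaphore(1);   // one fork per philosopher
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    while (true) {
                        int first  = (id < 4) ? id : 0;       // philosopher 4 reverses the order
                        int second = (id < 4) ? id + 1 : 4;
                        fork[first].acquire();
                        fork[second].acquire();
                        System.out.println("Philosopher " + id + " eats");
                        fork[second].release();
                        fork[first].release();
                        // think
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}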

Liveness
If a process does not suffer from
livelock,
deadlock, or
indefinite postponement (starvation),
then it is said to possess liveness.

Semaphores in microC/OS-II

OS_EVENT *DispSem;

int main(void)
{
    OSInit();
    DispSem = OSSemCreate(1);
    OSStart();
}

void DispTask(void *pdata)
{
    INT8U err;
    while (1) {
        OSSemPend(DispSem, 0, &err);
    }
}

void TaskX(void *pdata)
{
    INT8U err;
    while (1) {
        err = OSSemPost(DispSem);
    }
}

There is also a non-blocking OSSemAccept().

Monitors
Monitors provide encapsulation and efficient condition synchronisation.
The critical regions are written as procedures and are encapsulated together into a single module.
All variables that must be accessed under mutual exclusion are hidden; all procedure calls into the module are guaranteed to be mutually exclusive.
Only the operations are visible outside the monitor.

Condition Variables
Different semantics exist. In Hoare's monitors, a condition variable is acted upon by two semaphore-like operators, WAIT and SIGNAL.
A process issuing a WAIT is blocked (suspended) and placed on a queue associated with the condition variable (cf. semaphores: a wait on a condition variable always blocks, unlike a wait on a semaphore).
A blocked process releases its hold on the monitor, allowing another process to enter.
A SIGNAL releases one blocked process. If no process is blocked, then the signal has no effect (cf. semaphores).

Readers/Writers Problem
How can monitors be used to allow many concurrent readers or a single writer, but not both?
Consider a file which needs mutual exclusion between writers and readers, but not between multiple readers.
(figure: a block of data accessed by readers and writers)

Hint
You will need to have an entry and an exit protocol:
Reader: start_read ... stop_read
Writer: start_write ... stop_write

Criticisms of Monitors
The monitor gives a structured and elegant solution to mutual exclusion problems such as the bounded buffer.
It does not, however, deal well with condition synchronization, requiring low-level condition variables.
All the criticisms surrounding the use of semaphores apply equally to condition variables.

Synchronized Methods
Java provides a mechanism by which monitors can be implemented in the context of classes and objects.
There is a lock associated with each object which cannot be accessed directly by the application but is affected by the method modifier synchronized and by block synchronization.
When a method is labeled with the synchronized modifier, access to the method can only proceed once the lock associated with the object has been obtained.
Hence synchronized methods have mutually exclusive access to the data encapsulated by the object, if that data is only accessed by other synchronized methods.
Non-synchronized methods do not require the lock and, therefore, can be called at any time.

Example of Synchronized Methods

public class SharedInteger {
    private int theData;

    public SharedInteger(int initialValue) {
        theData = initialValue;
    }

    public synchronized int read() {
        return theData;
    }

    public synchronized void write(int newValue) {
        theData = newValue;
    }

    public synchronized void incrementBy(int by) {
        theData = theData + by;
    }
}

SharedInteger myData = new SharedInteger(42);
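A usage sketch (my addition, not from the slides; it assumes the SharedInteger class above): two threads increment the same object concurrently, and because incrementBy() is synchronized no updates are lost.

public class SharedIntegerDemo {
    public static void main(String[] args) throws InterruptedException {
        SharedInteger myData = new SharedInteger(42);
        Runnable bump = () -> { for (int i = 0; i < 1000; i++) myData.incrementBy(1); };
        Thread t1 = new Thread(bump), t2 = new Thread(bump);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(myData.read());   // always prints 2042
    }
}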

Block Synchronization
Provides a mechanism whereby a block can be labeled as synchronized.
The synchronized keyword takes as a parameter an object whose lock it needs to obtain before it can continue.
Hence synchronized methods are effectively implementable as:

public int read() {
    synchronized(this) {
        return theData;
    }
}

where this is the Java mechanism for obtaining a reference to the current object.

Warning
Used in its full generality, the synchronized block can undermine one of the advantages of monitor-like mechanisms: that of encapsulating the synchronization constraints associated with an object in a single place in the program.
This is because it is not possible to understand the synchronization associated with a particular object by looking only at the object itself when other objects can name that object in a synchronized statement.
However, with careful use this facility augments the basic model and allows more expressive synchronization constraints to be programmed.

Static Data
Static data is shared between all objects created from the class.
To obtain mutually exclusive access to this data requires all objects to be locked.
In Java, classes themselves are also objects, and therefore there is a lock associated with the class.
This lock may be accessed either by labeling a static method with the synchronized modifier or by naming the class's Class object in a synchronized block statement.
The latter can be obtained via the getClass method (defined in Object).
Note, however, that this class-wide lock is not obtained when synchronizing on the object itself.

Static Data

class StaticSharedVariable {
    private static int shared;

    public int Read() {
        synchronized(this.getClass()) {
            return shared;
        }
    }

    public void Write(int I) {
        synchronized(this.getClass()) {
            shared = I;
        }
    }
}

Could have used instead:

public static synchronized void Write(int I)

Waiting and Notifying
To obtain conditional synchronization requires the methods provided in the predefined Object class:

public void wait() throws InterruptedException;
    // also throws IllegalMonitorStateException
public void notify();
    // throws IllegalMonitorStateException
public void notifyAll();
    // throws IllegalMonitorStateException

These methods should be used only from within methods which hold the object lock.
If called without the lock, the exception IllegalMonitorStateException is thrown.

Waiting and Notifying
The wait method always blocks the calling thread and releases the lock associated with the object.
A wait within a nested monitor releases only the inner lock.
The notify method wakes up one waiting thread; which one is woken is not defined by the Java language.
Notify does not release the lock; hence the woken thread must wait until it can obtain the lock before proceeding.
To wake up all waiting threads requires the use of the notifyAll method.
If no thread is waiting, then notify and notifyAll have no effect.

Thread Interruption
A waiting thread can also be awoken if it is interrupted by another thread.
In this case the InterruptedException is thrown (see later in the course).

Condition Variables
There are no explicit condition variables.
An awoken thread should usually re-evaluate the condition on which it is waiting (if more than one condition exists and they are not mutually exclusive).

public class BoundedBuffer {
    private int buffer[];
    private int first;
    private int last;
    private int numberInBuffer = 0;
    private int size;

    public BoundedBuffer(int length) {
        size = length;
        buffer = new int[size];
        last = 0;
        first = 0;
    }
    // the class continues on the next slide

Mutually Exclusive Waiting

    public synchronized void put(int item) throws InterruptedException {
        while (numberInBuffer == size) {
            wait();                          // buffer full: wait until a get() makes room
        }
        last = (last + 1) % size;            // % is modulus
        numberInBuffer++;
        buffer[last] = item;
        notify();
    }

    public synchronized int get() throws InterruptedException {
        while (numberInBuffer == 0) {
            wait();                          // buffer empty: wait until a put() adds an item
        }
        first = (first + 1) % size;          // % is modulus
        numberInBuffer--;
        notify();
        return buffer[first];
    }
}
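A usage sketch (my addition; the class name BoundedBufferDemo is invented): one producer and one consumer thread sharing the BoundedBuffer above, with put() blocking when the buffer is full and get() blocking when it is empty.

public class BoundedBufferDemo {
    public static void main(String[] args) {
        BoundedBuffer buffer = new BoundedBuffer(4);

        new Thread(() -> {                   // producer
            try { for (int i = 0; i < 10; i++) buffer.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();

        new Thread(() -> {                   // consumer
            try { for (int i = 0; i < 10; i++) System.out.println(buffer.get()); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
    }
}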

Readers-Writers Problem
The standard solution in monitors is to have two condition variables: OkToRead and OkToWrite.
This cannot be directly expressed using a single class.

public class ReadersWriters {   // first solution
    private int readers = 0;
    private int waitingWriters = 0;
    private boolean writing = false;

Readers-Writers Problem

    public synchronized void StartWrite() throws InterruptedException {
        while (readers > 0 || writing) {     // loop to re-test the condition
            waitingWriters++;
            wait();
            waitingWriters--;
        }
        writing = true;
    }

    public synchronized void StopWrite() {
        writing = false;
        notifyAll();                         // wake up everyone
    }

Readers-Writers Problem

    public synchronized void StartRead() throws InterruptedException {
        while (writing || waitingWriters > 0) {
            wait();
        }
        readers++;
    }

    public synchronized void StopRead() {
        readers--;
        if (readers == 0) {
            notifyAll();
        }
    }
}

Arguably, this is inefficient as all waiting threads are woken.

Summary
critical section: code that must be executed under mutual exclusion
producer-consumer system: two or more processes exchanging data via a finite buffer
busy waiting: a process continually checking a condition to see if it is now able to proceed
livelock: an error condition in which one or more processes are prohibited from progressing whilst using up processing cycles
deadlock: a collection of suspended processes that cannot proceed
indefinite postponement (starvation): a process being unable to proceed as resources are not made available

Summary
semaphore: a non-negative integer that can only be acted upon by the atomic procedures WAIT and SIGNAL
Structured primitives: monitors.
Suspension in a monitor is achieved using condition variables.
Monitors and condition variables can be implemented in microC/OS-II using semaphores and abstract data types.
Java's synchronized methods provide monitors within an object-oriented framework.