
1 Task Synchronization Prepared by: Jamil Alomari Suhaib Bani Melhem
Supervised by: Dr. Loai Tawalbeh

2 Outline
Background
Producer Consumer Problem
The Critical-Section Problem
Software Solutions
Hardware Solutions
Semaphore
Deadlock and starvation
Monitors
Message Passing

3 Concurrent Execution
Concurrent execution must give the same results as serial execution.
Concurrent execution with shared data raises the issue of synchronization: to keep data consistent, we need mechanisms that prevent data inconsistency.
As an embedded-systems topic, synchronization is usually introduced through the producer-consumer problem.

4 Synchronizing Reasons
1. To get shared access to resources (variables, buffers, devices, etc.).
2. To communicate.
Cases where cooperating processes need not synchronize to share resources:
All processes only read.
All processes only write.
One process writes, all other processes read.

5 Producers Consumers Systems
One system produces items that are used by another system.
Examples: a shared printer, where the printer acts as the consumer and the computers that produce the documents to be printed are the producers; a sensor network, where the sensors are the producers and the base stations (sinks) are the consumers.

6 Producer Consumer Problem
The producer-consumer problem illustrates the need for synchronization in systems where many processes share a resource. In the problem, two processes share a fixed-size buffer. One process produces information and puts it in the buffer, while the other process consumes information from the buffer. These processes do not take turns accessing the buffer; they both work concurrently. It is also called the bounded-buffer problem.

7 Producer
while (Items_number == Buffer_size)
    ;                               // busy-wait: the buffer is full
Buffer[i] = next_produced_item;
i = (i + 1) % Buffer_size;
Items_number++;

8 Consumer
while (Items_number == 0)
    ;                               // busy-wait: the buffer is empty
Consumed_item = Buffer[j];
j = (j + 1) % Buffer_size;
Items_number--;

9 Producer Consumer Problem
At the register transfer level (RTL), Item_number++ is implemented as:
register1 = Item_number;
register1 = register1 + 1;
Item_number = register1;
and Item_number-- is implemented the same way using a different register.
Consider this execution interleaving, with Item_number = 5 initially:
S0: producer executes register1 = Item_number    {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
S2: consumer executes register2 = Item_number    {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
S4: producer executes Item_number = register1    {Item_number = 6}
S5: consumer executes Item_number = register2    {Item_number = 4}
The correct final value is 5, but this interleaving leaves 4; a different interleaving could leave 6. The result depends on timing: a race condition.
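The lost-update effect above is easy to reproduce. The sketch below is an illustration, not part of the original slides (the thread names and iteration count are arbitrary): it increments and decrements a shared counter from two POSIX threads with no synchronization at all, so the final value is usually not 0.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static volatile long counter = 0;   /* shared and deliberately unprotected */

static void *producer(void *arg) {  /* models Item_number++ */
    for (long i = 0; i < ITERATIONS; i++)
        counter++;                  /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg) {  /* models Item_number-- */
    for (long i = 0; i < ITERATIONS; i++)
        counter--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %ld (expected 0)\n", counter);
    return 0;
}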

10 The Critical-Section Problem
n processes compete to use some shared data. Each process has a code segment, called the critical section, in which the shared data is accessed. Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

11 Solution to Critical-Section Problem
Mutual Exclusion: only one process at a time may be in its critical section.
Progress: a process operating outside its critical section cannot prevent other processes from entering theirs; when several processes attempt to enter their critical sections at the same time, one of them must eventually be allowed in.
Bounded Waiting: a process attempting to enter its critical section will eventually be able to do so.

12 Structure of process Pi
repeat
    critical section
    remainder section
until false

13 Types of solution
Software solutions - algorithms whose correctness does not rely on any assumptions other than positive processing speed.
Hardware solutions - rely on special machine instructions.
Operating system solutions - extend the hardware solutions to provide functions and data structures that support the programmer.

14 Software Solutions
Initial attempts to solve the problem.
Only 2 processes, P0 and P1.
General structure of process Pi (the other process is Pj):
repeat
    entry section
    critical section
    exit section
    remainder section
until false;
Processes may share some common variables to synchronize their actions.

15 Algorithm 1
int turn = 0;          /* shared control variable */
Pi:                    /* i is 0 or 1 */
    while (turn != i)
        ;              /* busy wait */
    CSi;
    turn = 1 - i;
Guarantees mutual exclusion.
Does not guarantee progress: it enforces strict alternation of processes entering their critical sections.
Bounded waiting is violated: suppose one process terminates, the other may wait forever.

16 Algorithm 2
Removes the strict alternation requirement.
int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi is in its critical section */
Pi:
    while (flag[1 - i])
        ;
    flag[i] = TRUE;
    CSi;
    flag[i] = FALSE;
Mutual exclusion violated.
Progress OK.
Bounded waiting OK.

17 Algorithm 3
Restores mutual exclusion.
int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi wants to enter its critical section */
Pi:
    flag[i] = TRUE;
    while (flag[1 - i])
        ;
    CSi;
    flag[i] = FALSE;
Guarantees mutual exclusion.
Violates progress: both processes could set their flags and then deadlock on the while loop.
Bounded waiting violated.

18 Algorithm 4
Attempt to remove the deadlock.
int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi wants to enter its critical section */
Pi:
    flag[i] = TRUE;
    while (flag[1 - i]) {
        flag[i] = FALSE;
        delay;                    /* sleep for some time */
        flag[i] = TRUE;
    }
    CSi;
    flag[i] = FALSE;
Mutual exclusion guaranteed.
Progress violated.
Bounded waiting violated.

19 Peterson's Algorithm
Satisfies all solution requirements.
int flag[2] = { FALSE, FALSE };   /* flag[i] indicates that Pi wants to enter its critical section */
int turn = 0;                     /* turn indicates which process has priority in entering its critical section */
Pi:
    flag[i] = TRUE;
    turn = 1 - i;
    while (flag[1 - i] && turn == 1 - i)
        ;
    CSi;
    flag[i] = FALSE;
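A minimal runnable sketch of Peterson's algorithm for two POSIX threads, not taken from the slides: on modern hardware plain int variables are not enough because the compiler and CPU may reorder the stores and loads, so this sketch uses C11 atomics with the default sequentially consistent ordering. The function names and iteration count are my own choices.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];   /* flag[i]: Pi wants to enter its critical section */
static atomic_int  turn;      /* which process yields priority */
static long counter;          /* shared data protected by the lock */

static void peterson_lock(int i) {
    atomic_store(&flag[i], true);
    atomic_store(&turn, 1 - i);
    while (atomic_load(&flag[1 - i]) && atomic_load(&turn) == 1 - i)
        ;                     /* busy wait */
}

static void peterson_unlock(int i) {
    atomic_store(&flag[i], false);
}

static void *worker(void *arg) {
    int i = *(int *)arg;
    for (int n = 0; n < 1000000; n++) {
        peterson_lock(i);
        counter++;            /* critical section */
        peterson_unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}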

20 Hardware Solutions
Many systems provide hardware support for critical-section code.
Uniprocessors could simply disable interrupts, so the currently running code executes without preemption; this is generally too inefficient on multiprocessor systems, and operating systems that rely on it are not broadly scalable.
Modern machines provide special atomic hardware instructions (atomic = non-interruptible): either test a memory word and set its value, or swap the contents of two memory words.

21 Solution using Fetch and Increment
int nextTicket = 0, serving = 0;
mutexbegin() {
    int myTicket;
    myTicket = FAI(nextTicket);    /* atomic fetch-and-increment */
    while (myTicket != serving)
        ;                          /* busy wait for our turn */
}
mutexend() {
    ++serving;
}
Correctness proof (assuming no process "plays" with the control variables):
Mutual exclusion: by contradiction.
Progress: by inspection; processes operating outside their critical sections are not involved.
Bounded waiting: myTicket is at most n more than serving.
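The FAI primitive above maps directly onto C11's atomic_fetch_add. A hedged sketch of the same ticket lock, with my own function and variable names rather than the slide's:

#include <stdatomic.h>

static atomic_uint next_ticket = 0;   /* ticket dispenser */
static atomic_uint now_serving = 0;   /* ticket currently allowed in */

static void ticket_lock(void) {
    unsigned my = atomic_fetch_add(&next_ticket, 1);   /* FAI(nextTicket) */
    while (atomic_load(&now_serving) != my)
        ;                                              /* spin until served */
}

static void ticket_unlock(void) {
    atomic_fetch_add(&now_serving, 1);                 /* ++serving */
}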

22 TestAndSet
boolean TestAndSet (boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

23 Solution using TestAndSet
Shared Boolean variable lock, initialized to FALSE.
Solution:
while (true) {
    while (TestAndSet(&lock))
        ;            /* do nothing */
    // critical section
    lock = FALSE;
    // remainder section
}
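C11 exposes this hardware primitive as atomic_flag_test_and_set. A minimal spinlock built on it might look like the sketch below (the names are mine, not the slide's):

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;     /* clear == unlocked */

static void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))     /* TestAndSet(&lock) */
        ;                                       /* spin while it was already set */
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock);                   /* lock = FALSE */
}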

24 Semaphore
Synchronization tool that, in its full blocking implementation, does not require busy waiting.
A semaphore S is an integer variable that indicates whether it is safe to proceed.
Two standard operations modify S: wait() and signal(), originally called P() and V().
Less complicated than the preceding solutions.
S can only be accessed via these two indivisible (atomic) operations:
wait (S) {
    while (S <= 0)
        ;        // no-op
    S--;
}
signal (S) {
    S++;
}
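POSIX provides counting semaphores with essentially these semantics: sem_wait corresponds to wait()/P() and sem_post to signal()/V(). A small illustrative usage sketch (not from the slides):

#include <semaphore.h>

int main(void) {
    sem_t s;
    sem_init(&s, 0, 1);   /* initial value 1: behaves like a mutex */

    sem_wait(&s);         /* wait(S): decrements, blocks if S == 0 */
    /* ... critical section ... */
    sem_post(&s);         /* signal(S): increments, wakes a waiter */

    sem_destroy(&s);
    return 0;
}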

25 Semaphore Implementation
Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
Thus, the implementation itself becomes a critical-section problem, with the wait and signal code placed in the critical section.
This implementation could still use busy waiting, but the code is short, so there is little busy waiting if the critical section is rarely occupied.
Note that applications may spend a lot of time in critical sections, so busy waiting is not a good solution there.

26 Semaphore Implementation with no Busy waiting
With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items: value (of type integer) and a pointer to the next record in the list. Two operations: block - place the process invoking the operation on the appropriate waiting queue; wakeup - remove one of the processes from the waiting queue and place it in the ready queue.

27 Semaphore Implementation with no Busy waiting (Cont.)
Implementation of wait:
wait (S) {
    value--;
    if (value < 0) {
        add this process to the waiting queue
        block();
    }
}
Implementation of signal:
signal (S) {
    value++;
    if (value <= 0) {
        remove a process P from the waiting queue
        wakeup(P);
    }
}

28 Semaphore
In the Producer-Consumer problem, semaphores are used for two purposes: mutual exclusion and synchronization. In the following example there are three semaphores: full, which counts the number of slots that are full; empty, which counts the number of slots that are empty; and mutex, which enforces mutual exclusion.
BufferSize = 3;
semaphore mutex = 1;             // controls access to the critical section
semaphore empty = BufferSize;    // counts the number of empty buffer slots
semaphore full = 0;              // counts the number of full buffer slots

29 Solution to the Producer-Consumer problem using Semaphores
Producer() {
    while (TRUE) {
        make_new(item);       // produce an item
        wait(&empty);         // wait for an empty slot
        wait(&mutex);         // enter critical section
        put_item(item);       // buffer access
        signal(&mutex);       // leave critical section
        signal(&full);        // increment the full semaphore
    }
}

30 Solution to the Producer-Consumer problem using Semaphores
Consumer() {
    while (TRUE) {
        wait(&full);          // decrement the full semaphore
        wait(&mutex);         // enter critical section
        remove_item(item);    // take an item from the buffer
        signal(&mutex);       // leave critical section
        signal(&empty);       // increment the empty semaphore
        consume_item(item);   // consume the item
    }
}
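For reference, the same structure can be written with POSIX semaphores and pthreads. This is a sketch under my own naming; the buffer size, item type, item count, and thread setup are illustrative, not the slides' exact program:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 3
#define N_ITEMS     10

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;              /* producer / consumer indices */
static sem_t mutex, empty, full;

static void *producer(void *arg) {
    for (int item = 0; item < N_ITEMS; item++) {
        sem_wait(&empty);                /* wait for an empty slot */
        sem_wait(&mutex);                /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);                /* leave critical section */
        sem_post(&full);                 /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int n = 0; n < N_ITEMS; n++) {
        sem_wait(&full);                 /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);                /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}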

31 Difficulties with Semaphore
wait(S) and signal(S) calls are scattered among several processes, so it is difficult to understand their overall effect.
Usage must be correct in all of the processes.
One bad process or one programming error can kill the whole system.

32 Deadlock and starvation problems
Deadlock: two or more processes wait indefinitely for an event that can be caused by only one of the waiting processes.
Let S and Q be two semaphores initialized to 1:
    P0              P1
    wait(S);        wait(Q);
    wait(Q);        wait(S);
    ...             ...
    signal(S);      signal(Q);
    signal(Q);      signal(S);
Starvation (indefinite blocking): a process may never be removed from the semaphore queue in which it is suspended.
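One common way to avoid this particular deadlock, not spelled out on the slide, is to make every process acquire the semaphores in the same global order, so the circular wait cannot arise. A sketch with POSIX semaphores (names and structure are my own):

#include <semaphore.h>

static sem_t S, Q;

static void init_sems(void) {
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
}

/* Both processes acquire S before Q, so neither can hold Q while waiting
   for S: the circular wait shown on the slide is impossible. */
static void p0_section(void) {
    sem_wait(&S);
    sem_wait(&Q);
    /* ... use both resources ... */
    sem_post(&Q);
    sem_post(&S);
}

static void p1_section(void) {
    sem_wait(&S);        /* same order as p0_section, not Q first */
    sem_wait(&Q);
    /* ... */
    sem_post(&Q);
    sem_post(&S);
}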

33 Readers-Writers Problem
A data set is shared among a number of concurrent processes.
Readers only read the data set; they do not perform any updates.
Writers can both read and write.
Problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at any time.
Shared data: the data set; semaphore mutex initialized to 1; semaphore wrt initialized to 1; integer readcount initialized to 0.

34 Readers-Writers Problem (Cont.)
The structure of a writer process:
while (true) {
    wait(wrt);
    // writing is performed
    signal(wrt);
}

35 Readers-Writers Problem (Cont.)
The structure of a reader process:
while (true) {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);        // first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);      // last reader lets writers in
    signal(mutex);
}
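In practice, POSIX already packages this pattern as a read-write lock. A short, illustrative sketch (the variable and function names are mine):

#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_data;

static int reader(void) {
    pthread_rwlock_rdlock(&rw);    /* many readers may hold this at once */
    int v = shared_data;           /* reading is performed */
    pthread_rwlock_unlock(&rw);
    return v;
}

static void writer(int v) {
    pthread_rwlock_wrlock(&rw);    /* exclusive: no readers, no other writer */
    shared_data = v;               /* writing is performed */
    pthread_rwlock_unlock(&rw);
}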

36 Dining-Philosophers Problem

37 Dining-Philosophers Problem (Cont.)
The structure of philosopher i:
while (true) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
}
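As written, all five philosophers can pick up their left chopstick at the same time and deadlock. One standard remedy, not shown on the slides, is to have odd-numbered philosophers pick up their chopsticks in the opposite order, which breaks the circular wait. A sketch with POSIX semaphores (my own naming):

#include <semaphore.h>

#define N 5
static sem_t chopstick[N];

static void init_chopsticks(void) {
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
}

static void philosopher(int i) {
    int first  = i;
    int second = (i + 1) % N;
    if (i % 2 == 1) {              /* odd philosophers reverse the order, */
        first  = (i + 1) % N;      /* so a circular wait cannot form      */
        second = i;
    }
    for (;;) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        /* eat */
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
        /* think */
    }
}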

38 Real Time Systems
In real-time systems a queueing (FIFO) policy is not practical, so a highest-priority-first policy is used instead.
In shared systems the concern is fairness, but in real-time systems the focus is stability: the system has to meet its critical deadlines even if not all deadlines can be met.
Priority inversion: if a higher-priority thread wants to enter the critical section while a lower-priority thread is inside it, the higher-priority thread must wait for the lower-priority thread to complete.

39 Priority Inversion
Consider the execution of four periodic threads A, B, C, and D, and two resources Q and V:
Thread   Priority   Execution Sequence   Arrival Time
A        low        EQQQQE               0
B        medium     EE                   2
C        medium     EVVE                 2
D        high       EEQVE                4
where E is executing for one time unit, Q is accessing resource Q for one time unit, and V is accessing resource V for one time unit.

40 Example

41 Priority Inheritance
From the previous figure we can see that thread D has the highest priority yet finishes last; this is the priority-inversion problem (threads with medium priority suspend the higher-priority thread).
Priority Inheritance: let the lower-priority task run at the highest priority of the higher-priority tasks it blocks. In this way, the medium-priority tasks can no longer preempt the low-priority task that has blocked the higher-priority task.
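On POSIX systems, priority inheritance is typically requested per mutex. A hedged sketch follows; support for PTHREAD_PRIO_INHERIT is platform dependent, and the function name init_pi_mutex is my own:

#include <pthread.h>

static pthread_mutex_t m;

/* Create a mutex whose holder temporarily inherits the priority of the
   highest-priority thread blocked on it. */
static int init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    int rc = pthread_mutex_init(&m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}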

42 Priority Inheritance

43 Priority Ceiling
Priority Ceiling: each mutex is assigned a priority ceiling, equal to the priority of the highest-priority task that may use that mutex.
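POSIX models this with the PTHREAD_PRIO_PROTECT protocol, where the ceiling is set per mutex. A sketch, with the ceiling value passed in by the caller (the function name is my own, and platform support varies):

#include <pthread.h>

static pthread_mutex_t m;

/* Create a mutex with the priority-ceiling protocol: while a thread holds
   the mutex it runs at the mutex's ceiling priority. */
static int init_ceiling_mutex(int ceiling) {   /* e.g. the priority of the highest-priority user of m */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, ceiling);
    int rc = pthread_mutex_init(&m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}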

44 Priority Ceiling

45 Synchronization with I/O devices
An I/O device has three states:
Idle: inactive, no I/O in progress.
Busy: accepting output or generating input.
Done: ready for a transaction.
Moving from one state to another changes a status flag.

46 Synchronization
Busy-waiting loop: software checks a status flag in a loop that does not exit until the flag is set to one (new data has arrived).
[Figure: status-flag state diagram - "waiting for input" (busy) -> "new input is ready" (done) when new data arrives]
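As a concrete illustration, since the slides do not name a specific device, a busy-waiting input read on a memory-mapped device might look like the sketch below. The register addresses and bit layout are entirely made up for the example:

#include <stdint.h>

/* Hypothetical memory-mapped device registers, for illustration only. */
#define DEV_STATUS (*(volatile uint32_t *)0x40001000u)   /* bit 0: done flag */
#define DEV_DATA   (*(volatile uint32_t *)0x40001004u)

static uint32_t busy_wait_read(void) {
    while ((DEV_STATUS & 0x1u) == 0)   /* spin until the done flag is set */
        ;
    return DEV_DATA;                   /* reading the data typically clears the flag */
}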

47 Synchronization with no Buffering

48 Synchronization with Buffering

49 Synchronization with blind cycles

50 FIFO Queue

51 Synchronization-DMA

52 DMA

53 References http://phoenix.goucher.edu/~kelliher/cs42/sep27.html

