1 Process Synchronization Presentation 2 Group A4: Sean Hudson, Syeda Taib, Manasi Kapadia

2 Outline
– Background
– Solution Requirements
– Types of Solutions
– Software Solutions
– Hardware Solutions
– Drawbacks
– Semaphore
– Classical Example
– Distributed System Topics
– Clock Concepts
– Lamport's Logical Clock
– Algorithms
– Summary
– References

3 Background
– Process concurrency: concurrent processes may need to share data and resources. Unfortunately, this sharing may result in data inconsistency.
– Maintaining data consistency requires a mechanism that ensures the orderly execution of concurrent processes.
– Race condition: a situation where multiple processes read and then modify shared data concurrently, so the final value of the shared data depends on which process finishes last.
– Example: two processes update the same variable in memory; the result depends on which process executed last.

4 Background (cont.)
– Critical Section (CS): a portion of a process' code where shared resources are accessed.
– Execution of a CS must be mutually exclusive, so each process must request permission to enter its CS.
– The solution should be designed so that the outcome does not depend on the order in which processes execute.
– General structure of a process for mutual exclusion:

    do {                    // repeat forever
        Entry Section
        Critical Section
        Exit Section
        Remainder Section
    } while (1);

5 How do processes synchronize in order to avoid race conditions?

6 Solution Requirements
– Mutual exclusion: at most one process inside the CS at a time.
– Progress and bounded waiting: no process outside the CS may prevent another process from entering its CS, and a process requesting to enter its CS must not be delayed indefinitely.
– A process remains inside its CS for a finite time only.

7 Types of Solutions
– Busy waiting: processes check continuously whether it is safe to enter the CS. Examples: Peterson's algorithm, test-and-set instructions.
– Block and resume: a process blocks when it attempts to enter a CS that another process already occupies; when a process leaves its CS, it allows the next waiting process to enter. Examples: semaphores, monitors.

8 Software Solution
Peterson's algorithm for two processes Pi and Pj:

    // Process Pi
    do {
        flag[i] := true;                    // I want in
        turn := j;                          // but I let the other go first
        while (flag[j] and turn = j)
            ;                               // do nothing (busy wait)
        Critical Section
        flag[i] := false;                   // I no longer want in
        Remainder Section
    } while (1);

– Initialization: flag[0] := flag[1] := false; turn := 0 or 1.
– Willingness to enter the CS is announced by flag[i] := true.
– If both processes attempt to enter their CS simultaneously, only one assignment to turn will last, so only one process gets in.
– Exit section: flag[i] := false specifies that Pi no longer wants to enter the CS.
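The pseudocode above can be made runnable; the following is a minimal sketch for two POSIX threads using C11 atomics with sequentially consistent operations, so the flag/turn handshake is not reordered by the compiler or CPU (without that, Peterson's algorithm is not safe on modern hardware). The names worker, counter and ITERS are illustrative, not from the slides.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ITERS 100000

    static atomic_bool flag[2];            /* flag[i]: thread i wants to enter its CS */
    static atomic_int  turn;               /* whose turn it is to yield */
    static long counter;                   /* shared data protected by the CS */

    static void *worker(void *arg) {
        int i = (int)(intptr_t)arg;        /* this thread's id: 0 or 1 */
        int j = 1 - i;                     /* the other thread's id */
        for (int k = 0; k < ITERS; k++) {
            atomic_store(&flag[i], true);  /* I want in */
            atomic_store(&turn, j);        /* but I let the other go first */
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                          /* busy wait (entry section) */
            counter++;                     /* critical section */
            atomic_store(&flag[i], false); /* exit section: I no longer want in */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, worker, (void *)(intptr_t)0);
        pthread_create(&t1, NULL, worker, (void *)(intptr_t)1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
        return 0;
    }

With the sequentially consistent loads and stores the final counter is always 2 * ITERS; replacing them with plain variables lets the race condition reappear.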

9 Drawbacks of Software Solutions
– They rely on busy waiting, which consumes processor time; if a CS is long, it is more efficient to block the waiting processes.
– Solutions that instead disable interrupts achieve mutual exclusion at the cost of efficiency, and on a multiprocessor system disabling interrupts on one processor does not stop the others, so mutual exclusion cannot be guaranteed that way.

10 Hardware Solution
– The hardware prevents simultaneous access to a memory location.
– Test-and-Set instruction: the read and the write happen in one atomic step. Works on both multiprocessor and uniprocessor systems.

    bool TestSet(boolean& lock) {
        if (lock = false) {      // lock is free
            lock := true;        // take it
            return true;
        } else {
            return false;
        }
    }

– Shared variable: lock := false.
– Only the first Pi that sets lock enters the CS:

    // Process Pi
    do {
        while (not TestSet(lock))
            ;                    // busy wait
        Critical Section
        lock := false;
        Remainder Section
    } while (1);
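For comparison, here is a minimal sketch of the same lock in C using C11's atomic_flag, whose atomic_flag_test_and_set() performs the atomic read-and-set described above (it returns the previous value, so the loop spins while the lock was already taken). The names spin_lock and spin_unlock are illustrative.

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear == unlocked */

    static void spin_lock(void) {
        /* Atomically set the flag and get its old value; keep busy
           waiting until the old value was clear, i.e. we set it. */
        while (atomic_flag_test_and_set(&lock))
            ;
    }

    static void spin_unlock(void) {
        atomic_flag_clear(&lock);                  /* exit section */
    }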

11 Drawbacks of Hardware Solution
– Does not prevent busy waiting by itself.
– Starvation can occur: when a process exits its CS, the selection of the next process to enter is arbitrary.
– Does not prevent deadlock.

12 Semaphore
– Principle: "Two or more processes can cooperate by means of signals, such that a process can be forced to stop at a specified place until it has received a specific signal."
– Avoids busy waiting: when a process has to wait, it is placed in a queue of processes blocked on the same event.
– A semaphore is an integer variable initialized to a nonnegative value.
– A semaphore can only be accessed via two atomic and mutually exclusive operations: Signal(S), which transmits a signal via semaphore S, and Wait(S), which receives a signal via semaphore S.

13 Semaphore (cont.)
– For the same semaphore S, two processes cannot be inside Wait(S) and Signal(S) at the same time.
– Each CS should be bracketed by Wait(S) and Signal(S). Example:

    // Process Pi
    repeat
        Wait(S)
        Critical Section
        Signal(S)
        Remainder Section
    forever
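As a concrete illustration (not from the slides), the same Wait/Signal bracketing can be spelled with POSIX semaphores, where sem_wait corresponds to Wait(S) and sem_post to Signal(S); the shared counter and the iteration count are made up for the example.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t S;             /* binary semaphore, initialized to 1 */
    static long shared = 0;     /* data protected by the critical section */

    static void *proc(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&S);       /* Wait(S): blocks if another process is inside */
            shared++;           /* critical section */
            sem_post(&S);       /* Signal(S) */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&S, 0, 1);     /* nonnegative initial value: 1 permit */
        pthread_create(&a, NULL, proc, NULL);
        pthread_create(&b, NULL, proc, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %ld\n", shared);   /* expected 200000 */
        sem_destroy(&S);
        return 0;
    }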

14 Drawbacks of Semaphores
– If wait() and signal() calls are scattered throughout a program, it is difficult to understand their combined effect.
– A programming error in one process can cause the entire collection of cooperating processes to fail.

15 Producer-Consumer Example
– Each producer process produces items that are stored in a shared buffer.
– Each consumer process removes items from the buffer in the order they were produced.
(Figure: producers and consumers connected through a shared buffer.)

16 Producer-Consumer (cont.)
Requirements:
– A producer must not attempt to put an item into the buffer when it is full.
– A consumer must not attempt to remove an item from the buffer when it is empty.
– Buffer operations must not overlap.
Buffer: a circular bounded buffer that can hold up to N items.
Semaphores:
– Semaphore M enforces mutual exclusion on the buffer: only one process at a time can access it.
– Semaphore F synchronizes producer and consumer on the number of consumable (full) items.
– Semaphore E synchronizes producer and consumer on the number of empty slots.

17 Producer-Consumer (cont.)
Initialization: in := 0; out := 0; semaphore M := 1; F := 0; E := N

    // Producer
    repeat
        produce v;
        wait(E);
        wait(M);
        append(v);
        signal(M);
        signal(F);
    forever

    append(v):
        buffer[in] := v;
        in := (in + 1) mod N;

    // Consumer
    repeat
        wait(F);
        wait(M);
        w := take();
        signal(M);
        signal(E);
        consume w;
    forever

    take():
        w := buffer[out];
        out := (out + 1) mod N;
        return w;

18 Producer-Consumer (cont.)
Producer:
– Performs wait(E) to make sure the number of empty slots is at least 1.
– Performs wait(M) before appending and signal(M) afterwards to keep the consumer out of the buffer.
– Performs signal(F) after each append to increment the count of full slots.
Consumer:
– Must first wait(F) to check that there is an item to consume, then uses wait(M) and signal(M) around its access to the buffer.
– Performs signal(E) to increment the number of empty slots.
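The bounded-buffer scheme above translates directly to POSIX semaphores; the following runnable sketch keeps the slide's names M, F and E, while the buffer size N = 8, the item counts and the printf are illustrative choices.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                       /* circular buffer capacity */

    static int buffer[N];
    static int in = 0, out = 0;
    static sem_t M, F, E;             /* mutex, full count, empty count */

    static void *producer(void *arg) {
        (void)arg;
        for (int v = 0; v < 32; v++) {
            sem_wait(&E);             /* wait for an empty slot */
            sem_wait(&M);             /* lock the buffer */
            buffer[in] = v;           /* append(v) */
            in = (in + 1) % N;
            sem_post(&M);
            sem_post(&F);             /* one more consumable item */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 32; i++) {
            sem_wait(&F);             /* wait for an item */
            sem_wait(&M);
            int w = buffer[out];      /* take() */
            out = (out + 1) % N;
            sem_post(&M);
            sem_post(&E);             /* one more empty slot */
            printf("consumed %d\n", w);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&M, 0, 1);
        sem_init(&F, 0, 0);
        sem_init(&E, 0, N);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }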

19 Distributed System Topics
Properties of a distributed system:
– Relevant information is scattered among multiple nodes.
– Processes make decisions based on local information only.
– A single point of failure should be avoided.
– There is no common global clock.

20 Clock Concepts
– Clock tick: the clock interrupts at regular time intervals.
– Clock skew: the difference between two clock sources.
– Logical clocks: provide an ordering of events rather than real time; all nodes must agree on the clock values even if they do not match real time.
– Physical clocks: all nodes must agree on the clock values, and the clocks must not deviate from real time by more than a certain amount.

21 Lamport's Logical Clock
– Guarantees the same event ordering on all nodes, but does not guarantee true time ordering.
– Happens-before relation (a → b) arises in two situations: if a and b are events in the same process and a occurs before b, then a → b; if a is the sending of a message and b is the receipt of that message by a different process, then a → b.
– The relation is transitive: if a → b and b → c, then a → c.
– Concurrent events: there is no happens-before relation between them, because no message exchange connects them.

22 Lamport's Algorithm
– Each event x is assigned a time value C(x); if a → b then C(a) < C(b).
– The clock always moves forward, never backward; the minimum clock tick is 1.
– No two events are allowed to occur at exactly the same time: if a → b within the same process, then C(a) < C(b); if a and b are the sending and receiving of a message, then C(a) < C(b); for all distinct events a and b, C(a) != C(b).
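A minimal sketch of these update rules for one process (the type and function names are illustrative, not from the slides): every local event or message send advances the clock by at least one tick, and a receive pushes the clock past the timestamp carried by the message. Making timestamps globally unique, as the last rule requires, is usually done by appending the process id as a tie-breaker; that step is omitted here.

    #include <stdint.h>

    typedef struct { uint64_t time; } lamport_clock;

    /* Local event or message send: advance the clock by at least one tick. */
    static uint64_t lamport_tick(lamport_clock *c) {
        return ++c->time;
    }

    /* Message receive: the receiver's clock must move past the sender's
       timestamp so that C(send) < C(receive). */
    static uint64_t lamport_receive(lamport_clock *c, uint64_t msg_time) {
        c->time = (msg_time > c->time ? msg_time : c->time) + 1;
        return c->time;
    }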

23 Lamport's Algorithm (figure): three processes whose clocks tick at different rates exchange messages; on receipt, a clock that lags behind the message's timestamp is advanced past it.

24 Mutual Exclusion
Centralized algorithm:
– An elected coordinator controls access to the critical section.
– A client must send a request to the coordinator to enter the critical section; the coordinator grants the resource if it is available, and queues the client when it is not.
– Requires only three messages per entry and exit of the CS.
Disadvantages:
– Single point of failure.
– The coordinator becomes a performance bottleneck in a large system.

25 Mutual Exclusion (cont.)
Distributed algorithm (Ricart and Agrawala), which assumes Lamport's logical clocks:
– Initial request: a process sends a CS request, tagged with a timestamp, to every other process.
– Response: if the receiver is not in the CS and does not want to enter it, it sends back OK; if the receiver is in the CS, it queues the request; if the receiver also wants to enter, the decision is based on the timestamps of the two requests: the lowest timestamp wins.

26 Mutual Exclusion (cont.)
Distributed algorithm (cont.), sender process:
– Waits until permissions from all other processes have been received, then enters the critical section.
– After exiting the critical section, it sends OK to every process on its queue and removes them from the queue.
Drawbacks:
– Needs 2(n – 1) messages per entry, which is worse than the centralized algorithm.
– The single point of failure becomes n points of failure, so the system's reliability is n times worse.
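A sketch of just the receiver's decision rule in the Ricart-Agrawala algorithm (message transport, the broadcast, and the deferred-reply queue are omitted; the state names and function signature are illustrative). Ties on equal timestamps are broken by process id, the usual way to turn Lamport timestamps into a total order.

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { RELEASED, WANTED, HELD } cs_state;

    /* Decide whether to reply OK to an incoming request immediately or
       defer it. my_ts/my_id describe this process's own pending request
       (valid when state == WANTED); req_ts/req_id describe the request. */
    static bool reply_ok_now(cs_state state,
                             uint64_t my_ts, int my_id,
                             uint64_t req_ts, int req_id) {
        if (state == HELD)
            return false;                  /* in the CS: queue the request */
        if (state == WANTED) {
            /* Both want the CS: the lower (timestamp, id) pair wins. */
            if (req_ts < my_ts || (req_ts == my_ts && req_id < my_id))
                return true;               /* requester wins: send OK */
            return false;                  /* we win: defer until we exit */
        }
        return true;                       /* RELEASED: always OK */
    }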

27 Summary

28 References
– Casavant, Thomas and Singhal, Mukesh. Distributed Computing Systems. IEEE Press.
– Mullender, Sape. Distributed Systems. ACM Press.
– Stallings, William. Operating Systems.
– Tanenbaum, Andrew. Distributed Operating Systems.

