Chapter 6.3 Mutual Exclusion


Chapter 6.3 Mutual Exclusion
CSc8320 - Advanced Operating Systems
Berkay Aydin, September 23rd, 2015

PART I: FUNDAMENTALS
Contents: Introduction, Centralized Algorithm, Decentralized Algorithm, Distributed Algorithm, Token Ring Algorithm, Summary and Comparison

Introduction
- Fundamental aspect of distributed systems: concurrency and collaboration among multiple processes
- Simultaneous access to shared resources
- Prevention of corruption and inconsistency of resources
- Mutual exclusion
- Distributed algorithms: token-based and permission-based

Fundamental to distributed systems is the concurrency and collaboration among multiple processes. In many cases, this also means that processes need to simultaneously access the same resources. To prevent such concurrent accesses from corrupting a resource, or making it inconsistent, solutions are needed to grant processes mutually exclusive access. In this section, we look at some of the more important distributed algorithms that have been proposed.

In token-based solutions, mutual exclusion is achieved by passing a special message between the processes, known as a token. There is only one token available, and whoever holds that token is allowed to access the shared resource. When finished, the token is passed on to the next process. If a process holding the token is not interested in accessing the resource, it simply passes it on. Token-based solutions have a few important properties. First, depending on how the processes are organized, they can fairly easily ensure that every process gets a chance at accessing the resource; in other words, they avoid starvation. Second, deadlocks, in which several processes wait for each other to proceed, can easily be avoided, which contributes to their simplicity. Unfortunately, the main drawback of token-based solutions is a rather serious one: when the token is lost (e.g., because the process holding it crashed), an intricate distributed procedure needs to be started to ensure that a new token is created and, above all, that it is the only token.

Many distributed mutual exclusion algorithms follow a permission-based approach instead. In this case, a process wanting to access the resource first requires the permission of the other processes. There are many different ways of granting such permission, and in the sections that follow we consider a few of them.

Overview
- Centralized Algorithm
- Decentralized Algorithm
- Distributed Algorithm
- Token Ring Algorithm
- Summary and Comparison

Centralized Algorithm (Part I: Fundamentals)

Centralized Algorithm
- Straightforward: simulates how mutual exclusion is handled in a one-processor system
- One process is elected as the coordinator
- A queue of requests is maintained and controlled by the coordinator
- Easy to implement: request, grant, release messages

Centralized Algorithm: Method
1. Select a coordinator.
2. To access a shared resource, ask the coordinator for permission (request).
3. If the resource is not in use, the coordinator gives permission (grant); otherwise the coordinator moves the request to the queue (it may also send a message explicitly denying permission), and the requester waits until the resource becomes available.
4. When finished using the resource, send a message to the coordinator releasing the resource (release).

Centralized Algorithm (figure illustrating the request, grant, and release exchange; image taken from Tanenbaum & van Steen, 2006)
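To make the request/grant/release exchange concrete, here is a minimal single-machine sketch of the centralized algorithm. It is an illustration only, not the implementation behind the slides: the process and coordinator names, the message tuples, and the use of in-memory queues in place of a real network are all assumptions.

```python
# Minimal sketch of the centralized algorithm (message passing is simulated
# with in-memory queues rather than a network).
from queue import Queue
from collections import deque
import threading

coordinator_inbox = Queue()   # messages sent to the coordinator
client_inboxes = {}           # per-process reply channels

def coordinator():
    holder = None             # process currently in the critical region
    waiting = deque()         # queued REQUESTs
    while True:
        kind, pid = coordinator_inbox.get()
        if kind == "REQUEST":
            if holder is None:
                holder = pid
                client_inboxes[pid].put("GRANT")   # resource free: grant at once
            else:
                waiting.append(pid)                # busy: queue it, no reply yet
        elif kind == "RELEASE":
            holder = waiting.popleft() if waiting else None
            if holder is not None:
                client_inboxes[holder].put("GRANT")

def client(pid):
    client_inboxes[pid] = Queue()
    coordinator_inbox.put(("REQUEST", pid))        # ask permission
    client_inboxes[pid].get()                      # block until GRANT arrives
    print(f"process {pid} in critical region")     # use the shared resource
    coordinator_inbox.put(("RELEASE", pid))        # give it back

threading.Thread(target=coordinator, daemon=True).start()
threads = [threading.Thread(target=client, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

Only three messages are exchanged per entry/exit (request, grant, release), which is the property the comparison slide later relies on.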

Centralized Algorithm: Advantages and Disadvantages
Advantages:
- Easy to implement
- Processes do not wait forever (no starvation)
Disadvantages:
- The coordinator is a single point of failure
- A single coordinator can become a performance bottleneck

Decentralized Algorithm (Part I: Fundamentals)

Decentralized Algorithm
- A voting algorithm, DHT-based
- The resource is replicated n times, and each replica has its own coordinator
- To gain access, a process must obtain a majority vote from the coordinators before its request is accepted

Decentralized Algorithm: Method
- n replicas of a resource have n coordinators
- Every process (client) that wants to access the resource sends a request to each coordinator
- A coordinator grants permission (a lease) if the resource is not currently owned by anyone; if it is owned, the coordinator sends a reject back to the client
- The client that obtains a majority of the votes (m > n/2) gets permission
- Clients that lose the vote release the votes they did acquire and retry after a random period of time
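The client side of the voting step can be sketched as below. This is a hedged illustration only: the request_vote and release_vote callbacks stand in for the real messages to the coordinators, and the random back-off bound is an assumption of the sketch, not part of the published protocol.

```python
import random
import time

def acquire(resource, coordinators, request_vote, release_vote, max_backoff=1.0):
    """Try to collect a majority of votes (m > n/2) for `resource`.

    `request_vote(c, resource)` returns True if coordinator c grants its vote,
    False if it rejects; both callbacks are assumptions of this sketch."""
    n = len(coordinators)
    while True:
        granted = [c for c in coordinators if request_vote(c, resource)]
        if len(granted) > n // 2:        # majority reached: permission obtained
            return granted               # caller must release these votes later
        for c in granted:                # lost the vote: give the votes back
            release_vote(c, resource)
        time.sleep(random.uniform(0, max_backoff))   # retry after a random delay
```

After using the resource, the client would release every vote it holds so the coordinators can grant them to the next requester.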

Decentralized Algorithm: Possible Shortcomings
- When a coordinator crashes and restarts, it resets its state and forgets the vote it had granted; such a reset may cause the coordinator to have a "change of heart" and vote again, thereby breaking mutual exclusion. Image taken from (Lin et al., 2004)

Decentralized Algorithm: Possible Shortcomings
- Performance of the protocol drops when many clients want to access the same resource at once: nobody gets a majority of the votes, and utilization drops. Image taken from (Lin et al., 2004)

Distributed Algorithm (Part I: Fundamentals)

Distributed Algorithm
- A deterministic distributed mutual exclusion algorithm
- Requires a total (unambiguous) ordering of all events in the system
- When a process wants to access a resource, it builds a message containing: the name of the resource, its process number, and the current (logical) time
- Message delivery is assumed to be reliable

Distributed Algorithm: Method
1. The process creates a message for the resource (name, process number, timestamp).
2. The process sends it to everyone, including itself.
3. When a receiver (another process) gets the message, it does one of the following:
   - If the receiver does not want to access the resource, it sends back OK.
   - If the receiver already has access, it queues the message.
   - If the receiver also wants access but has not yet obtained it, it compares the timestamp of the incoming message with the timestamp of the message it sent to the others: if its own timestamp is larger, it sends back OK; otherwise it queues the message.

Distributed Algorithm
Process 0 sends everyone a request with timestamp 8, while at the same time process 2 sends everyone a request with timestamp 12. Process 1 is not interested in the resource, so it sends OK to both senders. Processes 0 and 2 both see the conflict and compare timestamps. Process 2 sees that it has lost, so it grants permission to 0 by sending OK. Process 0 now queues the request from 2 for later processing and accesses the resource, as shown in Fig. 6-15(b). When it is finished, it removes the request from 2 from its queue and sends an OK message to process 2, allowing the latter to go ahead, as shown in Fig. 6-15(c). The algorithm works because in the case of a conflict the lowest timestamp wins, and everyone agrees on the ordering of the timestamps.
Image taken from (Tanenbaum & van Steen, 2006)
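The per-process decision rule described above can be sketched as a small state machine. The Process class, its method names, and the abstract send callback are assumptions made for illustration; the logical clock, the broadcast mechanism, and the wait for all OK replies are left out.

```python
class Process:
    def __init__(self, pid, send):
        self.pid = pid
        self.send = send          # send(dest_pid, message), assumed reliable
        self.state = "RELEASED"   # RELEASED, WANTED, or HELD
        self.my_ts = None         # timestamp of my outstanding request
        self.deferred = []        # requests queued while I hold or want access

    def request_access(self, peers, timestamp):
        self.state, self.my_ts = "WANTED", timestamp
        for p in peers:           # broadcast (resource, process number, time)
            self.send(p, ("REQUEST", timestamp, self.pid))
        # ... wait until an OK has arrived from every peer, then: self.state = "HELD"

    def on_request(self, ts, sender):
        # Ties on equal timestamps are broken by process id (total ordering).
        if self.state == "HELD" or (
            self.state == "WANTED" and (self.my_ts, self.pid) < (ts, sender)
        ):
            self.deferred.append(sender)   # I win the conflict: queue the request
        else:
            self.send(sender, ("OK",))     # not interested, or I lost the conflict

    def release(self):
        self.state = "RELEASED"
        for p in self.deferred:            # answer everything that was deferred
            self.send(p, ("OK",))
        self.deferred = []
```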

Distributed Algorithm: Disadvantages and Improvements
Disadvantages:
- n points of failure
- Group membership must be maintained
- All processes are involved in all decisions; every process can become a bottleneck
Improvements:
- Handle failures by letting the sender time out when a process does not answer
- Require only a majority vote for access instead of permission from everyone

Token Ring Algorithm (Part I: Fundamentals)

Token Ring Algorithm
- Processes are organized in a (logical) ring configuration
- A token provides access to the resource
- The token can be held by only one node at a time
- The token circulates around the ring; a node that does not need the resource simply passes it on
- No starvation, but detecting a lost token is expensive
Image taken from (Tanenbaum & van Steen, 2006)
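A toy simulation of the ring is shown below. The thread-per-node structure, the queue-based channels, the want flags, and the fixed number of rounds are assumptions of this sketch, and lost-token detection is deliberately omitted.

```python
import threading
from queue import Queue

N = 4
inboxes = [Queue() for _ in range(N)]   # channel from each node's predecessor
want = [True, False, True, False]       # which nodes want the resource
rounds = 2                              # let the token go around the ring twice

def node(i):
    for _ in range(rounds):
        inboxes[i].get()                         # wait for the token
        if want[i]:
            print(f"node {i} enters critical region")
            want[i] = False                      # done; do not re-enter
        inboxes[(i + 1) % N].put("TOKEN")        # pass the token to the neighbor

threads = [threading.Thread(target=node, args=(i,)) for i in range(N)]
for t in threads: t.start()
inboxes[0].put("TOKEN")                          # inject the single token
for t in threads: t.join()
```

Because there is exactly one token, at most one node can be in its critical region at a time, and every node sees the token once per lap, so nobody starves.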

Summary and Comparison (Part I: Fundamentals)

Summary and Comparison

Messages per entry. The centralized algorithm is simplest and also most efficient. It requires only three messages to enter and leave a critical region: a request, a grant to enter, and a release to exit. In the decentralized case, these messages must be exchanged with each of the m coordinators, and several attempts may be needed (captured by the variable k), giving 3mk messages. The distributed algorithm requires n - 1 request messages, one to each of the other processes, and an additional n - 1 grant messages, for a total of 2(n - 1). (We assume that only point-to-point communication channels are used.) With the token ring algorithm, the number is variable: if every process constantly wants to enter a critical region, each token pass results in one entry and exit, for an average of one message per critical region entered.

Delay before entry. The delay from the moment a process needs to enter a critical region until its actual entry also varies across the algorithms. When the time using a resource is short, the dominant factor in the delay is the mechanism for accessing the resource; when resources are used for long periods, the dominant factor is waiting for everyone else to take their turn. Fig. 6-17 shows the former case: it takes only two message times to enter a critical region in the centralized case, but 3mk message times in the decentralized case, where k is the number of attempts that need to be made. Assuming that messages are sent one after the other, 2(n - 1) message times are needed in the distributed case. For the token ring, the time varies from 0 (the token just arrived) to n - 1 message times (the token just departed).

Problems. Finally, all algorithms except the decentralized one suffer badly in the event of crashes. Special measures and additional complexity must be introduced to avoid having a crash bring down the entire system. It is ironic that the distributed algorithm is even more sensitive to crashes than the centralized one. In a system designed to be fault tolerant, none of these would be suitable, but if crashes are very infrequent, they might do. The decentralized algorithm is less sensitive to crashes, but processes may suffer from starvation, and special measures are needed to guarantee efficiency.

Image taken from (Tanenbaum & van Steen, 2006)

PART II: CURRENT WORK
Contents: Group Mutual Exclusion, Local Mutual Exclusion

Group Mutual Exclusion
- Each critical section (CS) has a type (or group)
- Processes requesting critical sections belonging to the same group may execute their critical sections concurrently
- Processes requesting critical sections belonging to different groups must execute them in a mutually exclusive way

Group Mutual Exclusion: Examples
Example: the Multiple Reader/Single Writer (MRSW) problem. Let n be the number of processes; this gives n + 1 groups: all read-critical sections belong to one shared group, while each process's write-critical sections form a group of their own.
Example: a disk jukebox. CDs can only be loaded one at a time; processes can read from the same CD while it is loaded, and everyone else has to wait.
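As a rough illustration of the group rule (and of the MRSW mapping above), the following toy lock admits concurrent entries for the same group and excludes different groups. The GroupLock class and the group names are hypothetical; this is not one of the published GME algorithms, and it is not fair (a group whose members keep arriving can starve the others).

```python
import threading

class GroupLock:
    """Allows concurrent entry for the same group; different groups exclude."""
    def __init__(self):
        self._cv = threading.Condition()
        self._group = None        # group currently inside, or None
        self._count = 0           # how many processes of that group are inside

    def enter(self, group):
        with self._cv:
            while self._group is not None and self._group != group:
                self._cv.wait()   # another group holds the critical sections
            self._group = group
            self._count += 1

    def leave(self):
        with self._cv:
            self._count -= 1
            if self._count == 0:
                self._group = None    # last one out lets other groups in
                self._cv.notify_all()

# MRSW mapping: all readers share the single "read" group, each writer its own.
lock = GroupLock()
lock.enter("read"); lock.enter("read")   # two readers may be inside together
lock.leave(); lock.leave()
lock.enter("write-3")                    # a writer excludes everyone else
lock.leave()
```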

Group Mutual Exclusion: Algorithms
Many algorithms have been proposed for solving GME, for example:
- Jayanti et al. (2003): the Fair Group Mutual Exclusion problem
- Mittal and Mohan (2007): group mutual exclusion when group access is not uniform (priority-based)

Local Mutual Exclusion (Part II: Current Work)

Local Mutual Exclusion
- Local mutual exclusion requires that a node executes its critical section in mutual exclusion only with respect to its neighboring nodes (Attiya et al., 2010)
- Less restrictive than the general mutual exclusion problem
- Intended for mobile ad hoc networks (e.g., data collection from a particular region)
- The locality of a failure is important
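One way to picture local mutual exclusion: a node only contends with the nodes it shares an edge with, so non-adjacent nodes may be in their critical sections at the same time. The sketch below uses one lock per edge, acquired in a fixed global order; the graph, the helper names, and the locking scheme are illustrative assumptions, not the algorithm of Attiya et al. (2010).

```python
import threading

# Node 0 neighbors 1 and 2; node 3 is isolated and never waits for anyone.
neighbors = {0: [1, 2], 1: [0], 2: [0], 3: []}
edge_locks = {}

def edge(u, v):
    key = (min(u, v), max(u, v))                  # one lock per undirected edge
    return edge_locks.setdefault(key, threading.Lock())

def critical_section(node):
    # Acquire the lock of every incident edge, in a fixed global order (by id)
    # to avoid deadlock; nodes sharing no edge never contend with each other.
    locks = sorted((edge(node, nb) for nb in neighbors[node]), key=id)
    for l in locks:
        l.acquire()
    try:
        print(f"node {node} in its critical section")
    finally:
        for l in reversed(locks):
            l.release()

threads = [threading.Thread(target=critical_section, args=(i,)) for i in neighbors]
for t in threads: t.start()
for t in threads: t.join()
```

Here nodes 1 and 2 can run concurrently (they are not neighbors), while node 0 excludes both of them, which is exactly the weaker guarantee local mutual exclusion asks for.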

PART III: TRENDS AND FUTURE WORK
Contents: Trends, A Futuristic Scenario

Trends
- Combinations of mutual exclusion concepts: Local Mutual Exclusion, Group Mutual Exclusion, k-Mutual Exclusion, Local-Group, k-Local, etc.
- The trend is to introduce a useful property such as proximity, fault tolerance, or leader selection

Problem
Solve mutual exclusion only for a certain group of tasks and create a self-correcting system. Read-write conflicts are what make mutual exclusion necessary in the first place. The self-correcting system should have the following properties:
- Logs every access and update
- Commands the affected nodes to return to their previous state, or otherwise recovers them
- Goes back and corrects all affected nodes

Example: a credit card account with a balance of -$100. The interest rate is 10%, and the owner pays $100 just before the interest is applied. If the balance is not paid on time, the credit card is blocked. The updates (logged and applied, in this order) are:
A: Balance <- Balance + 100
B: if (Balance < 0) then Balance <- Balance * 1.1
C: if (Balance < 0) then block the card

When no mutual exclusion is enforced, update B can be executed before update A, producing an incorrect result. The correcting system should be able to log everything and repair the state when operations are not performed in order: it should go back to the initial state where the balance is -$100, re-apply the updates in order, and correct the whole system accordingly. This correction should happen frequently enough that no effects are observable in real life.
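The two orderings can be checked with a few lines of arithmetic; the function names below are purely illustrative.

```python
def pay(balance):                 # A: owner pays $100
    return balance + 100

def apply_interest(balance):      # B: 10% interest if the balance is negative
    return balance * 1.1 if balance < 0 else balance

def is_blocked(balance):          # C: card blocked if the balance is still negative
    return balance < 0

start = -100

in_order = apply_interest(pay(start))          # A then B: -100 -> 0 -> 0
print(in_order, is_blocked(in_order))          # 0, card stays active

out_of_order = pay(apply_interest(start))      # B then A: -100 -> -110 -> -10
print(out_of_order, is_blocked(out_of_order))  # about -10, card wrongly blocked
```

Executed in order, the balance ends at $0 and the card stays active; executed out of order, the balance ends at roughly -$10 and the card is wrongly blocked, which is exactly the inconsistency the self-correcting system would have to detect and roll back.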

References
Attiya, H., Kogan, A., & Welch, J. L. (2010). Efficient and robust local mutual exclusion in mobile ad hoc networks. IEEE Transactions on Mobile Computing, 9(3), 361-375.
Jayanti, P., Petrovic, S., & Tan, K. (2003). Fair group mutual exclusion. In Proceedings of the Twenty-Second Annual Symposium on Principles of Distributed Computing. ACM.
Lin, S. D., Lian, Q., Chen, M., & Zhang, Z. (2005). A practical distributed mutual exclusion protocol in dynamic peer-to-peer systems. In Peer-to-Peer Systems III (pp. 11-21). Springer Berlin Heidelberg.
Mittal, N., & Mohan, P. K. (2007). A priority-based distributed group mutual exclusion algorithm when group access is non-uniform. Journal of Parallel and Distributed Computing, 67(7), 797-815.
van Steen, M., & Tanenbaum, A. (2001). Distributed Systems: Principles and Paradigms. Vrije Universiteit Amsterdam, Holland.