Chapter 2/6
–Critical Section Problem / Mutual Exclusion: requirements are mutual exclusion, progress, and bounded waiting
–Hardware Solution: disable interrupts – problems?
–Software Solution: busy waiting?
  –Tokens
  –Bakery algorithm
  –Special instructions (atomic test-and-set) – a spin-lock sketch follows below
Semaphores
Monitors
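
To make the "special instructions (atomic test-and-set)" bullet concrete, here is a minimal busy-wait (spin) lock sketch built on an atomic compare-and-swap. It is illustrative only; the TASLock type and the demo in main are not from the slides.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// TASLock is a busy-wait lock built on an atomic test-and-set
// (compare-and-swap) instruction.
type TASLock struct {
	state int32 // 0 = unlocked, 1 = locked
}

// Lock spins until it atomically flips state from 0 to 1.
func (l *TASLock) Lock() {
	for !atomic.CompareAndSwapInt32(&l.state, 0, 1) {
		// busy wait
	}
}

// Unlock releases the lock.
func (l *TASLock) Unlock() {
	atomic.StoreInt32(&l.state, 0)
}

func main() {
	var lock TASLock
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				lock.Lock()
				counter++ // critical section: protected by the spin lock
				lock.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 4000 with the lock in place
}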

Other Synchronization Problems
–Dining Philosophers
–Producer/Consumer
–Readers/Writers
  –reader's priority, writer's priority

Readers/Writers with Reader Priority

Reader:
  P(mutex);
  if (nr == 0) {          // first reader locks out writers
      nr++;
      P(notaccessed);
  } else
      nr++;
  V(mutex);
  // ... read operations ...
  P(mutex);
  nr--;
  if (nr == 0)            // last reader lets writers back in
      V(notaccessed);
  V(mutex);

Writer:
  P(exclw);               // at most one writer competes for notaccessed
  P(notaccessed);
  // ... write operations ...
  V(notaccessed);
  V(exclw);

Serializers
–Monitor problems:
  If the monitor encapsulates the resource, concurrency is reduced even where it would be possible.
  If the resource is outside the monitor, rogue processes can bypass the monitor.
–Serializers try to avoid this:
  They are still an ADT with defined operations that encapsulate data and enforce mutual exclusion.
  Procedures may have "hollow" regions where they allow other processes to access the serializer.
  –joincrowd (crowdid) then body end
  –enqueue (prio, qname) until (condition)
  All events that gain and release the serializer are totally ordered.

Serializer to solve Readers/Writers

Read:
  enqueue (rq) until empty(wcrowd)       // wait while any writer is in the write crowd
  joincrowd (rcrowd) then
      // read operation
  end

Write:
  enqueue (wq) until ( empty(wcrowd) && empty(rcrowd) && empty(rq) )
  joincrowd (wcrowd) then
      // write operation
  end

Path Expressions
–Define the set of "valid" execution histories of the operations:
  Sequencing: a;b – a precedes b, no concurrency between them.
  Selection: a+b – either a or b is executed, but not both at the same time.
  Concurrency: {a} – any number of instances of a may execute at the same time.
–path {read} + write end gives a weak reader's-priority solution: any number of concurrent reads, or a single write at a time.

CSP (Communicating Sequential Processes)
–P2?v – input command: receive a value from process P2 into variable v.
–P1!10 – output command: send the value 10 to process P1.
–An input and an output command are synchronized (and communicate) if they name each other as source/destination and the types match.
–G -> CL – execute the commands in list CL if guard G is true.
–Alternative command – execute one of the choices whose guard is true:
  [ G1 -> CL1 [] G2 -> CL2 [] ... [] Gn -> CLn ]
–Repetitive command *[Alternative] – repeat until all guards are false.
  (A sketch of these constructs using channels follows below.)
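
CSP-style synchronized message passing and the guarded alternative command map fairly directly onto Go channels and select. The sketch below is illustrative only (the process names and values are invented, not from the slides): unbuffered channel operations play the role of P1!10 / P2?v, and select plays the role of the alternative command.

package main

import "fmt"

func main() {
	// Unbuffered channels give CSP-style synchronized communication:
	// a send ("P1 ! 10") and the matching receive ("P2 ? v")
	// only complete together.
	ch1 := make(chan int)
	ch2 := make(chan int)

	go func() { ch1 <- 10 }() // plays the role of "! 10" from one process
	go func() { ch2 <- 20 }() // another process sending 20

	// Alternative command: pick one ready "guard" (channel receive).
	for i := 0; i < 2; i++ {
		select {
		case v := <-ch1: // like "P1 ? v -> CL1"
			fmt.Println("received from P1:", v)
		case v := <-ch2: // like "P2 ? v -> CL2"
			fmt.Println("received from P2:", v)
		}
	}
}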

Ch 6 – Distributed Mutual Exclusion
–In a distributed system, a site is in one of three states: requesting the CS, executing the CS, or neither.
–Requirements for solutions: deadlock-free, starvation-free, fair, fault-tolerant.
–Performance metrics (stated under given loading conditions):
  Number of messages needed per CS execution.
  Synchronization delay – time between one site leaving the CS and the next site entering it.
  Response time – time interval between a site issuing its CS request and the end of its CS execution.
  Throughput – rate at which the system executes the CS: 1 / (synchronization delay + CS execution time).
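
As a quick worked example with assumed numbers (not from the slides): if the synchronization delay is 2 ms and each CS execution takes 8 ms, the maximum throughput is 1 / (0.002 s + 0.008 s) = 100 CS executions per second.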

Solutions
–Centralized approach: make a single site responsible for granting permission.
  Needs only 3 messages per CS execution (which 3?)
  Single point of failure, load concentrated on the central site, 2T synchronization delay.
–Lamport's algorithm (non-token-based, assumes FIFO message delivery):
  When Si needs the CS, it sends REQ(tsi, i) to all sites in its request set and places the request in its own request queue.
  A site Sj that receives the request places it in its own queue and sends a timestamped REPLY message.
  Si can enter the CS when
  –its request is at the top of its own queue, and
  –it has received from every site it sent the request to a message with timestamp greater than the timestamp of its request.
  Upon exiting the CS, Si removes its request and sends a RELEASE message to all sites; each receiving site dequeues the request as well.
  (A sketch of the entry check follows below.)
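
A minimal sketch of the entry test in Lamport's algorithm, assuming each site keeps its request queue sorted by (timestamp, site id) and records the timestamp of the last message received from every other site. Type and field names here are illustrative, not part of the algorithm's definition.

package lamport

// Req is a timestamped critical-section request.
type Req struct {
	TS   int // Lamport timestamp
	Site int // requesting site id
}

// less orders requests by (timestamp, site id); ties break on site id.
// The request queue is assumed to be kept sorted with this ordering.
func less(a, b Req) bool {
	return a.TS < b.TS || (a.TS == b.TS && a.Site < b.Site)
}

// canEnter reports whether site `me` may enter the CS: its own request
// heads the sorted queue, and every other site has been heard from with
// a timestamp larger than the request's own timestamp.
func canEnter(me int, myReq Req, queue []Req, lastSeen map[int]int) bool {
	if len(queue) == 0 || queue[0].Site != me {
		return false // our request is not at the head of the queue
	}
	for site, ts := range lastSeen {
		if site != me && ts <= myReq.TS {
			return false // still waiting for a later-stamped message from this site
		}
	}
	return true
}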

Does it work?
–Can prove mutual exclusion by contradiction: suppose two sites are in the CS at the same time. Then one of them entered the CS even though a request with a lower timestamp from the other was in its queue, contradicting the entry conditions.
–Requires 3(N-1) messages per CS; synchronization delay is T.
–Improvement – Ricart-Agrawala algorithm:
  A request is sent just as in Lamport's algorithm.
  On receiving a request, a site sends a REPLY if it is neither executing nor requesting the CS. Otherwise the timestamps are compared: a REPLY is sent if the received timestamp is lower than the local one; otherwise the reply is deferred.
  Enter the CS when a REPLY has been received from all other sites; this needs only 2(N-1) messages per CS.
  Upon exiting the CS, send REPLYs to all deferred sites.
–Note that once a site has clearance to enter the CS, it can do so repeatedly as long as it has not sent back its deferred replies.
  (A sketch of the request handler follows below.)
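
A minimal sketch of the receive-REQUEST decision and the exit step in Ricart-Agrawala, assuming each site tracks whether it is requesting or executing the CS and the timestamp of its outstanding request. Struct and method names are illustrative.

package ricartagrawala

// Site holds the per-site state used by the algorithm (illustrative).
type Site struct {
	ID         int
	Requesting bool  // has an outstanding CS request
	InCS       bool  // currently executing the CS
	MyTS       int   // timestamp of the outstanding request
	Deferred   []int // sites whose replies are being held back
}

// onRequest handles REQUEST(ts, from) and reports whether a REPLY
// should be sent immediately; otherwise the reply is deferred.
func (s *Site) onRequest(ts, from int) bool {
	if !s.InCS && !s.Requesting {
		return true // not competing for the CS: always reply
	}
	// Defer if we are in the CS, or our own request has priority
	// (lower timestamp, ties broken by smaller site id).
	if s.InCS || s.MyTS < ts || (s.MyTS == ts && s.ID < from) {
		s.Deferred = append(s.Deferred, from)
		return false
	}
	return true // the incoming request has priority: reply now
}

// onExitCS leaves the CS and returns the sites to send deferred REPLYs to.
func (s *Site) onExitCS() []int {
	s.InCS = false
	s.Requesting = false
	d := s.Deferred
	s.Deferred = nil
	return d
}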

Maekawa's Algorithm
–Each site's request set (quorum) is constructed so that:
  The intersection of the request sets of any pair of sites is non-empty.
  Each site is in its own request set.
  The request set size is K for every site.
  Each site is contained in K request sets (K ≈ sqrt(N)).
  (A sketch of a simple grid-based construction follows below.)
–To request:
  Site Si sends REQ(i) to all sites in its request set.
  On receiving the request, Sj sends REPLY(j) if it has not sent a reply to anyone since it received the last RELEASE; otherwise it holds the request.
–To execute the CS: when REPLYs have been received from the entire request set.
–To release the CS:
  Send RELEASE(i) to all sites in the request set.
  When Sj gets the RELEASE message, it sends a REPLY to the next waiting request.
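
One simple way to build request sets with the pairwise-intersection property (a simplification, not Maekawa's exact construction, and it yields sets of size about 2*sqrt(N)-1 rather than sqrt(N)) is to arrange the N sites in a sqrt(N) x sqrt(N) grid and take each site's row plus its column. A sketch, assuming N is a perfect square:

package maekawa

import "math"

// gridQuorum returns the request set of site id (0..n-1), built from a
// k x k grid with k = sqrt(n): the site's whole row plus its whole
// column. Any two such sets share at least one site. Assumes n is a
// perfect square.
func gridQuorum(id, n int) []int {
	k := int(math.Sqrt(float64(n)))
	row, col := id/k, id%k
	set := map[int]bool{id: true} // a site is always in its own request set
	for c := 0; c < k; c++ {
		set[row*k+c] = true // every site in my row
	}
	for r := 0; r < k; r++ {
		set[r*k+col] = true // every site in my column
	}
	quorum := make([]int, 0, len(set))
	for s := range set {
		quorum = append(quorum, s)
	}
	return quorum
}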

–Needs 3*sqrt(N) messages per CS; synchronization delay is 2T.
–Problem – deadlock can occur:
  Imagine three sites, each requesting the CS, whose request sets lock one another in a cycle.
–Solution – prioritize requests using timestamps and do some extra processing; essentially, eliminate circular wait.
  A site sends a FAILED message if it cannot honor a request.
  If a site is locked but receives a request from a site with higher priority, it sends an INQUIRE to the locking site to see whether the lock can be released.
  Message traffic is now up to 5*sqrt(N) per CS.

Token-Based Algorithms
–Suzuki-Kasami broadcast algorithm:
  A site needs the token to enter the CS; the site possessing the token can enter the CS repeatedly.
  Each site Si keeps an array RNi where RNi[j] is the largest request sequence number received so far from site Sj.
  The token itself carries an array LN, where LN[j] is the sequence number of Sj's most recently executed request, and a queue Q of requesting sites.
–Request:
  If the requesting site Si does not have the token, it increments RNi[i] and sends REQ(i, RNi[i]) to all other sites.
  When Sj receives this, it updates RNj[i]; if Sj holds an idle token, it sends the token to Si.
–The CS is executed when the token is received.
–Release:
  Set LN[i] to RNi[i].
  Every site Sj with RNi[j] = LN[j] + 1 (an outstanding request) is appended to the token queue Q.
  If the token queue is non-empty, delete the top entry and send the token to that site. This makes the algorithm "non-symmetric".
  (A sketch of the release step follows below.)
–Messages per CS: 0 or N; synchronization delay: 0 or T.
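
A minimal sketch of the release step, using the RN/LN arrays and token queue described above. The type and function names are illustrative, and message sending is abstracted into a returned destination.

package suzukikasami

// Token is the single circulating token.
type Token struct {
	LN    []int // LN[j]: sequence number of Sj's most recently served request
	Queue []int // sites currently waiting for the token
}

// release is executed by site `me` when it leaves the CS. rn is the
// site's local RN array. It returns the site to forward the token to,
// or -1 if the token stays here idle.
func release(me int, rn []int, tok *Token) int {
	tok.LN[me] = rn[me] // my latest request has now been served

	// Append every site with an outstanding request (RN[j] = LN[j]+1)
	// that is not already queued on the token.
	queued := map[int]bool{}
	for _, s := range tok.Queue {
		queued[s] = true
	}
	for j := range rn {
		if j != me && rn[j] == tok.LN[j]+1 && !queued[j] {
			tok.Queue = append(tok.Queue, j)
		}
	}

	// Hand the token to the site at the head of the queue, if any.
	if len(tok.Queue) > 0 {
		next := tok.Queue[0]
		tok.Queue = tok.Queue[1:]
		return next
	}
	return -1 // no one is waiting: keep the idle token
}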

Raymond's Tree-Based Algorithm
–The site holding the token is the root of a tree. Each node has a variable holder pointing toward its parent (i.e., toward the token). Each node also has a request_q containing token requests from itself and its children.
–Request:
  To request, add yourself to your request_q, and send a REQUEST to your holder if your request_q was empty.
  When you receive a REQUEST, add it to your request_q and forward a REQUEST to your holder if you have not already sent one.
  When the root gets a REQUEST, it sends the token to the neighbor at the head of its request_q and repoints holder to that neighbor.
  When a site gets the token, it deletes the top entry from its request_q, sends the token to that neighbor, and repoints holder; if its request_q is still non-empty, it also sends a REQUEST to the new holder.
–Execute: when you hold the token and your own request is at the top of your request_q.
–Release:
  If the request_q is non-empty, delete the top entry, send the token to that neighbor, and repoint holder.
  If the request_q is still non-empty, send a REQUEST to the new holder.
  (A sketch of the token-handling step follows below.)
–Message complexity is O(log N); synchronization delay is (T log N) / 2.
Do Section 6.14.
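
A minimal sketch of what a node does when the token arrives, following the rules above. The struct and field names are illustrative, and message sending is abstracted into returned instructions.

package raymond

// Node is one site in Raymond's tree (illustrative).
type Node struct {
	ID       int
	Holder   int   // neighbor in the direction of the token (own ID at the root)
	ReqQ     []int // pending requests: own ID and/or ids of children
	AskedFor bool  // whether a REQUEST has already been sent toward the token
}

// onToken is called when the token arrives at this node. It returns
// (enterCS, tokenTo, requestTo):
//   enterCS   – true if this node may enter the CS now
//   tokenTo   – neighbor to forward the token to, or -1
//   requestTo – neighbor to send a follow-up REQUEST to, or -1
func (n *Node) onToken() (enterCS bool, tokenTo, requestTo int) {
	n.AskedFor = false
	if len(n.ReqQ) == 0 {
		return false, -1, -1 // defensive: the token only arrives on request
	}
	head := n.ReqQ[0]
	n.ReqQ = n.ReqQ[1:]

	if head == n.ID {
		// Our own request heads the queue: keep the token, enter the CS.
		n.Holder = n.ID
		return true, -1, -1
	}
	// Forward the token toward the requesting child and repoint holder.
	n.Holder = head
	requestTo = -1
	if len(n.ReqQ) > 0 {
		// Still have pending requests: ask the new holder for the token back.
		n.AskedFor = true
		requestTo = head
	}
	return false, head, requestTo
}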