Distributed Transaction Management – 2003, Lecture 3 (4.11.), Jyrki Nummenmaa


Theorem: The snapshot algorithm selects a consistent cut
- Proof. Let e_i and e_j be events that occurred on sites S_i and S_j, with e_j <_H e_i. Assume that e_i is in the cut produced by the snapshot algorithm. We want to show that e_j is also in the cut. If S_i = S_j, this is clear. Assume that S_i ≠ S_j, and assume, for contradiction, that e_j is not in the cut. Consider the sequence of messages m_1, m_2, ..., m_k through which e_j <_H e_i. Since e_j is not in the cut, S_j recorded its state before e_j, and therefore before sending m_1. By the way snap markers are sent and received, a marker precedes each of m_1, m_2, ..., m_k on its channel, so a marker reaches S_i before m_k. S_i has therefore recorded its state before receiving m_k, and hence before e_i. Therefore, e_i is not in the cut. This is a contradiction, and we are done.

A picture relating to the theorem
[Figure: sites S_i and S_j with a chain of messages m_1, m_2, ..., m_k establishing e_j <_H e_i. Assuming for contradiction that e_j is not in the cut leads to the conclusion that S_i must have recorded its state before e_i, so e_i cannot be in the cut, a contradiction.]

Some observations on the example LockManager implementation

Example lock management implementation – concurrency
- A lock manager uses threads to serve incoming connections.
- As threads access the lock manager's services concurrently, those methods are qualified as synchronized. This may actually be overkill, as it seems to be enough to synchronize the public methods (xlock/slock/unlock). I will still have a look at that.
- An interactive client uses a single thread, but this is just to access the sleep method. There is no concurrent data access and consequently no synchronized methods.
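
The following is a minimal Java sketch of a lock manager whose public methods (xlock/slock/unlock) are synchronized, as discussed above. Only the method names come from the slides; the integer ids, the data structures and the wait/notify scheme are illustrative assumptions, and re-entrant locking and lock upgrades are not handled.

```java
// Minimal sketch of a lock manager with synchronized public methods.
// Re-entrant locking and lock upgrades are deliberately not handled here.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LockManager {
    // item id -> set of transaction ids holding a shared lock
    private final Map<Integer, Set<Integer>> sharedHolders = new HashMap<>();
    // item id -> transaction id holding the exclusive lock
    private final Map<Integer, Integer> exclusiveHolder = new HashMap<>();

    // Exclusive lock: wait until no txn holds any lock on the item.
    public synchronized void xlock(int item, int txn) throws InterruptedException {
        while (exclusiveHolder.containsKey(item)
                || !sharedHolders.getOrDefault(item, Set.of()).isEmpty()) {
            wait();
        }
        exclusiveHolder.put(item, txn);
    }

    // Shared lock: wait until no txn holds an exclusive lock on the item.
    public synchronized void slock(int item, int txn) throws InterruptedException {
        while (exclusiveHolder.containsKey(item)) {
            wait();
        }
        sharedHolders.computeIfAbsent(item, k -> new HashSet<>()).add(txn);
    }

    // Release whatever lock the txn holds on the item and wake up waiters.
    public synchronized void unlock(int item, int txn) {
        if (Integer.valueOf(txn).equals(exclusiveHolder.get(item))) {
            exclusiveHolder.remove(item);
        }
        Set<Integer> holders = sharedHolders.get(item);
        if (holders != null) {
            holders.remove(txn);
        }
        notifyAll();
    }
}
```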

Example lock management implementation – input/output
- Each LMMultiServerThread talks with only one client.
- Each client talks to several servers (LMMultiServerThreads).
- An interactive client also listens to a client controller, but it does not read standard input (this would block, or if you just check input with .ready() then user input is not echoed on the screen :(
- A client controller only reads standard input and sends the messages to an interactive client.

Example lock management implementation – server efficiency
- It would be more efficient to pick threads from a pool of idle threads rather than always creating a new one.
- The data structures are not optimized. As there is typically a large number of data items and only an unpredictable number of them are locked at any one moment, a hashing data structure might be suitable for the lock table (like HashTable, but a hash function must exist for the key data type; e.g. Integer is ok).
- It would be ok to attach a lock request queue (e.g. a linked list) to each locked item in the lock table with a direct reference, as sketched below.
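
A possible shape for such a lock table is sketched here: a hash map keyed by item id, where each entry holds a direct reference to that item's request queue. The class and field names are assumptions made for illustration, not the course code.

```java
// Sketch of a hashed lock table where each locked item carries its own
// FIFO request queue, reachable directly from the table entry.
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

class LockRequest {
    final int txnId;
    final boolean exclusive;
    LockRequest(int txnId, boolean exclusive) {
        this.txnId = txnId;
        this.exclusive = exclusive;
    }
}

class LockEntry {
    // Requests currently waiting for this item, in arrival order.
    final Queue<LockRequest> waiters = new ArrayDeque<>();
}

class LockTable {
    // Only items that are currently locked (or waited on) have an entry,
    // so the table stays small even if the database is large.
    private final Map<Integer, LockEntry> table = new HashMap<>();

    LockEntry entryFor(int item) {
        return table.computeIfAbsent(item, k -> new LockEntry());
    }

    void removeIfIdle(int item) {
        LockEntry e = table.get(item);
        if (e != null && e.waiters.isEmpty()) {
            table.remove(item);
        }
    }
}
```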

Example lock management implementation – txn ids
- Currently integers are used.
- Giving out global txn ids sequentially (over all sites) is either complicated or needs a centralised server -> it is better to construct txn ids from a local (site-id, local-txn-id) pair. If the sites are known in advance, they can be numbered.
- We only care about txns accessing some resource manager's services, and there may be a fixed set of them -> ask the first server we talk to for an id (server-id, local-txn-id).
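
A sketch of the (site-id, local-txn-id) scheme follows; the class names and the use of an AtomicInteger counter are illustrative assumptions.

```java
// Sketch of a globally unique transaction id built from a (site-id,
// local-txn-id) pair, as suggested on the slide above.
import java.util.Objects;
import java.util.concurrent.atomic.AtomicInteger;

final class TxnId {
    final int siteId;
    final int localId;

    TxnId(int siteId, int localId) {
        this.siteId = siteId;
        this.localId = localId;
    }

    @Override public boolean equals(Object o) {
        return o instanceof TxnId
                && ((TxnId) o).siteId == siteId
                && ((TxnId) o).localId == localId;
    }

    @Override public int hashCode() {
        return Objects.hash(siteId, localId);
    }

    @Override public String toString() {
        return siteId + "." + localId;
    }
}

final class TxnIdFactory {
    private final int siteId;
    private final AtomicInteger next = new AtomicInteger(0);

    TxnIdFactory(int siteId) {
        this.siteId = siteId;
    }

    // Each site hands out local ids independently; global uniqueness
    // comes from pairing them with the site id.
    TxnId newId() {
        return new TxnId(siteId, next.incrementAndGet());
    }
}
```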

How are distributed txns born?
- Distributed database management systems: a client contacts the local server, which takes care of transparent distribution – the client does not need to know about the other sites.
- Alternative: a client contacts all local resource managers directly.
- Alternative: separate client programs are started on all respective sites.
- Typically the client executing on the first site acts as a coordinator communicating with the other clients.

Distributed Commit

Ending the transaction
- Finally, a txn is to either commit, in which case its updates are made permanent on all sites, or be rolled back, in which case none of its updates are made permanent.
- The servers must agree on the fate of the transaction. For this, the servers negotiate (vote).
- The local servers participating in the commit process are called participants.

Failure model – servers / 1
- In our work on atomic commitment protocols, we make the assumption that a server is either working correctly or not at all.
- In our model, a server never performs incorrect actions. Of course, in reality servers do produce incorrect behaviour.
- A partial failure is a situation where some servers are operational while others are down.
- Partial failures can be tricky because operational servers may be uncertain about the status of failed servers.
- The operational servers may become blocked, i.e. unable to commit or roll back a txn until such uncertainty is resolved.

Failure model – servers / 2
- The reason why partial failures are so tricky to handle is that operational servers may be uncertain about the state of the servers on failed sites. An operational server may become blocked, unable to commit or roll back the txn. Thus it will have to hold locks on the data items that have been accessed by the txn, thereby preventing other txns from progressing.
- Not surprisingly, an important design goal for atomic commitment protocols is to minimise the effect of one site's failure on another site's ability to continue processing.
- A crashed site may recover.

Failure model – network
- Messages may get lost.
- Messages may be delayed.
- Messages are not changed.
- Communication between two sites may not be possible for some time. This is called network partitioning, and this assumption is sometimes relaxed, since it creates hard problems (more on this later).
- The network partitioning may get fixed.

Detecting Failures by Timeouts
- We assume that the only way that server A can discover that it cannot communicate with server B is through the use of timeouts.
- A sends a message to B and waits for a reply within the timeout period.
- If a reply arrives then clearly A and B can communicate.
- If the timeout period elapses and A has not received a reply, then A concludes that it cannot communicate with B.

Detecting Failures by Timeouts
- Choosing a reasonable value for the timeout period can be tricky.
- The actual time that it takes to communicate will depend on many hard-to-quantify variables, including the physical characteristics of the servers and the communication lines, the system load and the message routing techniques.
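
As an illustration of timeout-based failure detection, the following sketch uses a plain socket read timeout; the host, port, message contents and timeout value are all assumptions, not part of the course code.

```java
// Minimal sketch of timeout-based failure detection: A sends a request to B
// and, if no reply arrives within the timeout period, concludes that it
// cannot communicate with B.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutProbe {
    public static boolean canCommunicate(String host, int port, int timeoutMillis) {
        try (Socket s = new Socket(host, port)) {
            s.setSoTimeout(timeoutMillis);   // reads now give up after the timeout
            PrintWriter out = new PrintWriter(s.getOutputStream(), true);
            BufferedReader in =
                new BufferedReader(new InputStreamReader(s.getInputStream()));
            out.println("PING");
            String reply = in.readLine();    // blocks for at most timeoutMillis
            return reply != null;
        } catch (SocketTimeoutException e) {
            return false;                    // no reply within the timeout
        } catch (Exception e) {
            return false;                    // connection failure etc.
        }
    }
}
```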

An atomic commit protocol
- An atomic commit protocol (ACP) ensures consistent termination even in the presence of partial failures.
- It is not enough if the coordinator just tells the other participants to commit.
- For example, one of the participants may decide to roll back the txn.
- This fact must be communicated to the other participants because they, too, must roll back.

An atomic commit protocol
- An atomic commit protocol is an algorithm for the coordinator and participants such that either the coordinator and all of the participants commit the txn or they all roll it back.
- The coordinator has to find out if the participants can all commit the txn. It asks the participants to vote on this.
- The ACP should have the following properties.

Requirements for distributed commit (1-4)
- (1) A participant can vote Yes or No and may not change the vote.
- (2) A participant can decide either Abort or Commit and may not change it.
- (3) If any participant votes No, then the global decision must be Abort.
- (4) It must never happen that one participant decides Abort and another decides Commit.

Requirements for distributed commit (5-6)
- (5) All participants that execute sufficiently long must eventually decide, regardless of whether they have failed earlier or not.
- (6) If there are no failures or suspected failures and all participants vote Yes, then the decision must not be Abort. The participants are not allowed to create artificial failures or suspicion.

Observations
- Having voted No, a participant may unilaterally roll back. (One No vote implies global rollback.)
- However, having voted Yes, a participant may not unilaterally commit. (Somebody else may have voted No and then rolled back.)
- Note that it is possible that all participants vote to Commit, and yet the decision is to Rollback.

Observations
- Note that condition (4) does not require that all participants reach a decision: some servers may fail and never recover.
- We do not even require that all participating servers that remain operational reach a decision.
- However, we do require that all participating servers be able to reach a decision once failures are repaired.

Uncertainty period
- The period between voting to commit and receiving information about the overall decision is called the uncertainty period for the participating server.
- During that period, we say that the participant is uncertain.
- Suppose that a failure in the interconnection network disables communication between a participant, P, and all other participants and the coordinator, while P is uncertain. Then P cannot reach a decision until after the network failure has been repaired.

Distributed Two-Phase Commit Protocol (2PC)

Basic 2PC
- Many DBMS products use the two-phase commit protocol (2PC) as the process by which txns that use multiple servers commit in a consistent manner.
- Typically, the majority of txns in an application do not use multiple servers. Such txns are unaffected by 2PC.
- Txns that write-access data on multiple servers commit in two phases: a voting phase and a decision phase.

Basic 2PC
- First, all of the servers vote on the txn, indicating whether or not they are able to commit.
- A server may vote No if, for example, it has run out of disk space or if it must pre-emptively roll back the txn to break a deadlock.
- If all of the votes are Yes, the decision is made to commit, and all of the servers are told to commit.

2PC for Distributed Commit
[Figure: the coordinator sends Vote-Request to the participants, the participants reply with their Yes or No votes, and the coordinator multicasts the decision: Commit if all voted Yes, otherwise Abort.]

2PC Coordinator
- Initiates voting by sending Vote-Req to all participants and enters a wait-state.
- Having received one No, or having timed out in the wait-state, decides Abort and sends the decision to all participants.
- Having received a Yes from all participants while in the wait-state (before timing out), decides Commit and sends the decision to all participants.
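
The coordinator's behaviour can be sketched as follows. The Messenger interface, the message strings and the method names are illustrative assumptions; the log writes described on later slides are omitted here.

```java
// Sketch of the 2PC coordinator described above: send Vote-Req to every
// participant, collect votes with a timeout, then multicast Commit or Abort.
import java.util.List;
import java.util.concurrent.TimeoutException;

interface Messenger {
    void send(String to, String message);
    // Blocks until a message arrives from 'from' or the timeout elapses.
    String receive(String from, long timeoutMillis) throws TimeoutException;
}

class Coordinator {
    private final Messenger net;
    private final List<String> participants;
    private final long timeoutMillis;

    Coordinator(Messenger net, List<String> participants, long timeoutMillis) {
        this.net = net;
        this.participants = participants;
        this.timeoutMillis = timeoutMillis;
    }

    String runCommitProtocol() {
        // Phase 1: initiate voting and enter the wait-state.
        for (String p : participants) {
            net.send(p, "VOTE-REQ");
        }
        boolean allYes = true;
        for (String p : participants) {
            try {
                // One No vote, or a timeout in the wait-state, means Abort.
                if (!"YES".equals(net.receive(p, timeoutMillis))) {
                    allYes = false;
                    break;
                }
            } catch (TimeoutException e) {
                allYes = false;
                break;
            }
        }
        // Phase 2: multicast the decision to all participants.
        String decision = allYes ? "COMMIT" : "ABORT";
        for (String p : participants) {
            net.send(p, decision);
        }
        return decision;
    }
}
```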

2PC Participant
- Having received a Vote-Req, either
  - sends a No and decides Rollback locally, or
  - sends a Yes and enters a wait-state.
- If it receives either Rollback or Commit in the wait-state, it decides accordingly. If it times out in the wait-state, it concludes that the coordinator is down and starts a termination protocol – more on this soon.
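
A matching sketch of the participant side is given below, reusing the illustrative Messenger interface from the coordinator sketch; the termination protocol itself is left abstract here and is sketched after the slide that describes it.

```java
// Sketch of the 2PC participant described above. On Vote-Req it either votes
// No and rolls back locally, or votes Yes and waits for the decision; a
// timeout in the wait-state leads to the termination protocol.
import java.util.concurrent.TimeoutException;

class Participant {
    private final Messenger net;       // illustrative interface, defined above
    private final String coordinator;
    private final long timeoutMillis;

    Participant(Messenger net, String coordinator, long timeoutMillis) {
        this.net = net;
        this.coordinator = coordinator;
        this.timeoutMillis = timeoutMillis;
    }

    String onVoteRequest(boolean ableToCommit) {
        if (!ableToCommit) {
            net.send(coordinator, "NO");
            return "ABORT";              // may decide Rollback unilaterally
        }
        net.send(coordinator, "YES");    // now uncertain until the decision arrives
        try {
            // Wait-state: decide according to the coordinator's message.
            return net.receive(coordinator, timeoutMillis);  // "COMMIT" or "ABORT"
        } catch (TimeoutException e) {
            // Coordinator presumed down: consult the other participants.
            return runTerminationProtocol();
        }
    }

    private String runTerminationProtocol() {
        // Left abstract; see the termination protocol sketch later on.
        throw new UnsupportedOperationException("termination protocol");
    }
}
```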

2PC – a timeout occurs
[Figure: the coordinator sends Vote-Request, a timeout occurs while it waits for the Yes or No votes, and it multicasts Abort to the participants.]

Centralised vs. Decentralised
- There is also a version of 2PC in which there is no coordinator and the servers all send messages to each other. This is called decentralised 2PC.
- 2PC with a coordinator is called centralised 2PC.
- We shall exemplify their behaviour with a simulation applet at

Timeout actions
- We assume that the participants and the coordinator try to detect failures with timeouts.
- If a participant times out before receiving Vote-Req, it can decide to roll back.
- If the coordinator times out before sending out the outcome of the vote to anyone, it can decide to roll back.
- If a participant times out after voting Yes, it must use a termination protocol.

Termination protocol
- If a participant times out having voted Yes, it must consult the others about the outcome. Say P is consulting Q.
- If Q has received the decision from the coordinator, it tells it to P.
- If Q has not yet voted, it can decide to Rollback.
- If Q does not know, it can't help.
- If no one can help P, P is blocked.
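
The consultation logic above can be sketched as follows; the enum values and method shapes are illustrative assumptions.

```java
// Sketch of the cooperative termination protocol: an uncertain participant P
// asks the others about the outcome; a participant that knows the decision,
// or has not yet voted, can help, otherwise P stays blocked.
import java.util.List;
import java.util.Optional;

enum TxnState { UNCERTAIN, COMMITTED, ABORTED, NOT_VOTED }

class TerminationProtocol {

    // What participant Q answers when consulted by an uncertain participant P.
    static TxnState consult(TxnState stateOfQ) {
        if (stateOfQ == TxnState.NOT_VOTED) {
            // Q has not voted yet, so it may unilaterally decide to roll back.
            return TxnState.ABORTED;
        }
        return stateOfQ;  // COMMITTED, ABORTED, or UNCERTAIN ("can't help")
    }

    // P consults every reachable participant; an empty result means P is blocked.
    static Optional<TxnState> terminate(List<TxnState> answers) {
        for (TxnState answer : answers) {
            if (answer == TxnState.COMMITTED || answer == TxnState.ABORTED) {
                return Optional.of(answer);
            }
        }
        return Optional.empty();  // no one could help: P remains blocked
    }
}
```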

Recovery
- When a crashed participant recovers, it checks its log.
- P can recover independently, if P recovers
  - having received the outcome from the coordinator,
  - not having voted, or
  - having decided to roll back.
- Otherwise, P must consult the others, similarly as in the termination protocol.
- Note: the simulation applet does not contain termination and recovery!

2PC Analysis
- (1) A participant can vote Yes or No and may not change the vote. - This should be clear.
- (2) A participant can decide either Abort or Commit and may not change it. - This should also be clear.

2PC Analysis
- (3) If any participant votes No, then the global decision must be Abort. - Note that the global decision is Commit only when everyone votes Yes, and since no one votes both Yes and No, this should be clear.
- (4) It must never happen that one participant decides Abort and another decides Commit. - Along similar lines to (3).

2PC Analysis
- (5) All participants that execute sufficiently long must eventually decide, regardless of whether they have failed earlier or not. - Only holds if we assume that failed sites always recover and stay up sufficiently long for the 2PC recovery to make progress.
- (6) If there are no failures or suspected failures and all participants vote Yes, then the decision must not be Abort. The participants are not allowed to create artificial failures or suspicion. - Ok.

2PC Analysis
- Sometimes there is interest in the number of messages exchanged. In the simplest case, 2PC involves 3N messages.
- Recovery and termination change these figures.
- However, the assumption is that most txns terminate normally.

What does the coordinator write to a log?
- When the coordinator sends Vote-Req, it writes a start-2PC record and a list containing the identities of the participants to its log. It also sends this list to each participant at the same time as the Vote-Req message.
- Before the coordinator sends Commit to the participants, it writes a commit record in the log.
- If the coordinator sends Rollback to the participants, it writes a rollback record to the log.

What does a participant write to a log?
- If a participant votes Yes, it writes a yes record to its log before sending Yes to the coordinator. This log record contains the name of the coordinator and the list of the participants.
- If the participant votes No, it writes a rollback record to the log and then sends the No vote to the coordinator.
- After receiving Commit (or Rollback), a participant writes a commit (or rollback) record into the log.
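
The write-ahead ordering above (force the yes record to the log before sending the Yes vote) can be sketched as follows; the CommitLog interface and the record strings are assumptions, and the Messenger interface is the illustrative one from the 2PC sketches.

```java
// Sketch of the participant's logging rule: the yes record (with coordinator
// and participant list) is forced to stable storage before the vote is sent.
import java.util.List;

interface CommitLog {
    void force(String record);   // append the record and flush to stable storage
}

class LoggingParticipant {
    private final CommitLog log;
    private final Messenger net;   // illustrative interface from the 2PC sketches

    LoggingParticipant(CommitLog log, Messenger net) {
        this.log = log;
        this.net = net;
    }

    void vote(boolean yes, String coordinator, List<String> participants) {
        if (yes) {
            // Log first, then send: a crash after this point leaves evidence
            // that a Yes vote may have gone out.
            log.force("yes coordinator=" + coordinator + " participants=" + participants);
            net.send(coordinator, "YES");
        } else {
            // Voting No: write the rollback record, then send the No vote.
            log.force("rollback");
            net.send(coordinator, "NO");
        }
    }

    // On receiving the decision, record it in the log as well.
    void onDecision(String decision) {        // "COMMIT" or "ABORT"
        log.force(decision.equals("COMMIT") ? "commit" : "rollback");
    }
}
```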

How to recover using the log?
- If the log of S contains a start-2PC record, then S was the host of the coordinator. If it also contains a commit or rollback record, then the coordinator had decided before the failure occurred. If neither record is found in the log, then the coordinator can now unilaterally decide to Rollback by inserting a rollback record in the log.

How to recover using the log / 2
- If the log does not contain a start-2PC record, then S was the host of a participant. There are three cases to consider:
- (1) The log contains a commit or rollback record. Then the participant had reached its decision before the failure.
- (2) The log does not contain a yes record. Then either the participant failed before voting, or it voted No (but did not write a rollback record before failing). (This is why the yes record must be written before Yes is sent.) -> Rollback

How to recover using the log / 3
- (3) The log contains a yes record but no commit or rollback record. Then the participant failed while in its uncertainty period.
- It can try to reach a decision using the termination protocol.
- A yes record includes the names of the coordinator and the participants; these are needed for the termination protocol.
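
Putting the three recovery slides together, the decision a recovering site makes from its log can be sketched as follows; the record strings are simplified to their type names and the method shape is an illustrative assumption.

```java
// Sketch of the log-based recovery decision: a recovering site inspects its
// local log and either decides on its own or falls back to the termination
// protocol. Records are matched by the type name they start with.
import java.util.List;

class LogRecovery {

    enum Outcome { ALREADY_COMMITTED, ALREADY_ABORTED, DECIDE_ROLLBACK, RUN_TERMINATION }

    static Outcome recover(List<String> logRecords) {
        boolean start2pc = has(logRecords, "start-2PC");
        boolean commit   = has(logRecords, "commit");
        boolean rollback = has(logRecords, "rollback");
        boolean yes      = has(logRecords, "yes");

        if (commit)   return Outcome.ALREADY_COMMITTED;   // decision already reached
        if (rollback) return Outcome.ALREADY_ABORTED;     // decision already reached
        if (start2pc) return Outcome.DECIDE_ROLLBACK;     // undecided coordinator
        if (!yes)     return Outcome.DECIDE_ROLLBACK;     // participant never voted Yes
        return Outcome.RUN_TERMINATION;                   // uncertain participant
    }

    private static boolean has(List<String> records, String type) {
        return records.stream().anyMatch(r -> r.startsWith(type));
    }
}
```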

Garbage collection
- Eventually, it will become necessary to garbage collect the space in the log. There are two principles to adopt:
- GC1: A site cannot delete the log records of a txn, T, from its log until at least after the resource manager has processed the commit or the rollback for T.
- GC2: At least one site must not delete the records of T from its log until that site has received messages indicating that the resource managers at all other sites have processed the commit or rollback of T.

Blocking
- We say that a participant is blocked if it must await the repair of a fault before it is able to proceed.
- A txn may remain unterminated, holding locks, for arbitrarily long periods of time at the blocked server.
- Suppose that a participant, S, fails whilst it is uncertain. When S recovers, it cannot reach a decision on its own. It must communicate with the other participants to find out what was decided.
- We say that a commitment protocol is non-blocking if it permits termination without waiting for the recovery of failed servers.

Two-Phase Commit Blocks
[Figure: (1) COMMIT is sent by the coordinator C (here it reaches only P1). (2) C crashes. (3) P1 crashes. (4) P2 is blocked.]

Blocking Theorem
- Theorem 1: If it is possible for the network to partition, then any atomic commit protocol may cause processes to become blocked.
- Sketch of proof. Assume two sites, A and B, and a protocol. Assume that the k-th message, M, is the one after which the commit decision can be made, i.e. A can commit having sent M to B, but not before. If the network partitions just before M is delivered, B is blocked. (B does not know whether A sent M and committed, or did not send it, in which case A might have decided to abort.)